Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara
The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…
Li, Yuan Hang; Tottenham, Nim
A growing literature suggests that the self-face is involved in processing the facial expressions of others. The authors experimentally activated self-face representations to assess their effects on the recognition of dynamically emerging facial expressions of others. They exposed participants to videos of either their own faces (self-face prime) or faces of others (nonself-face prime) prior to a facial expression judgment task. Their results show that experimentally activating self-face representations results in earlier recognition of dynamically emerging facial expressions. As a group, participants in the self-face prime condition recognized expressions earlier (when less affective perceptual information was available) compared to participants in the nonself-face prime condition. There were individual differences in performance, such that poorer expression identification was associated with higher autism traits (in this neurocognitively healthy sample). However, when randomized into the self-face prime condition, participants with high autism traits performed as well as those with low autism traits. Taken together, these data suggest that the ability to recognize facial expressions in others is linked with the internal representations of our own faces. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Pantic, Maja; Li, S.; Jain, A.
Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial
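The two stages enumerated above (locating faces, then extracting features from each detected region) can be sketched as a minimal pipeline skeleton. This is a hypothetical illustration only: the stub detector and the placeholder feature values stand in for whatever detector and shape/texture descriptors a real system would use.

```python
from dataclasses import dataclass
from typing import List, Tuple

Box = Tuple[int, int, int, int]  # x, y, width, height

@dataclass
class Face:
    box: Box
    features: List[float]  # e.g. landmark-derived shape measures (placeholder)

def detect_faces(image) -> List[Box]:
    # Step 1: face detection. A real system would run a cascade or CNN
    # detector over the image; this hypothetical stub returns one fixed box.
    return [(0, 0, 64, 64)]

def extract_features(image, box: Box) -> List[float]:
    # Step 2: feature extraction from the detected face region (stub values).
    return [0.0, 0.0]

def expression_pipeline(image) -> List[Face]:
    # Chain the two stages: every detected box yields one feature vector.
    return [Face(b, extract_features(image, b)) for b in detect_faces(image)]
```

A downstream classifier (the recognition stage proper) would then map each `Face.features` vector to an expression label.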
Calder, A J; Rowland, D; Young, A W; Nimmo-Smith, I; Keane, J; Perrett, D I
The physical differences between facial expressions (e.g. fear) and a reference norm (e.g. a neutral expression) were altered to produce photographic-quality caricatures. In Experiment 1, participants rated caricatures of fear, happiness and sadness for their intensity of these three emotions; a second group of participants rated how 'face-like' the caricatures appeared. With increasing levels of exaggeration the caricatures were rated as more emotionally intense, but less 'face-like'. Experiment 2 demonstrated a similar relationship between emotional intensity and level of caricature for six different facial expressions. Experiments 3 and 4 compared intensity ratings of facial expression caricatures prepared relative to a selection of reference norms - a neutral expression, an average expression, or a different facial expression (e.g. anger caricatured relative to fear). Each norm produced a linear relationship between caricature and rated intensity of emotion; this finding is inconsistent with two-dimensional models of the perceptual representation of facial expression. An exemplar-based multidimensional model is proposed as an alternative account.
This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.
Research into emotions has increased in recent decades, especially on the subject of recognition of emotions. However, studies of the facial expressions of emotion were long compromised by technical problems with visible video analysis and electromyography in experimental settings; these have only recently been overcome. There have been new developments in the field of automated computerized facial recognition, allowing real-time identification of facial expression in social environments. This review addresses three approaches to measuring facial expression of emotion and describes their specific contributions to understanding emotion in the healthy population and in persons with mental illness. Despite recent progress, studies on human emotions have been hindered by the lack of consensus on an emotion theory suited to examining the dynamic aspects of emotion and its expression. Studying expression of emotion in patients with mental health conditions for diagnostic and therapeutic purposes will profit from theoretical and methodological progress.
Michaud, Thierry; Gassia, Véronique; Belhaouari, Lakhdar
Facial expressions convey emotions that form the foundation of interpersonal relationships, and many of these emotions promote and regulate our social linkages. Hence, the symptomatological analysis of facial aging and the treatment plan must necessarily include knowledge of facial dynamics and of the emotional expressions of the face. This approach aims to more closely meet patients' expectations of natural-looking results, by correcting age-related negative expressions while observing the emotional language of the face. This article will successively describe patients' expectations, the role of facial expressions in relational dynamics, the relationship between facial structures and facial expressions, and the way facial aging mimics negative expressions. Finally, therapeutic implications for facial aging treatment will be addressed. © 2015 Wiley Periodicals, Inc.
Theobald, Barry-John; Matthews, Iain; Mangini, Michael; Spies, Jeffrey R.; Brick, Timothy R.; Cohn, Jeffrey F.; Boker, Steven M.
Nonverbal visual cues accompany speech to supplement the meaning of spoken words, signify emotional state, indicate position in discourse, and provide back-channel feedback. This visual information includes head movements, facial expressions and body gestures. In this article we describe techniques for manipulating both verbal and nonverbal facial…
Tobitani, Kensuke; Kato, Kunihito; Yamamoto, Kazuhiko
In this study, we focused on basic taste stimulation for the analysis of real facial expressions. We considered that the expressions caused by taste stimulation were unaffected by individuality or emotion, that is, such expressions were involuntary. We analyzed the movement of facial muscles under taste stimulation and compared real expressions with artificial expressions. From the results, we identified an obvious difference between real and artificial expressions. Thus, our method could offer a new approach to facial expression recognition.
Background: Previous reports have suggested impairment in facial expression recognition in delinquents, but controversy remains with respect to how such recognition is impaired. To address this issue, we investigated facial expression recognition in delinquents in detail. Methods: We tested 24 male adolescent/young adult delinquents incarcerated in correctional facilities. We compared their performances with those of 24 age- and gender-matched control participants. Using standard photographs of facial expressions illustrating six basic emotions, participants matched each emotional facial expression with an appropriate verbal label. Results: Delinquents were less accurate in the recognition of facial expressions that conveyed disgust than were control participants. The delinquents misrecognized the facial expressions of disgust as anger more frequently than did controls. Conclusion: These results suggest that one of the underpinnings of delinquency might be impaired recognition of emotional facial expressions, with a specific bias toward interpreting disgusted expressions as hostile angry expressions.
Florkiewicz, Brittany; Skollar, Gabriella; Reichard, Ulrich H
Facial expressions are an important component of primate communication that functions to transmit social information and modulate intentions and motivations. Chimpanzees and macaques, for example, produce a variety of facial expressions when communicating with conspecifics. Hylobatids also produce various facial expressions; however, the origin and function of these facial expressions are still largely unclear. It has been suggested that larger facial expression repertoires may have evolved in the context of social complexity, but this link has yet to be tested on a broader empirical basis. The social complexity hypothesis offers a possible explanation for the evolution of complex communicative signals such as facial expressions, because as the complexity of an individual's social environment increases, so does the need for communicative signals. We used an intraspecies, pair-focused study design to test the link between facial expressions and sociality within hylobatids, specifically the strength of pair-bonds. The current study compared 206 hr of video and 103 hr of focal animal data for ten hylobatid pairs from three genera (Nomascus, Hoolock, and Hylobates) living at the Gibbon Conservation Center. Using video footage, we explored 5,969 facial expressions along three dimensions: repertoire use, repertoire breadth, and facial expression synchrony (FES). We then used focal animal data to compare dimensions of facial expressiveness to pair bond strength and behavioral synchrony. Hylobatids in our study overlapped in only half of their facial expressions (50%) with the only other detailed, quantitative study of hylobatid facial expressions, while 27 facial expressions were uniquely observed in our study animals. Taken together, hylobatids have a large facial expression repertoire of at least 80 unique facial expressions. Contrary to our prediction, facial repertoire composition was not significantly correlated with pair bond strength, rates of territorial synchrony
Shan, Caifeng; Braspenning, Ralph
Facial expressions, resulting from movements of the facial muscles, are changes in the face in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin was the first to describe in detail the specific facial expressions associated with emotions in animals and humans, arguing that all mammals show emotions reliably in their faces. Since then, facial expression analysis has been an area of great research interest for behavioral scientists. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.
Sacco, Donald F; Hugenberg, Kurt
The current series of studies provides converging evidence that facial expressions of fear and anger may have co-evolved to mimic babyish and mature faces, respectively, in order to enhance their communicative signal. In Studies 1 and 2, fearful and angry facial expressions were manipulated to have enhanced babyish features (larger eyes) or enhanced mature features (smaller eyes). In the context of a speeded categorization task in Study 1 and a visual noise paradigm in Study 2, results indicated that larger eyes facilitated the recognition of fearful facial expressions, while smaller eyes facilitated the recognition of angry facial expressions. Study 3 manipulated facial roundness, a stable structure that does not vary systematically with expressions, and found that congruency between maturity and expression (narrow face-anger; round face-fear) facilitated expression recognition accuracy. Results are discussed as representing a broad co-evolutionary relationship between facial maturity and fearful and angry facial expressions. (c) 2009 APA, all rights reserved
Scheider, Linda; Waller, Bridget M.; Oña, Leonardo; Burrows, Anne M.; Liebal, Katja
Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social) contexts the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than in non-social contexts, where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely ‘responded to’ by the partner’s facial expressions when facing another individual than when non-facing. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics. PMID:26978660
Broekens, Joost; Qu, Chao; Brinkman, Willem-Paul
Facial emotion expression for virtual characters is used in a wide variety of areas. Often, the primary reason to use emotion expression is not to study emotion expression generation per se, but to use emotion expression in an application or research project. What is then needed is an easy to use and flexible, but also validated mechanism to do so. In this report we present such a mechanism. It enables developers to build virtual characters with dynamic affective facial expressions. The mecha...
Doroftei, I.; Adascalitei, F.; Lefeber, D.; Vanderborght, B.; Doroftei, I. A.
The purpose of this study is to present the preliminary steps in facial expression recognition with a new version of an expressive social robotic head. In a first phase, our main goal was to reach a minimum level of emotional expressiveness, in order to obtain nonverbal communication between the robot and humans, by building six basic facial expressions. To evaluate the facial expressions, the robot was used in some preliminary user studies among children and adults.
This article addresses the debate between emotion-expression and motive-communication approaches to facial movements, focusing on Ekman's (1972) and Fridlund's (1994) contrasting models and their historical antecedents. Available evidence suggests that the presence of others either reduces or increases facial responses, depending on the quality and strength of the emotional manipulation and on the nature of the relationship between interactants. Although both display rules and social motives provide viable explanations of audience "inhibition" effects, some audience facilitation effects are less easily accommodated within an emotion-expression perspective. In particular, emotion is not a sufficient condition for a corresponding "expression," even discounting explicit regulation, and, apparently, "spontaneous" facial movements may be facilitated by the presence of others. Further, there is no direct evidence that any particular facial movement provides an unambiguous expression of a specific emotion. However, information communicated by facial movements is not necessarily extrinsic to emotion. Facial movements not only transmit emotion-relevant information but also contribute to ongoing processes of emotional action in accordance with pragmatic theories.
Liu, C.; Ham, J.R.C.; Postma, E.O.; Midden, C.J.H.; Joosten, B.; Goudbeek, M.
Affective robots and embodied conversational agents require convincing facial expressions to make them socially acceptable. To be able to virtually generate facial expressions, we need to investigate the relationship between technology and human perception of affective and social signals. Facial
Facial expressions communicate non-verbal cues that play an important role in interpersonal relations. Automatic recognition of facial expressions can be an important element of natural human-machine interfaces; it may likewise be used in behavioral science and in clinical practice. Although people perceive facial expressions virtually instantaneously, reliable expression recognition by machine is still a challenge. From the point of view of automatic recognition, a facial expression can be considered to comprise deformations of the facial parts and their spatial relations, or changes in the face's pigmentation. Research into automatic recognition of facial expressions addresses the issues surrounding the representation and classification of the static or dynamic qualities of these deformations or of face pigmentation. We obtained our results using CVIPtools. We took a data set of six facial expressions from three persons, comprising 90 border-mask samples for training and 30 border-mask samples for testing; we used RST-invariant features and texture features for feature analysis, and then classified them using the k-Nearest Neighbor classification algorithm. The maximum accuracy achieved is 90%.
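The classification step this abstract describes, k-Nearest Neighbor voting over per-face feature vectors, can be sketched in a few lines. The feature values and labels below are toy stand-ins, not the study's actual RST-invariant or texture features:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    training samples under Euclidean distance. `train` is a list of
    (feature_vector, label) pairs."""
    dists = sorted((math.dist(feats, query), label) for feats, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D stand-ins for extracted expression features (hypothetical values).
train_set = [
    ([0.9, 0.1], "happy"), ([0.8, 0.2], "happy"),
    ([0.1, 0.9], "angry"), ([0.2, 0.8], "angry"),
]
print(knn_classify(train_set, [0.85, 0.15]))  # → happy
```

With an even `k`, ties are possible; `Counter.most_common` then breaks them by insertion order, so a production classifier would typically use an odd `k` or an explicit tie-breaking rule.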
It is well known that memory can be modulated by emotional stimuli at the time of encoding and consolidation. For example, happy faces create better identity recognition than faces with certain other expressions. However, the influence of facial expression at the time of retrieval remains unknown in the literature. To separate the potential influence of expression at retrieval from its effects at earlier stages, we had participants learn neutral faces but manipulated facial expression at the time of memory retrieval in a standard old/new recognition task. The results showed a clear effect of facial expression, where happy test faces were identified more successfully than angry test faces. This effect is unlikely due to greater image similarity between the neutral learning face and the happy test face, because image analysis showed that the happy test faces are in fact less similar to the neutral learning faces relative to the angry test faces. In the second experiment, we investigated whether this emotional effect is influenced by the expression at the time of learning. We employed angry or happy faces as learning stimuli, and angry, happy, and neutral faces as test stimuli. The results showed that the emotional effect at retrieval is robust across different encoding conditions with happy or angry expressions. These findings indicate that emotional expressions affect the retrieval process in identity recognition, and identity recognition does not rely on emotional association between learning and test faces.
Egede, Joy Onyekachukwu
The face is one of the most important means of non-verbal communication. A lot of information about a person's emotional state can be gained merely by observing their facial expression. This is relatively easy in face-to-face communication, but not so in human-computer interaction. Supervised learning has been widely used by researchers to train machines to recognise facial expressions just as humans do. However, supervised learning has significant limitations, one of which is the fact...
Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris
According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240
Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian
The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions
Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo
We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames/s) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating in time individual diagnostic facial actions, and does not require perceiving the full apex configuration.
Maher, Stephen; Ekstrom, Tor; Chen, Yue
Perception of subtle facial expressions is essential for social functioning; yet it is unclear if human perceptual sensitivities differ in detecting varying types of facial emotions. Evidence diverges as to whether salient negative versus positive emotions (such as sadness versus happiness) are preferentially processed. Here, we measured perceptual thresholds for the detection of four types of emotion in faces--happiness, fear, anger, and sadness--using psychophysical methods. We also evaluated the association of the perceptual performances with facial morphological changes between neutral and respective emotion types. Human observers were highly sensitive to happiness compared with the other emotional expressions. Further, this heightened perceptual sensitivity to happy expressions can be attributed largely to the emotion-induced morphological change of a particular facial feature (end-lip raise).
Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng
Previous studies have discovered a fascinating phenomenon known as choice blindness-individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).
In terms of communication, postures and facial expressions of feelings such as happiness, anger, and sadness play important roles in conveying information. With the development of technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization, and pose estimation have recently been put forward. However, many challenges and problems remain to be solved. In this paper, several techniques for handling facial expression recognition and pose are summarized and analyzed, including a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning of the input domain for classification, and robust-statistics face formalization.
Breuer, Ran; Kimmel, Ron
Facial expressions play a significant role in human communication and behavior. Psychologists have long studied the relationship between facial expressions and emotions. Paul Ekman et al. devised the Facial Action Coding System (FACS) to taxonomize human facial expressions and model their behavior. The ability to recognize facial expressions automatically enables novel applications in fields like human-computer interaction, social gaming, and psychological research. There has been a tremend...
Yrttiaho, Santeri; Niehaus, Dana; Thomas, Eileen; Leppänen, Jukka M
Human parental care relies heavily on the ability to monitor and respond to a child's affective states. The current study examined pupil diameter as a potential physiological index of mothers' affective response to infant facial expressions. Pupillary time-series were measured from 86 mothers of young infants in response to an array of photographic infant faces falling into four emotive categories based on valence (positive vs. negative) and arousal (mild vs. strong). Pupil dilation was highly sensitive to the valence of facial expressions, being larger for negative vs. positive facial expressions. A separate control experiment with luminance-matched non-face stimuli indicated that the valence effect was specific to facial expressions and cannot be explained by luminance confounds. Pupil response was not sensitive to the arousal level of facial expressions. The results show the feasibility of using pupil diameter as a marker of mothers' affective responses to ecologically valid infant stimuli and point to a particularly prompt maternal response to infant distress cues.
Pochedly, Joseph T; Widen, Sherri C; Russell, James A
The emotion attributed to the prototypical "facial expression of disgust" (a nose scrunch) depended on what facial expressions preceded it. In two studies, the majority of 120 children (5-14 years) and 135 adults (16-58 years) judged the nose scrunch as expressing disgust when the preceding set included an anger scowl, but as angry when the anger scowl was omitted. An even greater proportion of observers judged the nose scrunch as angry when the preceding set also included a facial expression of someone about to be sick. The emotion attributed to the nose scrunch therefore varies with experimental context. PsycINFO Database Record (c) 2012 APA, all rights reserved.
Becker, D Vaughn
The confounded signal hypothesis maintains that facial expressions of anger and happiness, in order to more efficiently communicate threat or nurturance, evolved forms that take advantage of older gender recognition systems, which were already attuned to similar affordances. Two unexplored consequences of this hypothesis are (1) facial gender should automatically interfere with discriminations of anger and happiness, and (2) controlled attentional processes (like working memory) may be able to override the interference of these particular expressions on gender discrimination. These issues were explored by administering a Garner interference task along with a working memory task as an index of controlled attention. Results show that those with good attentional control were able to eliminate interference of expression on gender decisions but not the interference of gender on expression decisions. Trials in which the stimulus attributes were systematically correlated also revealed strategic facilitation for participants high in attentional control. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Kokin, Jessica; Younger, Alastair; Gosselin, Pierre; Vaillancourt, Tracy
The relationship between shyness and the interpretations of the facial expressions of others was examined in a sample of 123 children aged 12 to 14 years. Participants viewed faces displaying happiness, fear, anger, disgust, sadness, surprise, as well as a neutral expression, presented on a computer screen. The children identified each expression…
Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S
People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations: (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated the assessed muscles, whereas the chew manipulation activated muscles only intermittently. Further, expressing happiness generated the most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.
The authors believe that the consciousness of humans basically originates from languages and their association-like flow of consciousness, and that feelings are generated accompanying respective languages. We incorporated artificial consciousness into a robot, achieved an associative flow of language resembling a flow of consciousness, and developed a robot called Kansei that expresses its feelings according to the associations occurring within it. To be able to fully communicate with humans, robots must be able to display complex expressions, such as a sense of being thrilled. We therefore added to the Kansei robot a device to express complex feelings through its facial expressions. The Kansei robot is an artificial skull made of aluminum with servomotors built into it. The face is made of relatively soft polyethylene, formed to appear like a human face. Facial expressions are generated using 19 servomotors built into the skull, which pull metal wires attached to the facial “skin” to create expressions. The robot is at present capable of making six basic expressions as well as complex expressions, such as happiness and fear combined.
Research has demonstrated that cognitive and interpersonal processes play significant roles in the development and persistence of depression. The judgment of emotions displayed in facial expressions by depressed patients allows for a better understanding of these processes. In this study, 48
Thirty-two videorecorded interviews were conducted by two interviewers with eight patients diagnosed with schizophrenia. Each patient was interviewed four times: three weekly interviews by the first interviewer and one additional interview by the second interviewer. Sixty-four selected sequences in which the patients were speaking about psychotic experiences were scored for facial affective behaviour with the Emotion Facial Action Coding System (EMFACS). In accordance with previous research, the results show that patients diagnosed with schizophrenia express negative facial affectivity. Facial affective behaviour seems not to be dependent on temporality, since within-subjects ANOVA revealed no substantial changes in the amount of affect displayed across the weekly interview occasions. Whereas previous findings identified contempt as the most frequent affect in patients, in the present material disgust was as common, but depended on the interviewer. The results suggest that facial affectivity in these patients is dominated primarily by the negative emotion of disgust and, to a lesser extent, contempt, and that this seems to be a fairly stable feature.
Kirsh, Steven J.; Mounts, Jeffrey R. W.; Olczak, Paul V.
This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph.…
McCullough, Stephen; Emmorey, Karen
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX…
Jia-Shing Sheu; Tsu-Shien Hsieh; Ho-Nien Shou
This paper presents an image deformation algorithm and constructs an automatic facial expression generation system to generate new facial expressions in neutral state. After the users input the face image in a neutral state into the system, the system separates the possible facial areas and the image background by skin color segmentation. It then uses the morphological operation to remove noise and to capture the organs of facial expression, such as the eyes, mouth, eyebrow, and nose. The fea...
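The front end described above (skin-color segmentation followed by a morphological operation to remove noise) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the RGB skin rule and the 3×3 structuring element are assumptions, and real systems often segment in a chroma space such as YCbCr instead.

```python
def skin_mask(img):
    """Binary skin mask from a 2D grid of (r, g, b) pixels using a crude RGB rule."""
    return [[1 if r > 95 and g > 40 and b > 20 and r > g and r > b else 0
             for (r, g, b) in row] for row in img]

def erode(mask):
    """3x3 erosion: a pixel survives only if its whole neighborhood is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = 1 if all(mask[y + dy][x + dx]
                                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
    return out

def dilate(mask):
    """3x3 dilation: a pixel is set if any neighbor is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = 1 if any(0 <= y + dy < h and 0 <= x + dx < w
                                 and mask[y + dy][x + dx]
                                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)) else 0
    return out

def open_mask(mask):
    # Morphological opening (erosion then dilation) removes isolated noise
    # specks while roughly preserving larger skin regions.
    return dilate(erode(mask))
```

Opening removes a single stray "skin" pixel while the interior of a larger face blob survives, which is exactly the noise-removal role the abstract assigns to the morphological step.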
Sophie L Fayolle
Two experiments were run to examine the effects of dynamic displays of facial expressions of emotions on time judgments. The participants were given a temporal bisection task with emotional facial expressions presented in a dynamic or a static display. Two emotional facial expressions and a neutral expression were tested and compared. Each of the emotional expressions had the same affective valence (unpleasant), but one was high-arousing (expressing anger) and the other low-arousing (expressing sadness). Our results showed that time judgments are highly sensitive to movements in facial expressions and the emotions expressed. Indeed, longer perceived durations were found in response to the dynamic faces and the high-arousing emotional expressions compared to the static faces and low-arousing expressions. In addition, the facial movements amplified the effect of emotions on time perception. Dynamic facial expressions are thus interesting tools for examining variations in temporal judgments in different social contexts.
Matsumoto, David; Hwang, Hyisung C
Most studies on judgments of facial expressions of emotion have primarily utilized prototypical, high-intensity expressions. This paper examines judgments of subtle facial expressions of emotion, including not only low-intensity versions of full-face prototypes but also variants of those prototypes. A dynamic paradigm was used in which observers were shown a neutral expression followed by the target expression to judge, and then the neutral expression again, allowing for a simulation of the emergence of the expression from and then return to a baseline. We also examined how signal and intensity clarities of the expressions (explained more fully in the Introduction) were associated with judgment agreement levels. Low-intensity, full-face prototypical expressions of emotion were judged as the intended emotion at rates significantly greater than chance. A number of the proposed variants were also judged as the intended emotions. Both signal and intensity clarities were individually associated with agreement rates; when their interrelationships were taken into account, signal clarity independently predicted agreement rates but intensity clarity did not. The presence or absence of specific muscles appeared to be more important to agreement rates than their intensity levels, with the exception of the intensity of zygomatic major, which was positively correlated with agreement rates for judgments of joy.
Xue, Henry; Gertner, Izidor
In the human-computer interaction (HCI) process it is desirable to have an artificially intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in the entertainment industry, and to study visual perception, social interactions, and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing accuracy. In addition, we have extended SVM to multiple dimensions while retaining its high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
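The resize-then-filter feature extraction idea can be sketched as follows. The kernels, pooling, and sizes here are hypothetical stand-ins; the abstract does not specify the paper's actual filter bank or its multi-dimensional SVM, so classification is omitted.

```python
def downsample(img, factor):
    """Resize a 2D grayscale grid by block averaging (the 'resize' step)."""
    h, w = len(img), len(img[0])
    return [[sum(img[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(w // factor)] for y in range(h // factor)]

# Two toy 3x3 kernels standing in for a real filter bank (e.g. Gabor filters).
KERNELS = {
    "horiz_edge": [[-1, -1, -1], [0, 0, 0], [1, 1, 1]],
    "vert_edge":  [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]],
}

def convolve(img, k):
    """Valid 3x3 convolution over the interior of the image."""
    h, w = len(img), len(img[0])
    return [[sum(img[y + dy - 1][x + dx - 1] * k[dy][dx]
                 for dy in range(3) for dx in range(3))
             for x in range(1, w - 1)] for y in range(1, h - 1)]

def features(img, factor=2):
    """One mean-absolute-response feature per filter on the resized image."""
    small = downsample(img, factor)
    feats = []
    for k in KERNELS.values():
        resp = convolve(small, k)
        feats.append(sum(abs(v) for row in resp for v in row) / (len(resp) * len(resp[0])))
    return feats
```

On a left-to-right intensity gradient, the vertical-edge filter responds strongly while the horizontal-edge filter stays near zero, which is the orientation selectivity such filter banks exploit.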
Leathers, Dale G.; Emigh, Ted H.
Describes the development and testing of a new facial meaning sensitivity test designed to determine how specialized are the meanings that can be decoded from facial expressions. Demonstrates the use of the test to measure a receiver's current level of skill in decoding facial expressions. (JMF)
Valstar, M.F.; Mehu, M.; Jiang, Bihan; Pantic, Maja; Scherer, K.
Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability
Wied, de M.; Boxtel, van Anton; Zaalberg, R.; Goudena, P.P.; Matthys, W.
Based on the assumption that facial mimicry is a key factor in emotional empathy, and clinical observations that children with disruptive behavior disorders (DBD) are weak empathizers, the present study explored whether DBD boys are less facially responsive to facial expressions of emotions than
Reschke, Peter J; Walle, Eric A; Knothe, Jennifer M; Lopez, Lukas D
Face perception is susceptible to contextual influence and perceived physical similarities between emotion cues. However, studies often use structurally homogeneous facial expressions, making it difficult to explore how within-emotion variability in facial configuration affects emotion perception. This study examined the influence of context on the emotional perception of categorically identical, yet physically distinct, facial expressions of disgust. Participants categorized two perceptually distinct disgust facial expressions, "closed" (i.e., scrunched nose, closed mouth) and "open" (i.e., scrunched nose, open mouth, protruding tongue), that were embedded in contexts comprising emotion postures and scenes. Results demonstrated that the effect of nonfacial elements was significantly stronger for "open" disgust facial expressions than "closed" disgust facial expressions. These findings provide support that physical similarity within discrete categories of facial expressions is mutable and plays an important role in affective face perception. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Li, Ming; Wang, Zengfu
Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method using geometric and texture features. In our system, the facial landmark movements and texture variations across pairwise images are used to perform the dynamic facial expression recognition tasks. For each facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integrating both geometric and texture features further enhances the representation of the facial expressions. Finally, a Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method achieves performance competitive with other methods.
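The geometric half of such a pipeline — landmark displacements between a sequence's first frame and each later frame — can be sketched minimally as below. The landmark layout and class centroids are invented for illustration, texture features are omitted, and a nearest-mean classifier stands in for the SVM the paper uses.

```python
def geometric_features(first, frame):
    """Per-landmark (dx, dy) displacements between the first frame and a later frame."""
    feats = []
    for (x0, y0), (x1, y1) in zip(first, frame):
        feats.extend((x1 - x0, y1 - y0))
    return feats

def sequence_features(frames):
    """Pairwise features (first frame vs. each subsequent frame), averaged per dimension."""
    pair_feats = [geometric_features(frames[0], f) for f in frames[1:]]
    n = len(pair_feats)
    return [sum(col) / n for col in zip(*pair_feats)]

def classify(feats, centroids):
    # Nearest-mean classifier standing in for the paper's SVM.
    def sqdist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: sqdist(feats, centroids[label]))
```

With two toy landmarks (the mouth corners) rising over three frames, the averaged displacement vector lands nearest the "smile" centroid.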
Alagha, M A; Ju, X; Morley, S; Ayoub, A
The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds and was recorded in real time. Thus a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of the 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student t-test). Facial expressions of lip purse, cheek puff, and raising of eyebrows were reproducible. Facial expressions of maximum smile and forceful eye closure were not reproducible. The limited coordination of various groups of facial muscles contributed to the lack of reproducibility of these facial expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Detecting others’ emotional states from their faces is an essential component of successful social interaction. However, the ability to perceive emotional expressions is reported to be modulated by a number of factors. We have previously found that facial color modulates the judgment of facial expression, while another study has shown that background color plays a modulatory role. Therefore, in this study, we directly compared the effects of face and background color on facial expression judgment within a single experiment. Fear-to-anger morphed faces were presented in face and background color conditions. Our results showed that judgments of facial expressions were influenced by both face and background color. However, facial color effects were significantly greater than background color effects, although the color saturation of the faces was lower compared with that of the background colors. These results suggest that facial color is intimately related to the judgment of facial expression, over and above the influence of simple color.
Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin
Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition has remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
Butler, Andrea; Oruc, Ipek; Fox, Christopher J; Barton, Jason J S
Previous studies have demonstrated the existence of adaptation aftereffects for facial expressions. Here we investigated which aspects of facial stimuli contribute to these aftereffects. In Experiment 1, we examined the role of local adaptation to image elements such as curvature, shape and orientation, independent of expression, by using hybrid faces constructed from either the same or opposing expressions. While hybrid faces made with consistent expressions generated aftereffects as large as those with normal faces, there were no aftereffects from hybrid faces made from different expressions, despite the fact that these contained the same local image elements. In Experiment 2, we examined the role of facial features independent of the normal face configuration by contrasting adaptation with whole faces to adaptation with scrambled faces. We found that scrambled faces also generated significant aftereffects, indicating that expressive features without a normal facial configuration could generate expression aftereffects. In Experiment 3, we examined the role of facial configuration by using schematic faces made from line elements that in isolation do not carry expression-related information (e.g. curved segments and straight lines) but that convey an expression when arranged in a normal facial configuration. We obtained a significant aftereffect for facial configurations but not scrambled configurations of these line elements. We conclude that facial expression aftereffects are not due to local adaptation to image elements but due to high-level adaptation of neural representations that involve both facial features and facial configuration.
This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”
Background: Developing valid emotional facial stimuli for specific ethnicities creates ample opportunities to investigate both the nature of emotional facial information processing in general and clinical populations as well as the underlying mechanisms of facial emotion processing within and across cultures. Given that most entries in emotional facial stimuli databases were developed with western samples, and given that very few of the eastern emotional facial stimuli sets were based strictly on Ekman’s Facial Action Coding System, developing valid emotional facial stimuli of eastern samples remains a high priority. Aims: To develop and examine the psychometric properties of six basic emotional facial stimuli, recruiting professional Korean actors and actresses, based on Ekman’s Facial Action Coding System for the Korea University Facial Expression Collection-Second Edition (KUFEC-II). Materials and Methods: Stimulus selection was done in two phases. First, researchers evaluated the clarity and intensity of each stimulus developed based on the Facial Action Coding System. Second, researchers selected a total of 399 stimuli from a total of 57 actors and actresses, which were then rated on accuracy, intensity, valence, and arousal by 75 independent raters. Conclusion: The hit rates between the targeted and rated expressions of the KUFEC-II were all above 80%, except for fear (50%) and disgust (63%). The KUFEC-II appears to be a valid emotional facial stimuli database, providing the largest set of emotional facial stimuli. The mean intensity score was 5.63 (out of 7), suggesting that the stimuli delivered the targeted emotions with great intensity. All positive expressions were rated as having a high positive valence, whereas all negative expressions were rated as having a high negative valence. The KUFEC-II is expected to be widely used in various psychological studies on emotional facial expression. KUFEC-II stimuli can be obtained through
This paper presents an investigation of mirroring facial expressions and the emotions which they convey in dyadic naturally occurring first encounters. Mirroring facial expressions are a common phenomenon in face-to-face interactions, and they are due to the mirror neuron system which has been...... and overlapping facial expressions are very frequent. In this study, we want to determine whether the overlapping facial expressions are mirrored or are otherwise correlated in the encounters, and to what extent mirroring facial expressions convey the same emotion. The results of our study show that the majority...... of smiles and laughs, and one fifth of the occurrences of raised eyebrows are mirrored in the data. Moreover some facial traits in co-occurring expressions co-occur more often than it would be expected by chance. Finally, amusement, and to a lesser extent friendliness, are often emotions shared by both...
Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula
Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual-and not just conceptual-processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.
Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei
Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have been conducted to investigate the explicit and implicit processes of peak emotions. In the current study, we used transiently peak intense expression images of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds, and eye-tracking movement was recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous, and the body cues influenced facial emotion recognition. Furthermore, eye movement records showed that the participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, the subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes, which indicated the
Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno
This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…
Mori, Hiroki; Ohshima, Koh
A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented by vectors with psychologically defined abstract dimensions, and the latter are coded by the Facial Action Coding System. To obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained on the data. The effectiveness of the proposed method was verified by a subjective evaluation test. As a result, the Mean Opinion Score for the suitability of the generated facial expressions was 3.86 for the speaker, close to that of hand-made facial expressions.
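The mapping described above — emotional-state vectors in, Action Unit intensities out, learned from parallel data — can be sketched with a small one-hidden-layer network. The toy data, network size, and training schedule below are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

# Sketch: learn a mapping from 2-D emotional-state vectors (e.g.
# valence/arousal) to Action Unit intensities with a one-hidden-layer
# network trained by gradient descent on synthetic "parallel data".
rng = np.random.default_rng(0)

# Toy parallel corpus: 200 rated emotional states -> 5 AU intensities.
X = rng.uniform(-1.0, 1.0, size=(200, 2))      # valence, arousal
W_true = rng.normal(size=(2, 5))
Y = np.tanh(X @ W_true)                        # hypothetical target AUs

# Network parameters: 2 -> 8 -> 5.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 5)); b2 = np.zeros(5)

lr = 0.1
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)                   # hidden activations
    P = H @ W2 + b2                            # predicted AU intensities
    err = P - Y                                # squared-error gradient
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H**2)             # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
print(round(mse, 4))
```

At generation time the predicted intensities would drive the corresponding FACS Action Units of an animated face.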
Reed, Lawrence Ian; DeScioli, Peter
What are the communicative functions of sad facial expressions? Research shows that people feel sadness in response to losses, but it is unclear whether sad expressions function to communicate losses to others and, if so, what makes these signals credible. Here we use economic games to test the hypothesis that sad expressions lend credibility to claims of loss. Participants play the role of either a proposer or a recipient in a game with a fictional backstory and real monetary payoffs. The proposer views a (fictional) video of the recipient's character displaying either a neutral or a sad expression paired with a claim of loss, and then decides how much money to give to the recipient. In three experiments, we test alternative theories by using situations in which the recipient's losses were uncertain (Experiment 1), the recipient's losses were certain (Experiment 2), or the recipient claimed failed gains rather than losses (Experiment 3). Overall, we find that participants gave more money to recipients who displayed sad expressions than to those who displayed neutral expressions, but only under conditions of uncertain loss. This finding supports the hypothesis that sad expressions function to increase the credibility of claims of loss.
Montagne, B.; Kessels, R.P.C.; Wester, A.J.; Haan, E.H.F. de
Interpersonal contacts depend to a large extent on understanding emotional facial expressions of others. Several neurological conditions may affect proficiency in emotional expression recognition. It has been shown that chronic alcoholics are impaired in labelling emotional expressions. More
Zhi, Ruicong; Cao, Lianyu; Cao, Gang
Growing evidence shows that consumer choices in real life are mostly driven by unconscious rather than conscious mechanisms, and the unconscious process can be probed with behavioral measurements. This study aims to apply automatic facial expression analysis to represent consumers' emotions, and to explore the relationships between sensory perception and facial responses. Basic taste solutions (sourness, sweetness, bitterness, umami, and saltiness) at six levels plus water were used, covering most of the tastes found in food and drink. A further contribution of this study is to analyze the characteristics of facial expressions and the correlation between facial expressions and perceived hedonic liking for Asian consumers: facial expression applications have so far been reported only for Western consumers, and few studies have investigated facial responses during food consumption in Asian consumers. Experimental results indicated that facial expressions could identify different stimuli with various concentrations and different hedonic levels. Perceived liking increased at lower concentrations and decreased at higher concentrations, while samples with medium concentrations were perceived as the most pleasant, except for sweetness and bitterness. High correlations were found between the perceived intensities of bitterness, umami, and saltiness and the facial reactions of disgust and fear. The facial expressions disgust and anger characterized the emotion "dislike," happiness characterized "like," and neutral represented "neither like nor dislike." The identified facial expressions agree with the perceived sensory emotions elicited by the basic taste solutions. The correlations between hedonic levels and facial expression intensities obtained in this study accord with those reported for Western consumers. © 2017 Institute of Food Technologists®.
Cai, Mang; Shen, Shengnan; Li, Hui
This study investigated the effect of four typical facial expressions (calmness, happiness, sadness and surprise) on contact characteristics between an N95 filtering facepiece respirator and a headform. The respirator model comprised two layers (an inner layer and an outer layer) and a nose clip. The headform model comprised a skin layer, a fatty tissue layer embedded with eight muscles, and a skull layer. The four facial expressions were generated by the coordinated contraction of four facial muscles, after which the distribution of contact pressure on the headform and the contact area were calculated. Results demonstrated that the nose clip helps the respirator fit closer to the nose bridge while causing facial discomfort. Moreover, the contact area varied with facial expression, and facial expressions significantly altered the contact pressure at several key areas, which may result in leakage.
Bennett, David S; Bendersky, Margaret; Lewis, Michael
The specificity predicted by differential emotions theory (DET) for early facial expressions in response to 5 different eliciting situations was studied in a sample of 4-month-old infants (n = 150). Infants were videotaped during tickle, sour taste, jack-in-the-box, arm restraint, and masked-stranger situations and their expressions were coded second by second. Infants showed a variety of facial expressions in each situation; however, more infants exhibited positive (joy and surprise) than negative expressions (anger, disgust, fear, and sadness) across all situations except sour taste. Consistent with DET-predicted specificity, joy expressions were the most common in response to tickling, and were less common in response to other situations. Surprise expressions were the most common in response to the jack-in-the-box, as predicted, but also were the most common in response to the arm restraint and masked-stranger situations, indicating a lack of specificity. No evidence of predicted specificity was found for anger, disgust, fear, and sadness expressions. Evidence of individual differences in expressivity within situations, as well as stability in the pattern across situations, underscores the need to examine both child and contextual factors in studying emotional development. The results provide little support for the DET postulate of situational specificity and suggest that a synthesis of differential emotions and dynamic systems theories of emotional expression should be considered.
Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi
Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…
Ilyas, Chaudhary Muhammad Aqdus; Nasrollahi, Kamal; Moeslund, Thomas B.
In this paper, we investigate the issues associated with facial expression recognition of Traumatic Brain Injured (TBI) patients in a realistic scenario. These patients have restricted or limited muscle movements with reduced facial expressions, along with non-cooperative behavior and impaired reason...
Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja
We present a regression-based scheme for multi-view facial expression recognition based on 2D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view, where further recognition of the expressions can be performed using a
Liu, Tongran; Xiao, Tong; Li, Xiaoyan
The current study investigated the relationship between general intelligence and the three stages of facial expression processing. Two groups of adolescents with different levels of general intelligence were required to identify three types of facial expressions (happy, sad, and neutral faces...
In this paper we discuss the aspects of designing facial expressions for Virtual Humans with a specific culture. First we explore the notion of cultures and its relevance for applications with a Virtual Human. Then we give a general scheme of designing emotional facial expressions, and identify the
Emotional processing without conscious awareness plays an important role in human social interaction. Several behavioral studies have reported that subliminal presentation of photographs of emotional facial expressions induces unconscious emotional processing. However, it has been difficult to elicit strong and robust effects with this method. We hypothesized that dynamic presentation of facial expressions would enhance subliminal emotional effects, and tested this hypothesis in two experiments. Fearful or happy facial expressions were presented dynamically or statically in either the left or the right visual field for 20 ms (Experiment 1) or 30 ms (Experiment 2). Nonsense target ideographs were then presented, and participants reported their preference for them. The results consistently showed that dynamic presentation of emotional facial expressions induced more evident emotional biases toward the subsequent targets than static presentation did. These results indicate that dynamic presentation of emotional facial expressions induces more evident unconscious emotional processing.
Gandolphe, Marie Charlotte; Nandrino, Jean Louis; Delelis, Gérald; Ducro, Claire; Lavallee, Audrey; Saloppe, Xavier; Moustafa, Ahmed A; El Haj, Mohamad
In this study, we investigated, for the first time, facial expressions during the retrieval of Self-defining memories (i.e., those vivid and emotionally intense memories of enduring concerns or unresolved conflicts). Participants self-rated the emotional valence of their Self-defining memories and autobiographical retrieval was analyzed with a facial analysis software. This software (Facereader) synthesizes the facial expression information (i.e., cheek, lips, muscles, eyebrow muscles) to describe and categorize facial expressions (i.e., neutral, happy, sad, surprised, angry, scared, and disgusted facial expressions). We found that participants showed more emotional than neutral facial expressions during the retrieval of Self-defining memories. We also found that participants showed more positive than negative facial expressions during the retrieval of Self-defining memories. Interestingly, participants attributed positive valence to the retrieved memories. These findings are the first to demonstrate the consistency between facial expressions and the emotional subjective experience of Self-defining memories. These findings provide valuable physiological information about the emotional experience of the past.
Jannah, M.; Zarlis, M.; Mawengkang, H.
Facial expression recognition is an interesting research area. It connects human feelings to computer applications such as human-computer interaction, data compression, facial animation, and face detection from video. The purpose of this research is to create a facial expression recognition system that captures images from a video camera. The system uses the Widrow-Hoff learning method to train and test images with the Adaptive Linear Neuron (ADALINE) approach. System performance is evaluated by two parameters: detection rate and false positive rate. The system's accuracy depends on the technique used and on the face positions trained and tested.
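The Widrow-Hoff (LMS) rule that trains an ADALINE can be sketched as follows; the toy two-class Gaussian data below stand in for the image feature vectors and are an assumption for illustration:

```python
import numpy as np

# Minimal ADALINE sketch trained with the Widrow-Hoff (LMS) rule on toy
# two-class data; real use would feed feature vectors from video frames.
rng = np.random.default_rng(2)

# Two well-separated Gaussian classes in 2-D, labels +1 / -1.
X = np.vstack([rng.normal(2.0, 0.5, (50, 2)), rng.normal(-2.0, 0.5, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])

w = np.zeros(2); b = 0.0
eta = 0.01                               # learning rate
for _ in range(50):                      # epochs
    for xi, yi in zip(X, y):
        out = xi @ w + b                 # linear activation (no threshold)
        err = yi - out                   # Widrow-Hoff delta
        w += eta * err * xi              # update toward the target
        b += eta * err

pred = np.where(X @ w + b >= 0, 1, -1)   # threshold only at decision time
accuracy = float((pred == y).mean())
print(accuracy)
```

Unlike the perceptron rule, LMS updates on the raw linear output even for correctly classified samples, minimizing squared error rather than just fixing mistakes.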
Fisher, Katie; Towler, John; Eimer, Martin
It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.
van Dillen, L.F.; Harris, L.T.; van Dijk, W.W.; Rotteveel, M.
In the present research we examined whether the psychological meaning of people's categorisation goals affects facial muscle activity in response to facial expressions of emotion. We had participants associate eye colour (blue, brown) with either a personality trait (extraversion) or a physical
Christinaki, Eirini; Vidakis, Nikolaos; Triantafyllidis, Georgios
The recognition of facial expressions is important for the perception of emotions. Understanding emotions is essential in human communication and social interaction. Children with autism have been reported to exhibit deficits in the recognition of affective expressions. Their difficulties...
Jack, Rachael E; Sun, Wei; Delis, Ioannis; Garrod, Oliver G B; Schyns, Philippe G
As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data questions the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Kawai, Nobuyuki; Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo
Background A Noh mask, worn by expert actors during performance on the Japanese traditional Noh drama, conveys various emotional expressions despite its fixed physical properties. How does the mask change its expressions? Shadows change subtly during the actual Noh drama, which plays a key role in creating elusive artistic enchantment. We here describe evidence from two experiments regarding how attached shadows of the Noh masks influence the observers’ recognition of the emotional expressions. Methodology/Principal Findings In Experiment 1, neutral-faced Noh masks having the attached shadows of the happy/sad masks were recognized as bearing happy/sad expressions, respectively. This was true for all four types of masks each of which represented a character differing in sex and age, even though the original characteristics of the masks also greatly influenced the evaluation of emotions. Experiment 2 further revealed that frontal Noh mask images having shadows of upward/downward tilted masks were evaluated as sad/happy, respectively. This was consistent with outcomes from preceding studies using actually tilted Noh mask images. Conclusions/Significance Results from the two experiments concur that purely manipulating attached shadows of the different types of Noh masks significantly alters the emotion recognition. These findings go in line with the mysterious facial expressions observed in Western paintings, such as the elusive qualities of Mona Lisa’s smile. They also agree with the aesthetic principle of Japanese traditional art “yugen (profound grace and subtlety)”, which highly appreciates subtle emotional expressions in the darkness. PMID:23940748
This study investigates human perceptions of pairing facial expressions of emotion with colours. A group of 27 subjects, mainly young Malaysians, participated in the study. For each of the seven faces, expressing the basic emotions neutral, happiness, surprise, anger, disgust, fear and sadness, a single colour was chosen from eight basic colours as the best visual match to the face. The different emotions appear well characterized by a single colour. The analysis draws on both psychology and colour engineering. The seven emotions were matched by the subjects according to their perceptions and feelings. Data from 12 males and 12 females were then randomly chosen from the previous data to compare colour perception between genders. The success of the test depends on the subjects' ability to propose a single colour for each expression. The results are presented as counts and percentages, as a guide for colour designers and the psychology field.
Benitez-Quiroz, C Fabian; Wilbur, Ronnie B; Martinez, Aleix M
Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. Copyright © 2016 Elsevier B.V. All rights reserved.
Kraines, Morganne A; Kelberer, Lucas J A; Wells, Tony T
Research demonstrates that women experience disgust more readily and with more intensity than men. The experience of disgust is associated with increased attention to disgust-related stimuli, but no prior study has examined sex differences in attention to disgust facial expressions. We hypothesised that women, compared to men, would demonstrate increased attention to disgust facial expressions. Participants (n = 172) completed an eye tracking task to measure visual attention to emotional facial expressions. Results indicated that women spent more time attending to disgust facial expressions compared to men. Unexpectedly, we found that men spent significantly more time attending to neutral faces compared to women. The findings indicate that women's increased experience of emotional disgust also extends to attention to disgust facial stimuli. These findings may help to explain sex differences in the experience of disgust and in diagnoses of anxiety disorders in which disgust plays an important role.
Bonanno, G A; Keltner, D
The common assumption that emotional expression mediates the course of bereavement is tested. Competing hypotheses about the direction of mediation were formulated from the grief work and social-functional accounts of emotional expression. Facial expressions of emotion in conjugally bereaved adults were coded at 6 months post-loss as they described their relationship with the deceased; grief and perceived health were measured at 6, 14, and 25 months. Facial expressions of negative emotion, in particular anger, predicted increased grief at 14 months and poorer perceived health through 25 months. Facial expressions of positive emotion predicted decreased grief through 25 months and a positive but nonsignificant relation to perceived health. Predictive relations between negative and positive emotional expression persisted when initial levels of self-reported emotion, grief, and health were statistically controlled, demonstrating the mediating role of facial expressions of emotion in adjustment to conjugal loss. Theoretical and clinical implications are discussed.
Facial expression recognition has a wide range of applications in human-machine interaction, pattern recognition, image understanding, machine vision and other fields, and in recent years it has gradually become a hot research topic. However, different people express their emotions in different ways, and under the influence of brightness, background and other factors, facial expression recognition remains difficult. In this paper, based on the Inception-v3 model of the TensorFlow platform, we use transfer learning techniques to retrain on a facial expression dataset (the Extended Cohn-Kanade dataset), which maintains recognition accuracy while greatly reducing training time.
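The retrain-only-the-top-layer idea can be illustrated without TensorFlow: keep a frozen feature extractor and train only a new softmax head on its outputs. Here a fixed random projection stands in for Inception-v3 bottleneck features, and the clustered toy "images" are assumptions for illustration:

```python
import numpy as np

# Transfer-learning sketch: frozen feature extractor + new softmax head.
rng = np.random.default_rng(3)

n_classes, n_per = 3, 60
raw_dim, feat_dim = 64, 16

# Toy "images": each class clusters around its own template.
templates = rng.normal(size=(n_classes, raw_dim))
X = np.vstack([templates[c] + 0.3 * rng.normal(size=(n_per, raw_dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)

# Frozen extractor: fixed random projection + ReLU (never updated).
W_frozen = rng.normal(size=(raw_dim, feat_dim)) / np.sqrt(raw_dim)
F = np.maximum(X @ W_frozen, 0.0)

# Train only the new softmax head (cross-entropy, full-batch GD).
W = np.zeros((feat_dim, n_classes)); b = np.zeros(n_classes)
Y1 = np.eye(n_classes)[y]
for _ in range(500):
    Z = F @ W + b
    P = np.exp(Z - Z.max(1, keepdims=True))   # stable softmax
    P /= P.sum(1, keepdims=True)
    G = (P - Y1) / len(F)                     # cross-entropy gradient
    W -= 0.5 * F.T @ G
    b -= 0.5 * G.sum(0)

acc = float(((F @ W + b).argmax(1) == y).mean())
print(acc)
```

Training cost scales with the tiny head rather than the full network, which is the source of the training-time savings the abstract reports.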
Wu, Yao; Qiu, Weigen
In order to improve the robustness of facial expression recognition, a method based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. The method uses LBP to extract features, which are then passed to the improved deep belief networks acting as detector and classifier. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate improved significantly.
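The LBP feature extraction named above can be sketched with the basic 3x3 operator (the DBN stage is omitted; the tiny example image is an illustration only):

```python
import numpy as np

# Basic 3x3 Local Binary Pattern: each pixel is replaced by an 8-bit code
# built from threshold comparisons with its eight neighbours; expression
# features are then histograms of these codes over image regions.
def lbp(image):
    img = np.asarray(image, dtype=float)
    # Neighbour offsets, clockwise from the top-left pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= center).astype(np.uint8) << (7 - bit)
    return codes

# 256-bin histogram used as the texture feature vector.
def lbp_histogram(image):
    return np.bincount(lbp(image).ravel(), minlength=256)

example = np.array([[10, 10, 10],
                    [10, 20, 10],
                    [10, 10, 10]])
print(int(lbp(example)[0, 0]))   # all neighbours darker than centre -> code 0
```

Because the codes depend only on ordinal comparisons, they are invariant to monotonic illumination changes, which is a common argument for LBP's robustness in face analysis.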
Hess, Ursula; Adams, Reginald B; Kleck, Robert E
Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.
Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia
Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
The capacity of visual working memory is limited to no more than four items. It is limited not only by the number of objects but also by the total amount of information that must be memorized, and the relation between the information load per object and the number of objects that can be stored in visual working memory is inverse. The objective of the present experiment was to estimate visual working memory capacity for emotional facial expressions using change detection tasks. Pictures of human emotional facial expressions were presented to 24 participants in 1008 experimental trials. Each trial began with a fixation mark, followed by a brief simultaneous presentation of six emotional facial expressions. A blank screen was then presented, and after this inter-stimulus interval one facial expression was presented at one of the previously occupied locations. Participants had to judge whether the facial expression presented at test was different from or identical to the expression presented at that location before the retention interval. Memory capacity was estimated from response accuracy using the formula of Pashler (1988), adapted from signal detection theory. Visual working memory capacity for emotional facial expressions was found to equal 3.07, which is high compared with the capacity for facial identities and other visual stimuli. The obtained results are explained within the framework of evolutionary psychology.
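Pashler's (1988) capacity estimate referenced above is K = S(H - F)/(1 - F), with set size S, hit rate H and false-alarm rate F. A minimal sketch, using illustrative numbers rather than the study's data:

```python
# Pashler's (1988) working-memory capacity estimate for change detection:
# K = S * (H - F) / (1 - F), with set size S, hit rate H, false-alarm rate F.
def pashler_k(set_size, hit_rate, false_alarm_rate):
    return set_size * (hit_rate - false_alarm_rate) / (1.0 - false_alarm_rate)

# Illustrative numbers only (not the study's data): set size 6,
# 60% hits, 20% false alarms.
print(round(pashler_k(6, 0.6, 0.2), 2))   # -> 3.0
```

The denominator corrects hits for guessing: a guesser with false-alarm rate F would also "hit" on a fraction F of change trials, so only the excess of H over F counts toward capacity.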
We report the development of two simple, objective, psychophysical measures of the ability to discriminate facial expressions of emotion that vary in intensity from a neutral facial expression, and to discriminate between varying intensities of emotional facial expression. The stimuli were created by morphing photographs of models expressing four basic emotions (anger, disgust, happiness and sadness) with neutral expressions. Psychometric functions were obtained for 15 healthy young adults using the Method of Constant Stimuli with a two-interval forced-choice procedure. Individual data points were fitted by Quick functions for each task and each emotion, allowing estimates of absolute thresholds and slopes. The tasks give objective and sensitive measures of the basic perceptual abilities required for perceiving and interpreting emotional facial expressions.
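The Quick function mentioned above, written for a two-interval forced-choice task as P(x) = 1 - 0.5 * 2^(-(x/alpha)^beta), has threshold alpha (where P = 0.75) and slope beta. A fitting sketch on synthetic, noiseless data, with a coarse grid search standing in for the original fitting procedure:

```python
import numpy as np

# Quick function for 2IFC proportion correct: chance is 0.5, threshold alpha
# is the intensity where P = 0.75, beta controls the slope.
def quick(x, alpha, beta):
    return 1.0 - 0.5 * 2.0 ** (-(x / alpha) ** beta)

intensities = np.array([0.05, 0.1, 0.2, 0.4, 0.8])   # morph intensity levels
p_correct = quick(intensities, alpha=0.2, beta=2.0)  # noiseless toy data

# Coarse least-squares grid search over (alpha, beta).
alphas = np.linspace(0.05, 0.5, 91)
betas = np.linspace(0.5, 4.0, 71)
best = min(((np.sum((quick(intensities, a, b) - p_correct) ** 2), a, b)
            for a in alphas for b in betas))
print(round(best[1], 2), round(best[2], 2))
```

With real binomial response counts one would maximize the binomial likelihood instead of minimizing squared error, but the threshold/slope parameterization is the same.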
Lawlor-Savage, Linette; Sponheim, Scott R; Goghari, Vina M
The ability to accurately judge facial expressions is important in social interactions. Individuals with bipolar disorder have been found to be impaired in emotion recognition; however, the specifics of the impairment are unclear. This study investigated whether facial emotion recognition difficulties in bipolar disorder reflect general cognitive, or emotion-specific, impairments. Impairment in the recognition of particular emotions and the role of processing speed in facial emotion recognition were also investigated. Clinically stable bipolar patients (n = 17) and healthy controls (n = 50) judged five facial expressions in two presentation types, time-limited and self-paced. An age recognition condition was used as an experimental control. Bipolar patients' overall facial recognition ability was unimpaired. However, patients' specific ability to judge happy expressions under time constraints was impaired. Findings suggest a deficit in happy emotion recognition impacted by processing speed. Given the limited sample size, further investigation with a larger patient sample is warranted.
aan het Rot, Marije; Hogenelst, Koen; Gesing, Christina M
Facial emotions are important for human communication. Unfortunately, traditional facial emotion recognition tasks do not inform about how respondents might behave towards others expressing certain emotions. Approach-avoidance tasks do measure behaviour, but only on one dimension. In this study 81
Lautenbacher, Stefan; Baer, Karl-Juergen; Eisold, Patricia; Kunz, Miriam
Although depression is associated with more clinical pain complaints, psychophysical data sometimes point to hypoalgesic alterations. Studying the more reflex-like facial expression of pain in patients with depression may offer a new perspective. Facial and psychophysical responses to nonpainful and
Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji
By carrying out marketing research, the managers of large department stores and small convenience stores obtain information such as the ratio of men to women among visitors and their age groups, and use it to improve their management plans. However, this work is carried out manually and imposes a heavy burden on small stores. In this paper, the authors propose a method of discriminating between men and women by extracting differences in facial expression change from color facial images. There are already many methods in the image-processing field for the automatic recognition of individuals using moving or still facial images. However, it is very difficult to discriminate gender under the influence of hairstyle, clothes, and similar factors. We therefore propose a method that is not affected by individual characteristics, such as the size and position of facial parts, by paying attention to changes of expression. The method requires two facial images, one expressive and one expressionless. First, the facial surface region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation information in the HSV color system together with emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part produced by the expression change. Finally, these feature values are compared between the input data and the database, and the gender is discriminated. Experiments on laughing and smiling expressions gave good gender-discrimination results.
To improve human-computer interaction (HCI) to be as good as human-human interaction, an efficient approach to human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; such knowledge is gained through human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we found that using eight facial points we can achieve the state-of-the-art recognition rate. However, the previous state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
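A geometry-based feature vector of the kind described above can be sketched as the set of pairwise distances between a few facial landmarks, normalized for scale. The point layout (two eye centres first) and the inter-ocular normalization are assumptions for illustration, not the paper's exact feature scheme.

```python
import math

def geometric_features(points):
    """Illustrative geometry-based feature vector from facial landmarks.

    `points` is a list of (x, y) landmark coordinates, assumed to begin
    with the two eye centres. All pairwise distances are normalized by
    the inter-ocular distance so the features are invariant to face
    size, which avoids needing a person-specific neutral reference.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    inter_ocular = dist(points[0], points[1])
    feats = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            feats.append(dist(points[i], points[j]) / inter_ocular)
    return feats

# Eight points, as in the abstract, yield 28 pairwise-distance features:
pts = [(30, 40), (70, 40), (50, 60), (40, 80), (60, 80), (50, 90), (35, 30), (65, 30)]
features = geometric_features(pts)  # len(features) == 28
```

Because every feature is a ratio of distances, perturbing a landmark (simulating localization error, as the paper investigates) perturbs several features at once, which is why subtle expressions suffer most.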
Animating expressive facial animation is a very challenging topic within the graphics community. In this paper, we introduce a novel expression ratio image (ERI) driving framework based on SVR and MPEG-4 for automatic 3D facial expression animation. Using support vector regression (SVR), the framework can learn and forecast the regression relationship between the facial animation parameters (FAPs) and the parameters of the expression ratio image. First, we build a 3D face animation system driven by FAPs. Second, using principal component analysis (PCA), we generate the parameter sets of the eigen-ERI space, which can rebuild a reasonable expression ratio image. We then learn a model with the support vector regression mapping, so that facial animation parameters can be synthesized quickly from the eigen-ERI parameters. Finally, we implement our 3D face animation system driven by the resulting FAPs, and it works effectively.
Li, Bingbing; Cheng, Gang; Zhang, Dajun; Wei, Dongtao; Qiao, Lei; Wang, Xiangpeng; Che, Xianwei
Recent neuroimaging studies suggest that neutral infant faces compared to neutral adult faces elicit greater activity in brain areas associated with face processing, attention, empathic response, reward, and movement. However, whether infant facial expressions evoke larger brain responses than adult facial expressions remains unclear. Here, we performed event-related functional magnetic resonance imaging in nulliparous women while they were presented with images of matched unfamiliar infant and adult facial expressions (happy, neutral, and uncomfortable/sad) in a pseudo-randomized order. We found that the bilateral fusiform and right lingual gyrus were overall more activated during the presentation of infant facial expressions compared to adult facial expressions. Uncomfortable infant faces compared to sad adult faces evoked greater activation in the bilateral fusiform gyrus, precentral gyrus, postcentral gyrus, posterior cingulate cortex-thalamus, and precuneus. Neutral infant faces evoked larger responses in the left fusiform gyrus compared to neutral adult faces. Happy infant faces compared to happy adult faces elicited larger responses in areas of the brain associated with emotion and reward processing at a more liberal statistical threshold. Overall, these results suggest a processing bias toward infant facial expressions compared to adult facial expressions among nulliparous women, and this bias may be modulated by individual differences in Interest-In-Infants and perspective-taking ability.
Vida, Mark D.; Mondloch, Catherine J.
This investigation used adaptation aftereffects to examine developmental changes in the perception of facial expressions. Previous studies have shown that adults' perceptions of ambiguous facial expressions are biased following adaptation to intense expressions. These expression aftereffects are strong when the adapting and probe expressions share…
Waller, Bridget M; Whitehouse, Jamie; Micheletta, Jérôme
There is widespread acceptance that facial expressions are useful in social interactions, but empirical demonstration of their adaptive function has remained elusive. Here, we investigated whether macaques can use the facial expressions of others to predict the future outcomes of social interaction. Crested macaques (Macaca nigra) were shown an approach between two unknown individuals on a touchscreen and were required to choose between one of two potential social outcomes. The facial expressions of the actors were manipulated in the last frame of the video. One subject reached the experimental stage and accurately predicted different social outcomes depending on which facial expressions the actors displayed. The bared-teeth display (homologue of the human smile) was most strongly associated with predicted friendly outcomes. Contrary to our predictions, screams and threat faces were not associated more with conflict outcomes. Overall, therefore, the presence of any facial expression (compared to neutral) caused the subject to choose friendly outcomes more than negative outcomes. Facial expression in general, therefore, indicated a reduced likelihood of social conflict. The findings dispute traditional theories that view expressions only as indicators of present emotion and instead suggest that expressions form part of complex social interactions where individuals think beyond the present.
Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on cognitive penetration, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle, etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept cognitive penetration in some cases of emotion recognition. Finally, we highlight a recent model of social vision in order to propose a mechanism for cognitive penetration used in the face-based recognition of emotion.
Smith, M C; Smith, M K; Ellgring, H
Spontaneous and posed emotional facial expressions in individuals with Parkinson's disease (PD, n = 12) were compared with those of healthy age-matched controls (n = 12). The intensity and amount of facial expression in PD patients were expected to be reduced for spontaneous but not posed expressions. Emotional stimuli were video clips selected from films, 2-5 min in duration, designed to elicit feelings of happiness, sadness, fear, disgust, or anger. Facial movements were coded using Ekman and Friesen's (1978) Facial Action Coding System (FACS). In addition, participants rated their emotional experience on 9-point Likert scales. The PD group showed significantly less overall facial reactivity than did controls when viewing the films. The predicted Group X Condition (spontaneous vs. posed) interaction effect on smile intensity was found when PD participants with more severe disease were compared with those with milder disease and with controls. In contrast, ratings of emotional experience were similar for both groups. Depression was positively associated with emotion rating but not with measures of facial activity. Spontaneous facial expression appears to be selectively affected in PD, whereas posed expression and emotional experience remain relatively intact.
Emotions are becoming increasingly important in human-centered interaction architectures. Recognition of facial expressions, which are central to human-computer interaction, seems natural and desirable. However, facial expressions convey mixed emotions that are continuous rather than discrete and that vary from moment to moment. This paper presents a novel method of recognizing facial expressions of various internal states via manifold learning, in service of human-centered interaction studies. A critical review of widely used emotion models is given; then, facial expression features of various internal states are extracted via locally linear embedding (LLE). Recognition of facial expressions is performed along the pleasure-displeasure and arousal-sleep dimensions of a two-dimensional model of emotion. The recognition results for various internal-state expressions mapped to the embedding space via the LLE algorithm effectively represent the structural nature of the two-dimensional model of emotion. Our research thus establishes that the relationships between facial expressions of various internal states can be elaborated in the two-dimensional model of emotion via the locally linear embedding algorithm.
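The LLE step described above is available off the shelf; a minimal sketch with scikit-learn follows. The random stand-in matrix and the neighbor count are illustrative assumptions; in the paper the rows would be facial expression feature vectors.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Illustrative stand-in data: each row plays the role of one flattened
# facial-expression feature vector (100 samples, 50 features).
rng = np.random.RandomState(0)
X = rng.rand(100, 50)

# Map the high-dimensional features into a 2-D embedding, matching the
# pleasure-displeasure / arousal-sleep plane described in the abstract.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2, random_state=0)
embedding = lle.fit_transform(X)  # shape (100, 2)
```

LLE preserves each sample's local linear reconstruction from its neighbors, which is why nearby expressions in feature space remain nearby in the two-dimensional emotion plane.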
In this study, we investigated the labeling of facial expressions in French-speaking children. The participants were 137 French-speaking children, between the ages of 5 and 11 years, recruited from three elementary schools in Ottawa, Ontario, Canada. The facial expressions included expressions of happiness, sadness, fear, surprise, anger, and disgust. Participants were shown one facial expression at a time and asked to say what the stimulus person was feeling. Participants' responses were coded by two raters, who made judgments concerning the specific emotion category to which the responses belonged. Five- and 6-year-olds were quite accurate in labeling facial expressions of happiness, anger, and sadness but far less accurate for facial expressions of fear, surprise, and disgust. An improvement in accuracy as a function of age was found for fear and surprise only. Labeling facial expressions of disgust proved to be very difficult for the children, even for the 11-year-olds. In order to examine the fit between the model proposed by Widen and Russell (2003) and our data, we looked at the number of participants who had the predicted response patterns. Overall, 88.52% of the participants did. Most of the participants used between 3 and 5 labels, with correspondence percentages varying between 80.00% and 100.00%. Our results suggest that the model proposed by Widen and Russell is not limited to English-speaking children, but also accounts for the sequence of emotion labeling in French-Canadian children.
The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun
The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…
Shioiri, T; Someya, T; Helmeste, D; Tang, S W
Accurately recognizing facial emotional expressions is important in psychiatrist-patient interactions. This may be difficult when the physician and patients are from different cultures. More than two decades of research on facial expressions have documented the universality of the emotions of anger, contempt, disgust, fear, happiness, sadness, and surprise. In contrast, some research data support the concept that there are significant cultural differences in the judgment of emotion. In this pilot study, the recognition of emotional facial expressions by 123 Japanese subjects was evaluated using the Japanese and Caucasian Facial Expression of Emotion (JACFEE) photos. The results indicated that Japanese subjects experienced difficulties in recognizing some emotional facial expressions and misinterpreted others, relative to the emotions depicted by the posers, when compared to previous studies using American subjects. Interestingly, the sex and cultural background of the poser did not appear to influence the accuracy of recognition. The data suggest that in this young Japanese sample, judgments of certain emotional facial expressions differed significantly from those of Americans. Further exploration in this area is warranted, given its importance in cross-cultural clinician-patient interactions.
Neely, John Gail; Lisker, Paul; Drapekin, Jesse
The objective of this study was to evaluate laterality and upper/lower-face dominance of expressiveness during prescribed speech, using a unique validated image-subtraction system capable of sensitive and reliable measurement of facial surface deformation. Observations and experiments on central control of facial expressions during speech and social utterances in humans and animals suggest that the right side of the mouth moves more than the left during nonemotional speech. However, proficient lip readers seem to attend to the whole face to interpret meaning from expressed facial cues, also implicating a horizontal (upper face versus lower face) axis. Design: prospective experimental study. Experimental maneuver: recited speech. Main outcome measure: image-subtraction strength-duration curve amplitude. Thirty normal human adults were evaluated during memorized nonemotional recitation of two short sentences. Facial movements were assessed using a video-image subtraction system capable of simultaneously measuring upper and lower specific areas of each hemiface. The results demonstrate that both axes influence facial expressiveness in human communication; however, the horizontal axis (upper versus lower face) appears dominant, especially during what would appear to be spontaneous breakthrough unplanned expressiveness. These data are congruent with the concept that the left cerebral hemisphere controls nonemotionally stimulated speech; however, the multisynaptic brainstem extrapyramidal pathways may override hemiface laterality and preferentially take control of the upper face. Additionally, these data demonstrate the importance of the often-ignored brow in facial expressiveness. Experimental study; EBM levels not applicable.
Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei
Facial expression recognition is a currently active research topic in the fields of computer vision, pattern recognition, and artificial intelligence. In this paper, we develop a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset, formed by merging its training, validation, and test sets, and fine-tune it on the extended Cohn-Kanade database. To reduce overfitting, we utilize several techniques, including dropout and batch normalization, in addition to data augmentation. According to the experimental results, our CNN model has excellent classification performance and robustness for facial expression recognition.
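A network of the kind described above, combining convolutions with batch normalization and dropout for 7-way classification of 48x48 grayscale FER2013-style inputs, can be sketched in PyTorch. The layer sizes and depth below are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    """Minimal sketch of a CNN for 7-class facial expression recognition
    on 48x48 grayscale inputs (FER2013-style images)."""

    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(2),                       # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.MaxPool2d(2),                       # 24 -> 12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(0.5),                       # overfitting control, as in the paper
            nn.Linear(64 * 12 * 12, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = ExpressionCNN()
logits = model(torch.zeros(4, 1, 48, 48))  # shape (4, 7): one score per emotion class
```

Data augmentation (random crops, horizontal flips) would be applied in the data loader rather than in the model, which is why it does not appear in the module itself.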
Olszanowski, M.; Pochwatko, G.; Kuklinski, K.; Scibor-Rylski, M.; Lewinski, P.; Ohme, R.K.
Emotional facial expressions play a critical role in theories of emotion and figure prominently in research on almost every aspect of emotion. This article provides a background for a new database of basic emotional expressions. The goal in creating this set was to provide high quality photographs
Carlos Felipe Pardo-Vélez
Gender differences in the recognition of facial expressions of anger, happiness, and sadness were researched in students 18-25 years of age. A reaction-time procedure was used, and the percentage of correct answers during recognition was also measured. Though the working hypothesis expected gender differences in facial expression recognition, results suggest that these differences are not significant at a level of 0.05. Statistical analysis shows a greater facility (at a non-significant level) for women to recognize happiness expressions, and for men to recognize anger expressions. The implications of these data are discussed, along with possible extensions of this investigation in terms of the sample size and college major of the participants.
Muhammed Tayyib Kadak
Autism is a genetically transmitted neurodevelopmental disorder characterized by severe and persistent deficits in many interpersonal domains, such as communication, social interaction, and emotional responsiveness. Patients with autism show deficits in face recognition, eye contact, and recognition of emotional expressions. Both face recognition and the recognition of facial emotional expression depend on face processing. Structural and functional impairment in the fusiform gyrus, amygdala, superior temporal sulcus, and other brain regions leads to deficits in the recognition of faces and facial emotion. Studies therefore suggest that face-processing deficits result in problems in the areas of social interaction and emotion in autism. Studies have revealed that children with autism have problems recognizing facial expressions and use the mouth region more than the eye region. It has also been shown that autistic patients interpret ambiguous expressions as negative emotions. In autism, deficits at various stages of face processing, such as gaze detection, face identity processing, and recognition of emotional expression, have been identified so far. Social interaction impairments in autism spectrum disorders originate from face-processing deficits during infancy, childhood, and adolescence. Face recognition and the recognition of facial emotional expression may be shaped either automatically, by orienting toward faces after birth, or by learning processes in developmental periods, such as identity and emotion processing. This article reviews the neurobiological basis of face processing and the recognition of emotional facial expressions during normal development and in autism.
The present paper explored the relationship between emotional facial responses and electromyographic modulation in children when they observe facial expressions of emotion. Facial responsiveness (evaluated by arousal and valence ratings) and its psychophysiological correlates (facial electromyography, EMG) were analyzed while children looked at six facial expressions of emotion (happiness, anger, fear, sadness, surprise, and disgust). For the EMG measure, corrugator and zygomatic muscle activity was monitored in response to the different emotion types. ANOVAs showed differences in both EMG and facial responses across subjects as a function of the different emotions. Specifically, some emotions (such as happiness, anger, and fear) were well expressed by all subjects in terms of high arousal, whereas others (such as sadness) elicited lower arousal. Zygomatic activity increased mainly for happiness, whereas corrugator activity increased mainly for anger, fear, and surprise. More generally, EMG and facial behavior were highly correlated with each other, showing a "mirror" effect with respect to the observed faces.
Psychopathic individuals show selfish, manipulative, and antisocial behavior in addition to emotional detachment and reduced empathy. Their empathic deficits are thought to be associated with a reduced responsiveness to emotional stimuli. Immediate facial muscle responses to the emotional expressions of others reflect the expressive part of emotional responsiveness and are positively related to trait empathy. Empirical evidence for reduced facial muscle responses to others' emotional expressions in adult psychopathic individuals is rare. In the present study, 261 male criminal offenders and non-offenders categorized dynamically presented facial emotion expressions (angry, happy, sad, and neutral) during facial electromyography recording of their corrugator muscle activity. We replicated a measurement model of facial muscle activity, which controls for general facial responsiveness to face stimuli, and modeled three correlated emotion-specific factors (anger, happiness, and sadness) representing emotion-specific activity. In a multi-group confirmatory factor analysis, we compared the means of the anger, happiness, and sadness latent factors between three groups: (1) non-offenders, (2) offenders low in psychopathy, and (3) offenders high in psychopathy. There were no significant mean differences between groups. Our results challenge current theories that focus on deficits in emotional responsiveness as leading to the development of psychopathy, and encourage further theoretical development on deviant emotional processes in psychopathic individuals.
Facial mimicry is the spontaneous response to others' facial expressions, mirroring or matching the interaction partner. Recent evidence suggests that mimicry may not be only an automatic reaction but may depend on many factors, including the social context, the type of task in which the participant is engaged, and stimulus properties (dynamic vs. static presentation). In the present study, we investigated the impact of dynamic facial expressions and sex differences on facial mimicry and judgments of emotional intensity. Electromyographic activity was recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. Ratings of the emotional intensity of the facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared with static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major activity in response to dynamic happiness stimuli than to static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing.
Carolina Baptista Menezes
Recognizing emotional expressions is enabled by a fundamental sociocognitive mechanism of human nature. This study compared 114 women and 104 men on the identification of basic emotions in a recognition task that used faces culturally adapted and validated for the Brazilian context. It was also investigated whether gender differences in emotion recognition would vary with different exposure times. Women were generally better at detecting facial expressions, but an interaction suggested that the female superiority was particularly observed for anger, disgust, and surprise; results did not change with age or exposure time. Regardless of sex, total accuracy improved as presentation times increased, but only fear and anger significantly differed between the presentation times. Hence, in addition to supporting the evolutionary hypothesis of female superiority in detecting facial expressions of emotion, the results show that recognition of facial expressions also depends on the time available to correctly identify an expression.
Schneider, Kristin G; Hempel, Roelie J; Lynch, Thomas R
Successful interpersonal functioning often requires both the ability to mask inner feelings and the ability to accurately recognize others' expressions--but what if effortful control of emotional expressions impacts the ability to accurately read others? In this study, we examined the influence of self-controlled expressive suppression and mimicry on facial affect sensitivity--the speed with which one can accurately identify gradually intensifying facial expressions of emotion. Muscle activity of the brow (corrugator, related to anger), upper lip (levator, related to disgust), and cheek (zygomaticus, related to happiness) were recorded using facial electromyography while participants randomized to one of three conditions (Suppress, Mimic, and No-Instruction) viewed a series of six distinct emotional expressions (happiness, sadness, fear, anger, surprise, and disgust) as they morphed from neutral to full expression. As hypothesized, individuals instructed to suppress their own facial expressions showed impairment in facial affect sensitivity. Conversely, mimicry of emotion expressions appeared to facilitate facial affect sensitivity. Results suggest that it is difficult for a person to be able to simultaneously mask inner feelings and accurately "read" the facial expressions of others, at least when these expressions are at low intensity. The combined behavioral and physiological data suggest that the strategies an individual selects to control his or her own expression of emotion have important implications for interpersonal functioning.
Bloom, Elana; Heath, Nancy
Children with nonverbal learning disabilities (NVLD) have been found to be worse at recognizing facial expressions than children with verbal learning disabilities (LD) and without LD. However, little research has been done with adolescents. In addition, expressing and understanding facial expressions is yet to be studied among adolescents with LD…
Liu, Tongran; Xiao, Tong; Li, Xiaoyan
The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels show different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the number of central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two… Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions, to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were…
Pecchinenda, Anna; Petrucci, Manuel
Direction of eye gaze cues spatial attention, and typically this cueing effect is not modulated by the expression of a face unless top-down processes are explicitly or implicitly involved. To investigate the role of cognitive control on gaze cueing by emotional faces, participants performed a gaze cueing task with happy, angry, or neutral faces under high (i.e., counting backward by 7) or low cognitive load (i.e., counting forward by 2). Results show that high cognitive load enhances gaze cueing effects for angry facial expressions. In addition, cognitive load reduces gaze cueing for neutral faces, whereas happy facial expressions and gaze affected object preferences regardless of load. This evidence clearly indicates a differential role of cognitive control in processing gaze direction and facial expression, suggesting that under typical conditions, when we shift attention based on social cues from another person, cognitive control processes are used to reduce interference from emotional information.
Palermo, Romina; Willis, Megan L; Rivolta, Davide; McKone, Elinor; Wilson, C Ellie; Calder, Andrew J
We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and 'social'). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability. Copyright © 2011 Elsevier Ltd. All rights reserved.
Sato, Wataru; Sawada, Reiko; Uono, Shota; Yoshimura, Sayaka; Kochiyama, Takanori; Kubota, Yasutaka; Sakihama, Morimitsu; Toichi, Motomi
The detection of emotional facial expressions plays an indispensable role in social interaction. Psychological studies have shown that typically developing (TD) individuals more rapidly detect emotional expressions than neutral expressions. However, it remains unclear whether individuals with autistic phenotypes, such as autism spectrum disorder (ASD) and high levels of autistic traits (ATs), are impaired in this ability. We examined this by comparing TD and ASD individuals in Experiment 1 and individuals with low and high ATs in Experiment 2 using the visual search paradigm. Participants detected normal facial expressions of anger and happiness and their anti-expressions within crowds of neutral expressions. In Experiment 1, reaction times were shorter for normal angry expressions than for anti-expressions in both TD and ASD groups. This was also the case for normal happy expressions vs. anti-expressions in the TD group but not in the ASD group. Similarly, in Experiment 2, the detection of normal vs. anti-expressions was faster for angry expressions in both groups and for happy expressions in the low, but not high, ATs group. These results suggest that the detection of happy facial expressions is impaired in individuals with ASD and high ATs, which may contribute to their difficulty in creating and maintaining affiliative social relationships.
Duncan, Justin; Gosselin, Frédéric; Cobarro, Charlène; Dugas, Gabrielle; Blais, Caroline; Fiset, Daniel
Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify utilization of the orientation spectrum by the visual system in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic (i.e., task-relevant) orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for expressions, surprise excepted. We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, the link with local features remains unexplored. We were thus also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. This way, we were able to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.
V A Barabanschikov
The authors discuss the problem of the perception of facial expressions produced by configural features. Experimentally identified configural features were found to influence the perception of emotional expression in a subjectively emotionless face. Classical results by E. Brunswik on the perception of schematic faces are partly confirmed.
Leppanen, Jenni; Dapelo, Marcela Marin; Davies, Helen; Lang, Katie; Treasure, Janet; Tchanturia, Kate
Problems with social-emotional processing are known to be an important contributor to the development and maintenance of eating disorders (EDs). Diminished facial communication of emotion has been frequently reported in individuals with anorexia nervosa (AN). Less is known about facial expressivity in bulimia nervosa (BN) and in people who have recovered from AN (RecAN). This study aimed to pilot the use of computerised facial expression analysis software to investigate emotion expression across the ED spectrum and recovery in a large sample of participants. 297 participants with AN, BN, RecAN, and healthy controls were recruited. Participants watched film clips designed to elicit happy or sad emotions, and facial expressions were then analysed using FaceReader. The finding mirrored those from previous work showing that healthy control and RecAN participants expressed significantly more positive emotions during the positive clip compared to the AN group. There were no differences in emotion expression during the sad film clip. These findings support the use of computerised methods to analyse emotion expression in EDs. The findings also demonstrate that reduced positive emotion expression is likely to be associated with the acute stage of AN illness, with individuals with BN showing an intermediate profile.
Begeer, Sander; Rieffe, Carolien; Terwogt, Mark Meerum; Stockmann, Lex
High-functioning children in the autism spectrum are frequently noted for their impaired attention to facial expressions of emotions. In this study, we examined whether attention to emotion cues in others could be enhanced in children with autism, by varying the relevance of children's attention to emotion expressions. Twenty-eight…
H. Dibeklioğlu; A.A. Salah (Albert Ali); L. Akarun
Automatic localization of 3D facial features is important for face recognition, tracking, modeling and expression analysis. Methods developed for 2D images were shown to have problems working across databases acquired under different illumination conditions. Expression variations, pose…
Gordon, Iris; Pierce, Matthew D.; Bartlett, Marian S.; Tanaka, James W.
Children with autism spectrum disorder (ASD) show deficits in their ability to produce facial expressions. In this study, a group of children with ASD and IQ-matched, typically developing (TD) children were trained to produce "happy" and "angry" expressions with the FaceMaze computer game. FaceMaze uses an automated computer…
G. A. Kukharev
In this paper, a method for generating standard-type linear barcodes from facial images is proposed. The method is based on the histogram of facial image brightness: the histogram is averaged over a limited number of intervals, the results are quantized into a range of decimal numbers from 0 to 9, and a table conversion produces the final barcode. The proposed solution is computationally low-cost and does not require specialized image-processing software, which allows facial barcodes to be generated on mobile systems; the proposed method can thus be interpreted as an express method. Results of tests on the Face94 and CUHK Face Sketch FERET databases showed that the proposed method is a new solution for real-world practice and ensures the stability of the generated barcodes under changes of scale, pose, and mirroring of a facial image, as well as changes of facial expression and shadows on faces from local lighting. Because the proposed method generates a standard barcode directly from the facial image, the barcode contains subjective information about a person's face.
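The histogram-to-barcode pipeline described above can be sketched roughly as follows. This is a minimal illustration of the idea (brightness histogram, interval averaging, quantization to digits 0-9), not the authors' implementation; the function name `face_barcode` and the interval count are illustrative choices:

```python
def face_barcode(gray_pixels, n_intervals=16):
    """Sketch of the histogram-based barcode idea: (1) brightness
    histogram of an 8-bit grayscale image, (2) averaging over a
    limited number of intervals, (3) quantization to digits 0-9."""
    # 1) 256-bin brightness histogram.
    hist = [0] * 256
    for p in gray_pixels:
        hist[p] += 1
    # 2) Average the histogram over n_intervals coarse intervals.
    step = 256 // n_intervals
    coarse = [sum(hist[i:i + step]) / step for i in range(0, 256, step)]
    # 3) Quantize each interval value to a decimal digit 0..9.
    top = max(coarse) or 1
    return "".join(str(round(9 * c / top)) for c in coarse)

# Toy example: a flat list of pixel values standing in for a face image.
pixels = [(i * 37) % 256 for i in range(64 * 64)]
code = face_barcode(pixels)
```

The resulting digit string can then be rendered as a standard linear barcode by any off-the-shelf barcode encoder.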
Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija
The recognition of basic emotions in everyday communication involves the interpretation of different visual and auditory cues. The ability to recognize emotions is not easy to determine, as their presentation is usually very short (micro-expressions), and the recognition itself does not have to be a conscious process. We assumed that recognition of emotions from facial expressions is favored over recognition of emotions communicated through music. To compare the success rates in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey of 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music with 8 offered emotions. Recognition of emotions expressed through classical music pieces was significantly less successful than recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized when presented on human faces than in music, possibly because understanding facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition was presumably selected for through the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive capacities, such as attention, memory and motivation. Music pieces are probably processed differently in the brain than facial expressions and, consequently, evaluated differently as relevant emotional cues.
Price, Tom F; Hortensius, Ruud; Harmon-Jones, Eddie
Past research associated relative left frontal cortical activity with positive affect and approach motivation, or the urge to move toward a stimulus. Less work has examined relative left frontal activity and positive emotions ranging from low to high approach motivation, to test whether positive affects that differ in approach motivational intensity influence relative left frontal cortical activity. Participants in the present experiment adopted determination (high approach positive), satisfaction (low approach positive), or neutral facial expressions while electroencephalographic (EEG) activity was recorded. Next, participants completed a task measuring motivational persistence behavior and then they completed self-report emotion questionnaires. Determination compared to satisfaction and neutral facial expressions caused greater relative left frontal activity relative to baseline EEG recordings. Facial expressions did not directly influence task persistence. However, relative left frontal activity correlated positively with persistence on insolvable tasks in the determination condition. These results extend embodiment theories and motivational interpretations of relative left frontal activity. Published by Elsevier B.V.
Keltner, D; Moffitt, T E; Stouthamer-Loeber, M
On the basis of the widespread belief that emotions underpin psychological adjustment, the authors tested 3 predicted relations between externalizing problems and anger, internalizing problems and fear and sadness, and the absence of externalizing problems and social-moral emotion (embarrassment). Seventy adolescent boys were classified into 1 of 4 comparison groups on the basis of teacher reports using a behavior problem checklist: internalizers, externalizers, mixed (both internalizers and externalizers), and nondisordered boys. The authors coded the facial expressions of emotion shown by the boys during a structured social interaction. Results supported the 3 hypotheses: (a) Externalizing adolescents showed increased facial expressions of anger, (b) on 1 measure internalizing adolescents showed increased facial expressions of fear, and (c) the absence of externalizing problems (or nondisordered classification) was related to increased displays of embarrassment. Discussion focused on the relations of these findings to hypotheses concerning the role of impulse control in antisocial behavior.
In this paper, an automatic emotion detection system is built for a computer or machine to detect the emotional state from facial expressions in human-computer communication. Firstly, dynamic motion features are extracted from facial expression videos, and then advanced machine learning methods for classification and regression are used to predict the emotional states. The system is evaluated on two publicly available datasets, i.e. GEMEP_FERA and AVEC2013, and satisfactory performance is achieved in comparison with the provided baseline results. With this emotional state detection capability, a machine can read the facial expression of its user automatically. This technique can be integrated into applications such as smart robots, interactive games and smart surveillance systems.
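The classification stage described above (motion-feature vectors in, emotion label out) can be illustrated with a toy stand-in. The actual system uses advanced learners on real video features; the nearest-centroid rule and the feature values below are purely hypothetical:

```python
from collections import defaultdict

def nearest_centroid_predict(train, labels, x):
    """Toy stand-in for the classification stage: each motion-feature
    vector is assigned the label of the nearest class centroid."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for vec, lab in zip(train, labels):
        if sums[lab] is None:
            sums[lab] = list(vec)
        else:
            sums[lab] = [s + v for s, v in zip(sums[lab], vec)]
        counts[lab] += 1
    # Per-class centroids of the training feature vectors.
    centroids = {lab: [s / counts[lab] for s in sums[lab]] for lab in sums}
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], x))

# Hypothetical 2-D motion features (e.g., mouth-corner and brow velocity).
train = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8)]
labels = ["happy", "happy", "fear", "fear"]
pred = nearest_centroid_predict(train, labels, (0.85, 0.15))
```

In practice the feature extraction and the learner (e.g., SVMs or regression models) would be far richer, but the input/output contract is the same.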
Sonia M. Leach
This article contains data related to the research articles "Spatial and Temporal Analysis of Gene Expression during Growth and Fusion of the Mouse Facial Prominences" (Feng et al., 2009) and "Systems Biology of facial development: contributions of ectoderm and mesenchyme" (Hooper et al., 2017, in press). Embryonic mammalian craniofacial development is a complex process involving the growth, morphogenesis, and fusion of distinct facial prominences into a functional whole. Aberrant gene regulation during this process can lead to severe craniofacial birth defects, including orofacial clefting. As a means to understand the genes involved in facial development, we had previously dissected the embryonic mouse face into distinct prominences: the mandibular, maxillary or nasal, between E10.5 and E12.5. The prominences were then processed intact, or separated into ectoderm and mesenchyme layers, prior to analysis of RNA expression using microarrays (Feng et al., 2009; Hooper et al., 2017, in press) [1,2]. Here, individual gene expression profiles have been built from these datasets that illustrate the timing of gene expression in whole prominences or in the separated tissue layers. The data profiles are presented as an indexed and clickable list of the genes, each linked to a graphical image of that gene's expression profile in the ectoderm, mesenchyme, or intact prominence. These data files will enable investigators to obtain a rapid assessment of the relative expression level of any gene on the array with respect to time, tissue, prominence, and expression trajectory.
We spend much of our waking lives interacting with other people, reading their facial expressions to figure out what they might be feeling, thinking, or intending to do next (Ekman 1994; Fridlund 1994). At the same time, we also express our own feelings, thoughts, and intentions through facial
Face identity and facial expression are processed in two distinct neural pathways. However, most of the existing face adaptation literature studies them separately, despite the fact that they are two aspects of the same face. The current study conducted a systematic comparison between these two aspects by face adaptation, investigating how top- and bottom-half face parts contribute to the processing of face identity and facial expression. A real face (sad, "Adam") and its two size-equivalent face parts (top- and bottom-half) were used as the adaptor in separate conditions. For face identity adaptation, the test stimuli were generated by morphing Adam's sad face with another person's sad face ("Sam"). For facial expression adaptation, the test stimuli were created by morphing Adam's sad face with his neutral face and morphing the neutral face with his happy face. In each trial, after exposure to the adaptor, observers indicated the perceived face identity or facial expression of the following test face via a key press. They were also tested in a baseline condition without adaptation. Results show that the top- and bottom-half face each generated a significant face identity aftereffect. However, the aftereffect from top-half face adaptation is much larger than that from the bottom-half face. On the contrary, only the bottom-half face generated a significant facial expression aftereffect. This dissociation of top- and bottom-half face adaptation suggests that face parts play different roles in face identity and facial expression. It thus provides further evidence for the distributed systems of face perception.
Hoffmann, Holger; Kessler, Henrik; Eppel, Tobias; Rukavina, Stefanie; Traue, Harald C
Two experiments were conducted in order to investigate the effect of expression intensity on gender differences in the recognition of facial emotions. The first experiment compared recognition accuracy between female and male participants when emotional faces were shown with full-blown (100% emotional content) or subtle expressiveness (50%). In a second experiment more finely grained analyses were applied in order to measure recognition accuracy as a function of expression intensity (40%-100%). The results show that although women were more accurate than men in recognizing subtle facial displays of emotion, there was no difference between male and female participants when recognizing highly expressive stimuli. Copyright © 2010 Elsevier B.V. All rights reserved.
Gerlicher, Anna M V; van Loon, Anouk M; Scholte, H Steven; Lamme, Victor A F; van der Leij, Andries R
In human social interactions, facial emotional expressions are a crucial source of information. Repeatedly presented information typically leads to an adaptation of neural responses. However, processing seems sustained for emotional facial expressions. Therefore, we tested whether sustained processing of emotional expressions, especially threat-related expressions, would attenuate neural adaptation. Neutral and emotional expressions (happy, mixed and fearful) of same and different identity were presented at 3 Hz. We used electroencephalography to record the evoked steady-state visual potentials (ssVEP) and tested to what extent the ssVEP amplitude adapts to the same, as compared with different, face identities. We found adaptation to the identity of a neutral face. However, for emotional faces, adaptation was reduced, decreasing linearly with negative valence, with the least adaptation to fearful expressions. This short and straightforward method may prove to be a valuable new tool in the study of emotional processing.
Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A
Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.
Assogna, Francesca; Pontieri, Francesco E; Caltagirone, Carlo; Spalletta, Gianfranco
A limited number of studies in Parkinson's Disease (PD) suggest a disturbance in the recognition of facial emotion expressions. In particular, disgust recognition impairment has been reported in unmedicated and medicated PD patients. However, the results are rather inconclusive regarding the degree and selectivity of the emotion recognition impairment, and an associated impairment of almost all basic facial emotions in PD has also been described. Few studies have investigated the relationship with neuropsychiatric and neuropsychological symptoms, with mainly negative results. This inconsistency may be due to many different problems, such as emotion assessment, perception deficits, cognitive impairment, behavioral symptoms, illness severity and antiparkinsonian therapy. Here we review the clinical characteristics and neural structures involved in the recognition of specific facial emotion expressions, and the plausible role of dopamine transmission and dopamine replacement therapy in these processes. It is clear that future studies should be directed to clarify all these issues.
Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.
Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter Set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular, Whissell suggested that emotions are points in a space, which seems to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissell's; instead they tend to form an approximately circular pattern, called the 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be defined as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP points and the activation and angular parameters we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
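The idea of creating intermediate expressions by interpolating parameter movements between neighboring archetypal expressions on the emotion wheel can be sketched as follows. The wheel angles and the FAP-like amplitude values here are illustrative assumptions, not the paper's actual FDP data:

```python
# Illustrative angular positions (degrees) of archetypal emotions on an
# "emotion wheel"; the exact angles are an assumption for this sketch.
WHEEL = {"joy": 0, "surprise": 60, "fear": 120,
         "sadness": 180, "disgust": 240, "anger": 300}

# Toy FAP-like amplitude profiles for two neighboring archetypes
# (hypothetical values, e.g. lip-corner and eyebrow parameters).
FAPS = {"joy": [0.9, 0.1], "surprise": [0.3, 0.8]}

def intermediate_expression(angle_deg):
    """Linearly interpolate FAP-like amplitudes between the two
    neighboring archetypes that bracket the requested wheel angle
    (only the 0-60 degree joy/surprise sector is handled here)."""
    lo, hi = "joy", "surprise"
    t = (angle_deg - WHEEL[lo]) / (WHEEL[hi] - WHEEL[lo])
    return [a + t * (b - a) for a, b in zip(FAPS[lo], FAPS[hi])]

mid = intermediate_expression(30)  # halfway between joy and surprise
```

A full implementation would cover all six sectors of the wheel and drive the corresponding FDP points, but the per-sector interpolation step is the one shown here.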
Young, Steven G; Hugenberg, Kurt
The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on this literature and extends this work. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion-identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Dimitri J Bayle
BACKGROUND: In everyday life, signals of danger, such as aversive facial expressions, usually appear in the peripheral visual field. Although facial expression processing in central vision has been extensively studied, processing in peripheral vision has received little study. METHODOLOGY/PRINCIPAL FINDINGS: Using behavioral measures, we explored the human ability to detect fear and disgust vs. neutral expressions and compared it to the ability to discriminate between genders at eccentricities up to 40°. Responses were faster for the detection of emotion compared to gender. Emotion was detected from fearful faces up to 40° of eccentricity. CONCLUSIONS: Our results demonstrate the human ability to detect facial expressions presented in the far periphery, up to 40° of eccentricity. The increasing advantage of emotion over gender processing with increasing eccentricity might reflect a major involvement of the magnocellular visual pathway in facial expression processing. This advantage may suggest that emotion detection, relative to gender identification, is less affected by visual acuity and within-face crowding in the periphery. These results are consistent with specific and automatic processing of danger-related information, which may drive attention to those messages and allow a fast behavioral reaction.
Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao
Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction and many other fields. Facial expressions are captured with a Kinect camera. An AAM algorithm based on statistical information is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points. Facial feature points are detected automatically, while the feature points of the 3D cartoon model are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation, non-feature points are interpolated based on empirical models, with the mapping and interpolation constrained by Bézier curves. Thus, the feature points on the cartoon face model can be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the proposed method accurately simulates facial expressions. Finally, our method is compared with the previous method; the data show that our method greatly improves implementation efficiency.
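The Bézier-constrained interpolation of non-feature points mentioned above can be illustrated with a plain cubic Bézier evaluation; the control points below are made up for illustration and do not come from the paper:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter t in [0, 1].
    Control points are (x, y) pairs, as when constraining interpolated
    non-feature points between tracked facial landmarks."""
    u = 1.0 - t
    return tuple(
        u**3 * a + 3 * u**2 * t * b + 3 * u * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Interpolate a contour point between two tracked landmarks at t = 0.5
# (all coordinates here are hypothetical).
pt = cubic_bezier((0, 0), (1, 2), (3, 2), (4, 0), 0.5)
```

Sampling t over [0, 1] yields the smooth in-between positions that keep interpolated non-feature points on a plausible facial contour as the tracked feature points move.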
Koda, Tomoko; Rehm, Matthias; André, Elisabeth
The goal of the study is to investigate cultural differences in avatar expression evaluation and apply findings from psychological studies of human facial expression recognition. Our previous study using Japanese-designed avatars showed that there are cultural differences in interpreting avatar facial expressions, and that the psychological theory suggesting physical proximity affects facial expression recognition accuracy is also applicable to avatar facial expressions. This paper summarizes the early results of a successive experiment that uses Western-designed avatars. We observed tendencies of cultural…
Empathic concern for others' pain is a product of evolutionary adaptation. Focusing on 5- to 6-year-old children, the current study employed eye-tracking in an odd-one-out task (searching for the emotional facial expression among neutral expressions, N = 47) and a pain evaluation task (evaluating the pain intensity of a facial expression, N = 42) to investigate the relationship between children's empathy and their behavioral and perceptual responses to facial pain expressions. We found that children detected painful expressions faster than other expressions (angry, sad, and happy); children high in empathy performed better at searching for the facial expression of pain and gave higher evaluations of pain intensity; and ratings of pain in painful expressions were best predicted by a self-reported empathy score. As for eye-tracking during pain detection, children fixated on pain more quickly, less frequently, and for shorter times. Of the facial cues, children fixated on the eyes and mouth more quickly, more frequently, and for longer times. These results imply that painful facial expressions are processed differently from others in a cognitive sense, and that children's empathy may facilitate their search and lead them to perceive the intensity of observed pain as higher.
Guyer, Amanda E.; McClure, Erin B.; Adler, Abby D.; Brotman, Melissa A.; Rich, Brendan A.; Kimes, Alane S.; Pine, Daniel S.; Ernst, Monique; Leibenluft, Ellen
Background: We examined whether face-emotion labeling deficits are illness-specific or an epiphenomenon of generalized impairment in pediatric psychiatric disorders involving mood and behavioral dysregulation. Method: Two hundred fifty-two youths (7-18 years old) completed child and adult facial expression recognition subtests from the Diagnostic…
Camras, Linda A.
To make the point that infant emotions are more dynamic than suggested by Differential Emotions Theory, which maintains that infants show the same prototypical facial expressions for emotions as adults do, this paper explores two questions: (1) when infants experience an emotion, do they always show the corresponding prototypical facial…
Kuilenburg, H. van; Wiering, M.A.; Uyl, M. den
Automatic facial expression recognition is a research topic with interesting applications in the field of human-computer interaction, psychology and product marketing. The classification accuracy for an automatic system which uses static images as input is however largely limited by the image
Soleymani, Mohammad; Asghari-Esfeden, Sadjad; Pantic, Maja; Fu, Yun
Emotions play an important role in how we select and consume multimedia. Recent advances on affect detection are focused on detecting emotions continuously. In this paper, for the first time, we continuously detect valence from electroencephalogram (EEG) signals and facial expressions in response to
Recently, real-time facial expression recognition has attracted more and more research. In this study, an automatic real-time facial expression recognition system was built and tested. Firstly, the system and model were designed and tested in a MATLAB environment, followed by a MATLAB Simulink environment capable of recognizing continuous facial expressions in real time at a rate of 1 frame per second, implemented on a desktop PC. They were evaluated on a public dataset, and the experimental results were promising. The dataset and labels used in this study were made from videos, which were recorded twice from five participants while watching a video. Secondly, in order to run in real time at a faster frame rate, the facial expression recognition system was built on a field-programmable gate array (FPGA). The camera sensor used in this work was a Digilent VmodCAM stereo camera module. The model was built on the Atlys™ Spartan-6 FPGA development board. It can continuously perform emotional state recognition in real time at a frame rate of 30 frames per second. A graphical user interface was designed to display the participant's video and the two-dimensional predicted emotion labels in real time.
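The abstract does not specify the recognition model itself, so the following is only a hypothetical sketch of the final per-frame classification stage such a pipeline needs; the nearest-centroid classifier and the toy two-dimensional features are assumptions for illustration.

```python
import numpy as np

def train_centroids(features, labels):
    """Average the feature vectors of each expression class."""
    return {lab: np.mean([f for f, l in zip(features, labels) if l == lab],
                         axis=0)
            for lab in set(labels)}

def classify_frame(centroids, feat):
    """Label one frame's feature vector by its nearest class centroid."""
    return min(centroids, key=lambda lab: np.linalg.norm(feat - centroids[lab]))

# toy 2-D features standing in for real per-frame facial descriptors
train_x = [np.array([1.0, 0.0]), np.array([0.9, 0.1]),
           np.array([0.0, 1.0]), np.array([0.1, 0.9])]
train_y = ["happy", "happy", "sad", "sad"]
centroids = train_centroids(train_x, train_y)
pred = classify_frame(centroids, np.array([0.8, 0.2]))
```

A stage like this is cheap enough to run once per frame, which is the constraint that matters when moving from a 1 fps desktop prototype to a 30 fps FPGA implementation.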
Valstar, Michel F.; Jiang, Bihan; Mehu, Marc; Pantic, Maja; Scherer, Klaus
Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability has come some way; for instance, there exist a number of commonly
Hendriks, A.W.C.J.; Benson, P.J.; Jonkers, M.; Rietberg, S.
Whilst people with autism process many types of visual information at a level commensurate with their age, studies have shown that they have difficulty interpreting facial expressions. One of the reasons could be that autistic individuals suffer from weak central coherence, i.e., a failure to integrate parts of
Zuckerman, Miron; Przewuzman, Sylvia J.
Preschool-age children drew, decoded, and encoded facial expressions depicting five different emotions. Accuracy of drawing, decoding and encoding each of the five emotions was consistent across the three tasks; decoding ability was correlated with drawing ability among female subjects, but neither of these abilities was correlated with encoding…
This dissertation on consumer resistance to enjoyable advertisements is positioned in the areas of persuasive communication and social psychology. In this thesis, it is argued that consumers can resist persuasion by controlling their facial expressions of emotion when exposed to an advertisement.
Butt, Muhammad Naeem; Iqbal, Mohammad
The major objective of the study was to explore teachers' perceptions about the importance of facial expression in the teaching-learning process. All the teachers of government secondary schools constituted the population of the study. A sample of 40 teachers, both male and female, in rural and urban areas of district Peshawar, were selected…
Wu, Tim; Hung, Alice; Mithraratne, Kumar
This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh, are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely-isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, eyelids, and between superficial soft tissue continuum and deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.
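The full finite element head model cannot be reproduced from an abstract, but the cubic-Hermite interpolation it is built on can be shown in one dimension. The function names below are illustrative, and the example interpolates a scalar field along a single element edge.

```python
import numpy as np

def hermite_basis(xi):
    """1-D cubic-Hermite shape functions at local coordinate xi in [0, 1]:
    two weighting nodal values, two weighting nodal derivatives."""
    h00 = 2 * xi**3 - 3 * xi**2 + 1
    h10 = xi**3 - 2 * xi**2 + xi
    h01 = -2 * xi**3 + 3 * xi**2
    h11 = xi**3 - xi**2
    return np.array([h00, h10, h01, h11])

def hermite_interp(u0, du0, u1, du1, xi):
    """Interpolate a field along one element edge from nodal values
    (u0, u1) and nodal derivatives (du0, du1)."""
    return hermite_basis(xi) @ np.array([u0, du0, u1, du1])

# midpoint of an edge with values 0 and 1 and unit slopes at both ends
val = hermite_interp(0.0, 1.0, 1.0, 1.0, 0.5)
```

Because both values and derivatives are interpolated, adjacent elements share slopes across their boundaries, which is why cubic-Hermite meshes can represent smooth soft-tissue surfaces with relatively few elements.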
Leppanen, Jukka M.; Richmond, Jenny; Vogel-Farley, Vanessa K.; Moulson, Margaret C.; Nelson, Charles A.
Categorical perception, demonstrated as reduced discrimination of within-category relative to between-category differences in stimuli, has been found in a variety of perceptual domains in adults. To examine the development of categorical perception in the domain of facial expression processing, we used behavioral and event-related potential (ERP)…
Cheal, Jenna L.; Rutherford, M. D.
Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and tested preschoolers' discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum "felt the…
Kostić Aleksandra P.
The results of a study on the accuracy of intensity ratings of emotion from facial expressions are reported. Research into the field so far has shown that spontaneous facial expressions of basic emotions are a reliable source of information about the category of emotion. The question is raised whether this is also true for the intensity of emotion, and whether the accuracy of intensity ratings depends on the observer's sex and vocational orientation. A total of 228 observers of both sexes and of various vocational orientations rated the emotional intensity of presented facial expressions on a scale ranging from 0 to 8. The results supported the hypothesis that spontaneous facial expressions of basic emotions do provide sufficient information about emotional intensity. The hypothesis on the interdependence between the accuracy of intensity ratings of emotion and the observer's sex and vocational orientation was not confirmed. However, the accuracy of intensity ratings proved to vary with the category of the emotion presented.
Blumberg, Samuel H.; Izard, Carroll E.
Examined the relations between emotion and facial expressions of emotion in 8- to 12-year-old male psychiatric patients. Results indicated that patterns or combinations of emotion experiences had an impact on facial expressions of emotion. (Author/BB)
Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto
The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits
Rahim, Mohd Shafry Mohd; Rad, Abdolvahab Ehsani; Rehman, Amjad; Altameem, Ayman
Extreme expressions are emotional expressions stimulated by strong emotion; an example of such an extreme expression is one accompanied by tears. To provide these types of features, additional elements such as a fluid mechanism (a particle system) and physics techniques such as smoothed-particle hydrodynamics (SPH) are introduced. The fusion of facial animation with SPH exhibits promising results. Accordingly, the proposed fluid technique combined with facial animation is the core of this research for producing complex expressions, such as laughing, smiling, crying (the emergence of tears) or sadness escalating to strong crying, as a class of extreme expressions that occur on the human face in some cases.
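A minimal sketch of the SPH ingredient mentioned above, using the standard poly6 smoothing kernel for density estimation; the particle data and the uniform particle mass are assumptions for illustration, not the paper's parameters.

```python
import math

def poly6(r, h):
    """Poly6 SPH smoothing kernel in 3-D: W(r, h), zero beyond radius h."""
    if r < 0 or r > h:
        return 0.0
    return 315.0 / (64.0 * math.pi * h**9) * (h * h - r * r) ** 3

def density(positions, i, h, mass=1.0):
    """SPH density estimate at particle i, summed over neighbours within h."""
    xi = positions[i]
    return sum(mass * poly6(math.dist(xi, xj), h) for xj in positions)

# two nearby tear particles: each contributes to the other's density
rho = density([(0.0, 0.0, 0.0), (0.3, 0.0, 0.0)], 0, h=1.0)
```

Densities computed this way feed the pressure and viscosity forces that make the simulated tears flow over the animated face surface.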
Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula
When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.
Chen, Chien-Chung; Cho, Shu-ling; Horszowska, Katarzyna; Chen, Mei-Yen; Wu, Chia-Ching; Chen, Hsueh-Chih; Yeh, Yi-Yu; Cheng, Chao-Min
We collected 6604 images of 30 models in eight types of facial expression: happiness, anger, sadness, disgust, fear, surprise, contempt and neutral. Among them, the 406 most representative images from 12 models were rated by more than 200 human raters for perceived emotion category and intensity. Such a large number of emotion categories, models and raters is sufficient for most serious expression recognition research both in psychology and in computer science. All the models and raters are of Asian background; hence, this database can also be used when cultural background is a concern. In addition, 43 landmarks in each of the 291 rated frontal-view images were identified and recorded. This information should facilitate feature-based research on facial expression. Overall, the diversity of images and richness of information should make our database and norms useful for a wide range of research.
Yoon, HeungSik; Kim, Shin Ah; Kim, Sang Hee
An individual's responses to emotional information are influenced not only by the emotional quality of the information, but also by the context in which the information is presented. We hypothesized that facial expressions of happiness and anger would serve as primes to modulate subjective and neural responses to subsequently presented negative information. To test this hypothesis, we conducted a functional MRI study in which the brains of healthy adults were scanned while they performed an emotion-rating task. During the task, participants viewed a series of negative and neutral photos, one at a time; each photo was presented after a picture showing a face expressing a happy, angry, or neutral emotion. Brain imaging results showed that compared with neutral primes, happy facial primes increased activation during negative emotion in the dorsal anterior cingulate cortex and the right ventrolateral prefrontal cortex, which are typically implicated in conflict detection and implicit emotion control, respectively. Conversely, relative to neutral primes, angry primes activated the right middle temporal gyrus and the left supramarginal gyrus during the experience of negative emotion. Activity in the amygdala in response to negative emotion was marginally reduced after exposure to happy primes compared with angry primes. Relative to neutral primes, angry facial primes increased the subjectively experienced intensity of negative emotion. The current study results suggest that prior exposure to facial expressions of emotions modulates the subsequent experience of negative emotion by implicitly activating the emotion-regulation system.
Axe, Judah B.; Evans, Christine J.
Children with autism spectrum disorders often exhibit delays in responding to facial expressions, and few studies have examined teaching responding to subtle facial expressions to this population. We used video modeling to train 3 participants with PDD-NOS (age 5) to respond to eight facial expressions: approval, bored, calming, disapproval,…
Sheaffer, Beverly L.; Golden, Jeannie A.; Averett, Paige
The ability to recognize facial expressions of emotion is integral in social interaction. Although the importance of facial expression recognition is reflected in increased research interest as well as in popular culture, clinicians may know little about this topic. The purpose of this article is to discuss facial expression recognition literature…
Harris, Richard J.; Young, Andrew W.; Andrews, Timothy J.
Whether the brain represents facial expressions as perceptual continua or as emotion categories remains controversial. Here, we measured the neural response to morphed images to directly address how facial expressions of emotion are represented in the brain. We found that face-selective regions in the posterior superior temporal sulcus and the amygdala responded selectively to changes in facial expression, independent of changes in identity. We then asked whether the responses in these regions reflected categorical or continuous neural representations of facial expression. Participants viewed images from continua generated by morphing between faces posing different expressions such that the expression could be the same, could involve a physical change but convey the same emotion, or could differ by the same physical amount but be perceived as two different emotions. We found that the posterior superior temporal sulcus was equally sensitive to all changes in facial expression, consistent with a continuous representation. In contrast, the amygdala was only sensitive to changes in expression that altered the perceived emotion, demonstrating a more categorical representation. These results offer a resolution to the controversy about how facial expression is processed in the brain by showing that both continuous and categorical representations underlie our ability to extract this important social cue. PMID:23213218
Alvino, Christopher; Kohler, Christian; Barrett, Frederick; Gur, Raquel E; Gur, Ruben C; Verma, Ragini
Deficits in the ability to express emotions characterize several neuropsychiatric disorders and are a hallmark of schizophrenia, and there is need for a method of quantifying expression, which is currently done by clinical ratings. This paper presents the development and validation of a computational framework for quantifying emotional expression differences between patients with schizophrenia and healthy controls. Each face is modeled as a combination of elastic regions, and expression changes are modeled as a deformation between a neutral face and an expressive face. Functions of these deformations, known as the regional volumetric difference (RVD) functions, form distinctive quantitative profiles of expressions. Employing pattern classification techniques, we have designed expression classifiers for the four universal emotions of happiness, sadness, anger and fear by training on RVD functions of expression changes. The classifiers were cross-validated and then applied to facial expression images of patients with schizophrenia and healthy controls. The classification score for each image reflects the extent to which the expressed emotion matches the intended emotion. Group-wise statistical analysis revealed this score to be significantly different between healthy controls and patients, especially in the case of anger. This score correlated with clinical severity of flat affect. These results encourage the use of such deformation based expression quantification measures for research in clinical applications that require the automated measurement of facial affect.
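The deformation functions themselves require image registration, but the idea of a regional volumetric difference (RVD) profile and its matching against emotion templates can be sketched as follows; the region volumes, template values, and function names are invented for illustration and do not reflect the paper's actual measurements.

```python
import numpy as np

def rvd_profile(neutral_vols, expressive_vols):
    """Per-region relative volume change between a neutral face and an
    expressive face (a rough stand-in for the paper's RVD functions)."""
    n = np.asarray(neutral_vols, dtype=float)
    e = np.asarray(expressive_vols, dtype=float)
    return (e - n) / n

# hypothetical per-emotion template profiles over three facial regions
templates = {"happiness": np.array([0.10, -0.02, 0.05]),
             "anger": np.array([-0.05, 0.08, -0.01])}

def match_emotion(profile):
    """Assign the intended emotion whose template is nearest the profile."""
    return min(templates, key=lambda e: np.linalg.norm(profile - templates[e]))

p = rvd_profile([1.0, 1.0, 1.0], [1.09, 0.99, 1.04])
```

In the actual study the matching is done by trained pattern classifiers rather than nearest-template distance, and the resulting score is what correlates with clinical ratings of flat affect.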
The present study of 138 participants explored how facial expressions and gender stereotypes influence impressions. It was predicted that images of smiling women would be evaluated more favorably on traits reflecting warmth, and that images of non-smiling men would be evaluated more favorably on traits reflecting competence. As predicted, smiling female faces were rated as more warm; however, contrary to prediction, perceived competence of male faces was not affected by facial expression. Participants' female stereotype endorsement was a significant predictor for evaluations of female faces; those who ascribed more strongly to traditional female stereotypes reported the most positive impressions of female faces displaying a smiling expression. However, a similar effect was not found for images of men; endorsement of traditional male stereotypes did not predict participants' impressions of male faces.
Liu, Tongran; Xiao, Tong; Jiannong, Shi
Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were... in facial expressions between the two age groups. The current findings demonstrated that the adolescent group showed more negative vMMN amplitudes than the adult group in the fronto-central region during the 120–200 ms interval. During the time window of 370–450 ms, only the adult group showed better... automatic processing of fearful faces than happy faces. The present study indicated that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and sheds light on the neurodevelopment of automatic processes concerning social-emotional information.
Wagshum Techane Asheber
Social robots for daily-life activities are becoming more common, and a humanoid robot with realistic facial expressions is a strong candidate for everyday tasks. In this paper, the development of a humanoid face mechanism that generates human-like facial expressions with simplified system complexity is presented. The distinctive feature of this face robot is its use of significantly fewer actuators: only three servo motors for facial expressions and five for the rest of the head motions. This leads to effectively low energy consumption, making it suitable for applications such as mobile humanoid robots. Moreover, the modular design makes it possible to have as many face appearances as needed on one structure, and the mechanism allows expansion to generate more expressions without adding or altering components. The robot is also equipped with an audio system and a camera inside each eyeball; hearing and vision are thus utilized in localization, communication and the enhancement of expression display.
Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J
Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry-suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another.
Jack, Rachael E; Garrod, Oliver G B; Yu, Hui; Caldara, Roberto; Schyns, Philippe G
Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.
The effects of facial expressions on recognizing emotions expressed in dance movements were investigated. Dancers expressed three emotions through dance movements: joy, sadness, and anger. We used digital video cameras and a 3D motion capture system to record and capture the movements, and then created full-video displays with an expressive face, full-video displays with an unexpressive face, stick-figure displays (no face), and point-light displays (no face) from these data using 3D animation software. To make the point-light displays, 13 markers were attached to the body of each dancer. We examined how accurately observers were able to identify the expression that the dancers intended to convey through their dance movements. Dance-experienced and inexperienced observers participated in the experiment. They watched the movements and rated the compatibility of each emotion with each movement on a 5-point Likert scale. The results indicated that both experienced and inexperienced observers could identify all the emotions the dancers intended to express. Identification scores for dance movements with an expressive face were higher than for the other displays. This finding indicates that facial expressions affect the identification of emotions in dance movements, while bodily expressions alone already provide sufficient information to recognize emotions.
Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan
It is an important question how human beings achieve efficient recognition of others' facial expressions in cognitive neuroscience, and it has been identified that specific cortical regions show preferential activation to facial expressions in previous studies. However, the potential contributions of the connectivity patterns in the processing of facial expressions remained unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from the functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activities while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment with classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from the FC patterns. Moreover, we identified the expression-discriminative networks for the static and dynamic facial expressions, which span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns may also contain rich expression information to accurately decode facial expressions, suggesting a novel mechanism, which includes general interactions between distributed brain regions, and that contributes to the human facial expression recognition.
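The FC-pattern features that feed such an fcMVPA decoder can be sketched directly: correlate every pair of region time series and keep the upper triangle of the correlation matrix as one feature vector. The random time series below stand in for real region-wise fMRI signals.

```python
import numpy as np

def fc_features(timeseries):
    """Functional-connectivity pattern: Pearson correlations between all
    pairs of region time series, vectorized as the upper triangle of the
    correlation matrix (the per-condition input to an fcMVPA decoder)."""
    ts = np.asarray(timeseries, dtype=float)      # shape: (regions, timepoints)
    corr = np.corrcoef(ts)
    iu = np.triu_indices(corr.shape[0], k=1)      # off-diagonal upper triangle
    return corr[iu]

# 4 mock regions x 50 timepoints -> 4*3/2 = 6 connectivity features
rng = np.random.default_rng(0)
feats = fc_features(rng.standard_normal((4, 50)))
```

One such vector per facial-expression block, paired with its emotion label, is what a classifier would then be trained and cross-validated on.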
Kayyal, Mary H; Russell, James A
The claim that certain emotions are universally recognized from facial expressions is based primarily on the study of expressions that were posed. The current study was of spontaneous facial expressions shown by aborigines in Papua New Guinea (Ekman, 1980); 17 faces claimed to convey one (or, in the case of blends, two) basic emotions and five faces claimed to show other universal feelings. For each face, participants rated the degree to which each of the 12 predicted emotions or feelings was conveyed. The modal choice for English-speaking Americans (n = 60), English-speaking Palestinians (n = 60), and Arabic-speaking Palestinians (n = 44) was the predicted label for only 4, 5, and 4, respectively, of the 17 faces for basic emotions, and for only 2, 2, and 2, respectively, of the 5 faces for other feelings. Observers endorsed the predicted emotion or feeling moderately often (65%, 55%, and 44%), but also denied it moderately often (35%, 45%, and 56%). They also endorsed more than one (or, for blends, two) label(s) in each face: on average, 2.3, 2.3, and 1.5 of basic emotions and 2.6, 2.2, and 1.5 of other feelings. There were both similarities and differences across culture and language, but the emotional meaning of a facial expression is not well captured by the predicted label(s) or, indeed, by any single label.
Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.
As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in-depth younger and older adults’ ability to label a virtual agent’s facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a
Lin, Yuxu; Song, Mingli; Quynh, Dao Thi Phuong; He, Ying; Chen, Chun
Computer animation researchers have been extensively investigating 3D facial-expression synthesis for decades. However, flexible, robust production of realistic 3D facial expressions is still technically challenging. A proposed modeling framework applies sparse coding to synthesize 3D expressive faces, using specified coefficients or expression examples. It also robustly recovers facial expressions from noisy and incomplete data. This approach can synthesize higher-quality expressions in less time than the state-of-the-art techniques.
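A minimal sketch of synthesizing an expressive face from basis coefficients, under the simplifying assumption of plain least-squares recovery rather than the paper's sparse-coding machinery; the dictionary and coefficients below are random stand-ins for learned expression bases.

```python
import numpy as np

# A new expressive face is modeled as a linear combination of a few basis
# expressions (columns of D). The mostly-zero weight vector mimics the
# "sparse" coefficients the paper's framework would learn.
rng = np.random.default_rng(1)
D = rng.standard_normal((30, 4))          # 4 basis expressions, 30 vertex coords
coeffs = np.array([0.7, 0.0, 0.3, 0.0])   # sparse blend weights
face = D @ coeffs                         # synthesized expressive face

# Recover the weights from the observed face; with a sparsity penalty this
# step would also denoise and complete partial scan data.
recovered, *_ = np.linalg.lstsq(D, face, rcond=None)
```

The recovery direction is what makes the framework robust: given a noisy or incomplete scan, solving for coefficients over the expression dictionary projects it back onto the space of plausible faces.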
Barabanschikov, Vladimir A
We studied the preferentially fixated parts and features of the human face during recognition of facial expressions of emotion. Photographs of facial expressions were used; participants were to categorize these as basic emotions while their eye movements were registered. It was found that variation in the intensity of an expression is mirrored in the accuracy of emotion recognition, and is also reflected by several indices of oculomotor function: duration of inspection of certain areas of the face (its upper and bottom parts, right and left sides); location, number and duration of fixations; and viewing trajectory. In particular, for low-intensity expressions, the right side of the face was attended predominantly (right-side dominance); the right-side dominance effect was, however, absent for high-intensity expressions. For both low- and high-intensity expressions, the upper part of the face was predominantly fixated, though with greater fixation for high-intensity expressions. The majority of trials (70%), in line with findings in previous studies, revealed a V-shaped inspection trajectory. No relationship was found, though, between accuracy of recognition of emotional expressions and either the location and duration of fixations or the pattern of gaze directedness on the face. © The Author(s) 2015.
Flexas, Albert; Rosselló, Jaume; Christensen, Julia F; Nadal, Marcos; Olivera La Rosa, Antonio; Munar, Enric
We examined the influence of affective priming on the appreciation of abstract artworks using an evaluative priming task. Facial primes (showing happiness, disgust or no emotion) were presented under brief (Stimulus Onset Asynchrony, SOA = 20 ms) and extended (SOA = 300 ms) conditions. Differences in aesthetic liking for abstract paintings depending on the emotion expressed in the preceding primes provided a measure of the priming effect. The results showed that, for the extended SOA, artworks were liked more when preceded by happiness primes and less when preceded by disgust primes. Facial expressions of happiness, though not of disgust, exerted similar effects in the brief SOA condition. Subjective measures and a forced-choice task revealed no evidence of prime awareness in the suboptimal condition. Our results are congruent with findings showing that the affective transfer elicited by priming biases evaluative judgments, extending previous research to the domain of aesthetic appreciation.
Choi, Jeong-Won; Kim, Kiho; Lee, Jang-Han
According to the New Look theory, size perception is affected by emotional factors. Although previous studies have attempted to explain the effects of both emotion and motivation on size perception, they have failed to identify the underlying mechanisms. This study aimed to investigate the underlying mechanisms of size perception by applying attention toward facial expressions using the Ebbinghaus illusion as a measurement tool. The participants, female university students, were asked to judge the size of a target stimulus relative to the size of facial expressions (i.e., happy, angry, and neutral) surrounding the target. The results revealed that the participants perceived angry and neutral faces to be larger than happy faces. This finding indicates that individuals pay closer attention to neutral and angry faces than happy ones. These results suggest that the mechanisms underlying size perception involve cognitive processes that focus attention toward relevant stimuli and block out irrelevant stimuli.
El Haj, Mohamad; Daoudi, Mohamed; Gallouj, Karim; Moustafa, Ahmed A; Nandrino, Jean-Louis
Thanks to the current advances in the software analysis of facial expressions, there is a burgeoning interest in understanding emotional facial expressions observed during the retrieval of autobiographical memories. This review describes the research on facial expressions during autobiographical retrieval, showing distinct emotional facial expressions according to the characteristics of retrieved memories. More specifically, this research demonstrates that the retrieval of emotional memories can trigger corresponding emotional facial expressions (e.g. positive memories may trigger positive facial expressions). It also demonstrates variations of facial expressions according to specificity, self-relevance, or past versus future direction of memory construction. Besides linking research on facial expressions during autobiographical retrieval to cognitive and affective characteristics of autobiographical memory in general, this review positions this research within the broader context of research on the physiologic characteristics of autobiographical retrieval. We also provide several perspectives for clinical studies to investigate facial expressions in populations with deficits in autobiographical memory (e.g. whether autobiographical overgenerality in neurologic and psychiatric populations may trigger fewer emotional facial expressions). In sum, this review demonstrates how the evaluation of facial expressions during autobiographical retrieval may help understand the functioning and dysfunction of autobiographical memory.
Lawi, Armin; Sya'Rani Machrizzandi, M.
Facial expression is one of the behavioral characteristics of human beings. The use of a biometric technology system with facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fear, and disgusted. A Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is then used for the facial expression classification process. The MELS-SVM model, evaluated on our 185 different expression images of 10 persons, achieved a high accuracy of 99.998% using the RBF kernel.
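An illustrative sketch of this kind of pipeline (PCA features followed by a multiclass RBF-kernel SVM) is shown below; it uses scikit-learn's plain one-vs-one SVC rather than the paper's ensemble least-squares variant, and all data are random stand-ins:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Stand-in data: rows are flattened grayscale face images; labels index the
# six expression classes named above. All values here are synthetic.
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 64 * 64))      # 120 hypothetical face images
y = rng.integers(0, 6, size=120)         # happy, sad, neutral, angry, fear, disgust

# PCA ("eigenfaces") for feature extraction, then an RBF-kernel SVM.
model = make_pipeline(PCA(n_components=30), SVC(kernel="rbf", gamma="scale"))
model.fit(X, y)
pred = model.predict(X)
```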
Bui, T.D.; Heylen, Dirk K.J.; Nijholt, Antinus; Badler, N.; Thalmann, D.
Facial expressions play an important role in face-to-face communication. With the development of personal computers capable of rendering high quality graphics, computer facial animation has produced more and more realistic facial expressions to enrich human-computer communication. In this paper, we
Gladilin, Evgeny; Zachow, Stefan; Deuflhard, Peter; Hege, Hans-Christian
In addition to static soft tissue prediction, the estimation of individual facial emotion expressions is an important criterion for the evaluation of craniofacial surgery planning. In this paper, we present an approach for estimating individual facial emotion expressions on the basis of geometric models of human anatomy derived from tomographic data and finite element modeling of facial tissue biomechanics.
Balconi, Michela; Carrera, Alba
The paper explored conceptual and lexical skills with regard to emotional correlates of facial stimuli and scripts. In two different experimental phases normal and autistic children observed six facial expressions of emotions (happiness, anger, fear, sadness, surprise, and disgust) and six emotional scripts (contextualized facial expressions). In…
Chen, Chaona; Jack, Rachael E
Understanding the cultural commonalities and specificities of facial expressions of emotion remains a central goal of Psychology. However, recent progress has been stayed by dichotomous debates (e.g. nature versus nurture) that have created silos of empirical and theoretical knowledge. Now, an emerging interdisciplinary scientific culture is broadening the focus of research to provide a more unified and refined account of facial expressions within and across cultures. Specifically, data-driven approaches allow a wider, more objective exploration of face movement patterns that provide detailed information ontologies of their cultural commonalities and specificities. Similarly, a wider exploration of the social messages perceived from face movements diversifies knowledge of their functional roles (e.g. the 'fear' face used as a threat display). Together, these new approaches promise to diversify, deepen, and refine knowledge of facial expressions, and deliver the next major milestones for a functional theory of human social communication that is transferable to social robotics. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
The author's purpose was to examine children's recognition of emotional facial expressions, by comparing two types of stimulus: photographs and drawings. The author aimed to investigate whether drawings could be considered as a more evocative material than photographs, as a function of age and emotion. Five- and 7-year-old children were presented with photographs and drawings displaying facial expressions of 4 basic emotions (i.e., happiness, sadness, anger, and fear) and were asked to perform a matching task by pointing to the face corresponding to the target emotion labeled by the experimenter. The photographs we used were selected from the Radboud Faces Database and the drawings were designed on the basis of both the facial components involved in the expression of these emotions and the graphic cues children tend to use when asked to depict these emotions in their own drawings. Our results show that drawings are better recognized than photographs for sadness, anger, and fear (with no difference for happiness, due to a ceiling effect), and that the difference between the 2 types of stimuli tends to be greater for 5-year-olds than for 7-year-olds. These results are discussed in view of their implications, both for future research and for practical application.
Facial expression is an essential part of communication. For this reason, the issue of evaluating human emotions using a computer is a very interesting topic, which has gained more and more attention in recent years. It is mainly related to the possibility of applying facial expression recognition in many fields such as HCI, video games, virtual reality, and analysing customer satisfaction. Emotion determination (recognition) is often performed in 3 basic phases: face detection, facial feature extraction, and, in the last stage, expression classification. Most often one meets the so-called Ekman classification of 6 emotional expressions (or 7, including the neutral expression), as well as other types of classification: the Russell circular model, which contains up to 24, or Plutchik's Wheel of Emotions. The methods used in the three phases of the recognition process have not only improved over the last 60 years, but new methods and algorithms have also emerged, such as the Viola-Jones detector, offering greater accuracy and lower computational demands. Therefore, there are currently various solutions in the form of Software Development Kits (SDKs). In this publication, we describe the design and creation of our system for real-time emotion classification. Our intention was to create a system that would use all three phases of the recognition process and work fast and stably in real time. That is why we decided to take advantage of the existing Affectiva SDK. Using a classic webcam, we detect facial landmarks in the image automatically with the Affectiva SDK. A geometric feature-based approach is used for feature extraction: the distances between landmarks serve as features, and a brute-force method is used to select an optimal feature set. The proposed system uses a neural network algorithm for classification and recognizes 6 (respectively 7) facial expressions
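The geometric feature extraction this abstract describes (pairwise distances between detected landmarks, fed to a neural network) can be sketched as follows; the landmark coordinates and the small MLP are made-up stand-ins, not the Affectiva SDK output or the authors' network:

```python
import numpy as np
from itertools import combinations
from sklearn.neural_network import MLPClassifier

def distance_features(landmarks):
    """Pairwise Euclidean distances between facial landmarks.

    `landmarks` is an (n_points, 2) array of (x, y) positions, standing in
    for the output of a landmark detector.
    """
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

rng = np.random.default_rng(1)
n_faces, n_points = 60, 10
faces = rng.uniform(0, 100, size=(n_faces, n_points, 2))  # synthetic landmarks
X = np.array([distance_features(f) for f in faces])       # shape (60, 45)
y = rng.integers(0, 7, size=n_faces)                      # 6 expressions + neutral

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
```

With 10 landmarks there are C(10, 2) = 45 distance features per face; a brute-force feature selection, as in the abstract, would search subsets of these 45 columns.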
To address whether the recognition of emotional facial expressions is impaired in schizophrenia across different cultures, patients with schizophrenia and age-matched normal controls in France and Japan were tested with a labeling task of emotional facial expressions and a matching task of unfamiliar faces. Schizophrenia patients in both France and Japan were less accurate in labeling fearful facial expressions. There was no correlation between the scores of facial emotion labeling and face matching. These results suggest that the impaired recognition of emotional facial expressions in schizophrenia is common across different cultures.
Facial expression recognition has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. A sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% for the JAFFE and Cohn-Kanade (CK+) databases, respectively. Better cross-database performance has also been observed.
Olderbak, Sally; Hildebrandt, Andrea; Pinkpank, Thomas; Sommer, Werner; Wilhelm, Oliver
Coding of facial emotion expressions is increasingly performed by automated emotion expression scoring software; however, there is limited discussion on how best to score the resulting codes. We present a discussion of facial emotion expression theories and a review of contemporary emotion expression coding methodology. We highlight methodological challenges pertinent to scoring software-coded facial emotion expression codes and present important psychometric research questions centered on co...
Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing
Human facial expressions are a key ingredient in conveying an individual's innate emotion in communication. However, the variation of facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is established using cloud generators. With the forward cloud generator, facial expression images can be re-generated as many times as desired to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. Finally, the paper concludes with remarks.
El Haj, Mohamad; Antoine, Pascal; Nandrino, Jean Louis
There is a substantial body of research on the relationship between emotion and autobiographical memory. Using facial analysis software, our study addressed this relationship by investigating basic emotional facial expressions that may be detected during autobiographical recall. Participants were asked to retrieve 3 autobiographical memories, each of which was triggered by one of the following cue words: happy, sad, and city. The autobiographical recall was analyzed by a software for facial analysis that detects and classifies basic emotional expressions. Analyses showed that emotional cues triggered the corresponding basic facial expressions (i.e., happy facial expression for memories cued by happy). Furthermore, we dissociated episodic and semantic retrieval, observing more emotional facial expressions during episodic than during semantic retrieval, regardless of the emotional valence of cues. Our study provides insight into facial expressions that are associated with emotional autobiographical memory. It also highlights an ecological tool to reveal physiological changes that are associated with emotion and memory.
Yang, Fei; Zhang, Qian; Zheng, Chi; Qiu, Guoping
Facial expression recognition is an active research problem in computer vision. In recent years, the research has moved from the lab environment to in-the-wild circumstances, which is challenging, especially under extreme poses. Most current expression detection systems try to avoid pose effects in order to gain general applicability. In this work, we approach the problem from the opposite direction: we consider head poses explicitly and detect expressions within specific head poses. Our work includes two parts: detecting the head pose and grouping it into one pre-defined head-pose class, and recognizing facial expressions within each pose class. Our experiments show that recognition results with pose-class grouping are much better than those of direct recognition without considering poses. We combine hand-crafted features (SIFT, LBP, and geometric features) with deep learning features as the representation of the expressions; the hand-crafted features are added into the deep learning framework along with the high-level deep learning features. As a comparison, we implement SVM and random forest as the prediction models. To train and test our methodology, we labeled the face dataset with 6 basic expressions.
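The two-stage scheme described above (classify the pose first, then apply a per-pose expression classifier) can be sketched as follows; the features, pose classes, and labels are synthetic stand-ins for the SIFT/LBP/geometric and deep features the abstract mentions:

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic features standing in for the hand-crafted + deep features
# described above; pose and expression labels are made up.
rng = np.random.default_rng(7)
X = rng.normal(size=(150, 40))
pose = np.arange(150) % 3              # e.g. frontal / half-profile / profile
expr = rng.integers(0, 6, size=150)    # six basic expressions

# Stage 1: a pose classifier; stage 2: one expression classifier per pose class.
pose_clf = SVC().fit(X, pose)
expr_clfs = {p: SVC().fit(X[pose == p], expr[pose == p]) for p in range(3)}

def predict_expression(x):
    p = int(pose_clf.predict(x.reshape(1, -1))[0])   # group by pose first
    return int(expr_clfs[p].predict(x.reshape(1, -1))[0])

pred = predict_expression(X[0])
```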
Background: Recognition of emotional facial expressions is one of the psychological factors involved in obsessive-compulsive disorder (OCD) and major depressive disorder (MDD). The aim of the present study was to compare the ability to recognize emotional facial expressions in patients with obsessive-compulsive disorder and major depressive disorder. Materials and Methods: The present study is a cross-sectional, ex-post-facto investigation (causal-comparative method). Forty participants (20 patients with OCD, 20 patients with MDD) were selected through the available sampling method from the clients referred to the Tabriz Bozorgmehr clinic. Data were collected through a Structured Clinical Interview and the Recognition of Emotional Facial States test and were analyzed utilizing MANOVA. Results: The obtained results showed no significant difference between groups in the mean scores for recognizing the emotional states of surprise, sadness, happiness, and fear; however, the groups differed significantly in the mean scores for recognizing disgust and anger (p<0.05). Conclusion: Patients suffering from OCD and MDD show equal ability to recognize surprise, sadness, happiness, and fear. However, the former are less competent in recognizing disgust and anger than the latter.
Perception, cognition, and emotion do not operate along segregated pathways; rather, their adaptive interaction is supported by various sources of evidence. For instance, the aesthetic appraisal of powerful mood inducers like music can bias the facial expression of emotions towards mood congruency. In four experiments we showed similar mood-congruency effects elicited by the comfort/discomfort of body actions. Using a novel Motor Action Mood Induction Procedure, we had participants perform comfortable/uncomfortable visually guided reaches and tested them in a facial emotion identification task. Through the alleged mediation of motor-action-induced mood, action comfort enhanced the quality of the participant's global experience (a neutral face appeared happy and a slightly angry face neutral), while action discomfort made a neutral face appear angry and a slightly happy face neutral. Furthermore, uncomfortable (but not comfortable) reaching improved the sensitivity for the identification of emotional faces and reduced the identification time of facial expressions, as a possible effect of hyper-arousal from an unpleasant bodily experience.
Amijoyo Mochtar, Andi
Applications of robotics have become important for human life in recent years. Many specifications of robots have been improved and enriched with advances in technology. One of these is the humanoid robot with facial expressions that closely resemble natural human facial expressions. The purpose of this research is to compute facial expressions and to conduct tensile strength tests of silicone rubber as an artificial skin. Facial expressions were calculated by determining the dimensions, material properties, number of node elements, boundary conditions, force conditions, and analysis type. The robot's facial expression is determined by the direction and magnitude of the external force at the driven point. The expressive face of the robot is identical to the human facial expression, with the muscle structure of the face following the anatomy of the human face. For developing facial expression robots, the facial action coding system (FACS) is adopted to reproduce human expressions. Tensile strength testing is conducted to check the proportional force of the artificial skin that can be applied to future robot facial expressions. Combining the calculated and experimental results can yield a reliable and sustainable robot facial expression using silicone rubber as artificial skin.
Guo, Kun; Soornack, Yoshi; Settle, Rebecca
Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution up to 48 × 64 pixels or increasing image blur up to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity rating, increased reaction time and fixation duration, and stronger central fixation bias which was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent with less deterioration impact on happy and surprise expressions, suggesting this distortion-invariant facial expression perception might be achieved through the categorical model involving a non-linear configural combination of local facial features. Copyright © 2018 Elsevier Ltd. All rights reserved.
Jospe, Karine; Flöel, Agnes; Lavidor, Michal
Previous research has demonstrated that the Action-Observation Network (AON) is involved in both emotional-embodiment (empathy) and action-embodiment mechanisms. In this study, we hypothesized that interfering with the AON will impair action recognition and that this impairment will be modulated by empathy levels. In Experiment 1 (n = 90), participants were asked to recognize facial expressions while their facial motion was restricted. In Experiment 2 (n = 50), we interfered with the AON by applying transcranial Direct Current Stimulation to the motor cortex. In both experiments, we found that interfering with the AON impaired the performance of participants with high empathy levels; however, for the first time, we demonstrated that the interference enhanced the performance of participants with low empathy. This novel finding suggests that the embodiment module may be flexible, and that it can be enhanced in individuals with low empathy by simple manipulation of motor activation.
De Winter, François-Laurent; Zhu, Qi; Van den Stock, Jan; Nelissen, Koen; Peeters, Ronald; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu
Most face processing studies in humans show stronger activation in the right compared to the left hemisphere. Evidence is largely based on studies with static stimuli focusing on the fusiform face area (FFA). Hence, the pattern of lateralization for dynamic faces is less clear. Furthermore, it is unclear whether this property is common to human and non-human primates due to predisposing processing strategies in the right hemisphere or that alternatively left sided specialization for language in humans could be the driving force behind this phenomenon. We aimed to address both issues by studying lateralization for dynamic facial expressions in monkeys and humans. Therefore, we conducted an event-related fMRI experiment in three macaques and twenty right handed humans. We presented human and monkey dynamic facial expressions (chewing and fear) as well as scrambled versions to both species. We studied lateralization in independently defined face-responsive and face-selective regions by calculating a weighted lateralization index (LIwm) using a bootstrapping method. In order to examine if lateralization in humans is related to language, we performed a separate fMRI experiment in ten human volunteers including a 'speech' expression (one syllable non-word) and its scrambled version. Both within face-responsive and selective regions, we found consistent lateralization for dynamic faces (chewing and fear) versus scrambled versions in the right human posterior superior temporal sulcus (pSTS), but not in FFA nor in ventral temporal cortex. Conversely, in monkeys no consistent pattern of lateralization for dynamic facial expressions was observed. Finally, LIwms based on the contrast between different types of dynamic facial expressions (relative to scrambled versions) revealed left-sided lateralization in human pSTS for speech-related expressions compared to chewing and emotional expressions. To conclude, we found consistent laterality effects in human posterior STS but not
Trent D Weston
Although the body weight evaluation (e.g., normal or overweight) of others relies on perceptual impressions, it can also be influenced by other psychosocial factors. In this study, we explored the effect of task-irrelevant emotional facial expressions on judgments of body weight and the relationship between emotion-induced weight-judgment bias and other psychosocial variables, including attitudes towards obese persons. Forty-four participants were asked to quickly make binary body weight decisions for 960 randomized sad and neutral faces of varying weight levels presented on a computer screen. The results showed that sad facial expressions systematically decreased the decision threshold of overweight judgments for male faces. This perceptual decision bias by emotional expressions was positively correlated with the belief that being overweight is not under the control of obese persons. Our results provide experimental evidence that task-irrelevant emotional expressions can systematically change the decision threshold for weight judgments, demonstrating that sad expressions can make faces appear more overweight than they would otherwise be judged.
Siddiqi, Muhammad Hameed; Lee, Sungyoung; Lee, Young-Koo; Khan, Adil Mehmood; Truc, Phan Tran Ho
Over the last decade, human facial expressions recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish these expressions with a high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expressions recognition (HL-FER) system to tackle these problems. Unlike the previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a HL-FER to overcome the problem of high similarity among different expressions. Unlike most of the previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation rule based on datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. Weighted average recognition accuracy of 98.7% across three different datasets, using three classifiers, indicates the success of employing the HL-FER for human FER. PMID:24316568
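The discriminant-analysis core of such a system can be sketched with a flat (non-hierarchical) LDA baseline; this is an illustrative simplification of the HL-FER, not the paper's hierarchical scheme, and the data are synthetic:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic features and labels; a real pipeline would first apply the
# paper's pre-processing (illumination removal, face detection) and
# global/local feature extraction.
rng = np.random.default_rng(3)
X = rng.normal(size=(90, 20))
y = np.arange(90) % 6                  # six expression classes

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
proj = lda.transform(X)                # at most n_classes - 1 = 5 dimensions
```

A hierarchical variant, as in the HL-FER, would first discriminate between coarse expression groups and then apply a second LDA within each group to separate highly similar expressions.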
Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia
The automatic analysis of facial expressions is an evolving field with several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), a major motor sign of this neurodegenerative illness. Facial bradykinesia consists in the reduction/loss of facial movements and emotional facial expressions, called hypomimia. In this work we propose an automatic, video-based method for studying facial expressions in PD patients. Methods: 17 Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after imitation of a visual cue on a screen. Using an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects reported on average higher distances than PD patients across the tasks, confirming that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could get a definite advantage from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.
Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Bu, Jiajun
3-D facial expression generation, including synthesis and retargeting, has received intensive attention in recent years, because it is important to produce realistic 3-D faces with specific expressions in modern film production and computer games. In this paper, we present joint sparse learning (JSL) to learn mapping functions and their respective inverses to model the relationship between the high-dimensional 3-D faces (of different expressions and identities) and their corresponding low-dimensional representations. Based on JSL, we can effectively and efficiently generate various expressions of a 3-D face by either synthesizing or retargeting. Furthermore, JSL is able to restore 3-D faces with holes by learning a mapping function between incomplete and intact data. Experimental results on a wide range of 3-D faces demonstrate the effectiveness of the proposed approach by comparing with representative ones in terms of quality, time cost, and robustness.
Bernard M.C. Stienen
Background: Social interaction depends on a multitude of signals carrying information about the emotional state of others. Past research has focused on the perception of facial expressions, while the perception of whole-body signals has only been studied recently. The relative importance of facial and bodily signals is still poorly understood. In order to better understand the relative contribution of affective signals from the face alone or from the rest of the body, we used a binocular rivalry experiment. This method is well suited to contrasting two classes of stimuli, testing our processing sensitivity to either stimulus, and addressing how emotion modulates this sensitivity. We report in this paper two behavioral experiments addressing these questions. Method: In the first experiment we directly contrasted fearful, angry and neutral bodies and faces. We always presented bodies in one eye and faces in the other, simultaneously, for 60 seconds, and asked participants to report what they perceived. In the second experiment we focused specifically on the role of fearful expressions of faces and bodies. Results: Taken together, the two experiments show that there is no clear bias towards either the face or the body when the expression of the body and face is neutral or angry. However, perceptual dominance in favor of either the face or the body is a function of which stimulus class expresses fear.
Eack, Shaun M.; Mazefsky, Carla A.; Minshew, Nancy J.
Facial emotion perception is significantly affected in autism spectrum disorder, yet little is known about how individuals with autism spectrum disorder misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with autism spectrum…
Rosenthal, M Zachary; Kim, Kwanguk; Herr, Nathaniel R; Smoski, Moria J; Cheavens, Jennifer S; Lynch, Thomas R; Kosson, David S
The aim of this preliminary study was to examine whether individuals with avoidant personality disorder (APD) could be characterized by deficits in the classification of dynamically presented facial emotional expressions. Using a community sample of adults with APD (n = 17) and non-APD controls (n = 16), speed and accuracy of facial emotional expression recognition was investigated in a task that morphs facial expressions from neutral to prototypical expressions (Multi-Morph Facial Affect Recognition Task; Blair, Colledge, Murray, & Mitchell, 2001). Results indicated that individuals with APD were significantly more likely than controls to make errors when classifying fully expressed fear. However, no differences were found between groups in the speed to correctly classify facial emotional expressions. The findings are some of the first to investigate facial emotional processing in a sample of individuals with APD and point to an underlying deficit in processing social cues that may be involved in the maintenance of APD.
Aan Het Rot, Marije; Enea, Violeta; Dafinoiu, Ion; Iancu, Sorina; Taftă, Steluţa A; Bărbuşelu, Mariana
While the recognition of emotional expressions has been extensively studied, the behavioural response to these expressions has not. In the interpersonal circumplex, behaviour is defined in terms of communion and agency. In this study, we examined behavioural responses to both facial and postural expressions of emotion. We presented 101 Romanian students with facial and postural stimuli involving individuals ('targets') expressing happiness, sadness, anger, or fear. Using an interpersonal grid, participants simultaneously indicated how communal (i.e., quarrelsome or agreeable) and agentic (i.e., dominant or submissive) they would be towards people displaying these expressions. Participants were agreeable-dominant towards targets showing happy facial expressions and primarily quarrelsome towards targets with angry or fearful facial expressions. Responses to targets showing sad facial expressions were neutral on both dimensions of interpersonal behaviour. Postural versus facial expressions of happiness and anger elicited similar behavioural responses. Participants responded in a quarrelsome-submissive way to fearful postural expressions and in an agreeable way to sad postural expressions. Behavioural responses to the various facial expressions were largely comparable to those previously observed in Dutch students. Observed differences may be explained from participants' cultural background. Responses to the postural expressions largely matched responses to the facial expressions. © 2017 The British Psychological Society.
AboEllail, Mohamed Ahmed Mostafa; Kanenishi, Kenji; Mori, Nobuhiro; Mohamed, Osman Abdel Kareem; Hata, Toshiyuki
To evaluate the frequencies of fetal facial expressions in the third trimester of pregnancy, when fetal brain maturation and development are progressing in normal healthy fetuses. Four-dimensional (4D) ultrasound was used to examine the facial expressions of 111 healthy fetuses between 30 and 40 weeks of gestation. The frequencies of seven facial expressions (mouthing, yawning, smiling, tongue expulsion, scowling, sucking, and blinking) during 15-minute recordings were assessed. The fetuses were further divided into three gestational age groups (25 fetuses at 30-31 weeks, 43 at 32-35 weeks, and 43 at ≥36 weeks). Facial expressions were compared among the three gestational age groups to determine their changes with advancing gestation. Mouthing was the most frequent facial expression at 30-40 weeks of gestation, followed by blinking. Both facial expressions were significantly more frequent than the other expressions, and their frequencies did not change between 30 and 40 weeks. The frequency of yawning at 30-31 weeks was significantly higher than that at 36-40 weeks, with no other differences in facial expressions among the three gestational age groups. Our results suggest that 4D ultrasound assessment of fetal facial expressions may be a useful modality for evaluating fetal brain maturation and development. The decreasing frequency of fetal yawning after 30 weeks of gestation may explain the emergence of distinct states of arousal.
Kalliatakis, Grigorios; Vidakis, Nikolaos; Triantafyllidis, Georgios
Despite significant recent advances in the fields of head pose estimation and facial expression recognition, raising the cognitive level when analysing human activity presents serious challenges to current concepts. Motivated by the need to generate comprehensible visual representations from … and accurately estimate head pose changes in unconstrained environments. In order to complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. After that, a lightweight data…
Wu, Yao; Qiu, Weigen
In order to enhance the robustness of facial expression recognition, we propose a method based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method extracts features with the improved LTP and then uses an improved deep belief network as the detector and classifier of the LTP features, realizing the combination of LTP and an improved deep belief network for facial expression recognition. The recognition rate on the CK+ database improved significantly.
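The Local Ternary Pattern feature mentioned here can be sketched in its standard form; the abstract does not specify the paper's improved variant, so the code below follows the common Tan & Triggs formulation, with an illustrative function name and toy patch:

```python
import numpy as np

def ltp_codes(patch, t=5):
    """Local Ternary Pattern of a 3x3 patch: each of the 8 neighbours is
    compared to the centre with tolerance t (code 1 if brighter than c+t,
    -1 if darker than c-t, 0 otherwise), and the ternary code is split
    into an 'upper' and a 'lower' binary pattern."""
    patch = np.asarray(patch, dtype=int)
    c = patch[1, 1]
    # neighbours in clockwise order starting at the top-left corner
    idx = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    tern = [1 if patch[i] > c + t else (-1 if patch[i] < c - t else 0)
            for i in idx]
    upper = sum((b == 1) << k for k, b in enumerate(tern))
    lower = sum((b == -1) << k for k, b in enumerate(tern))
    return upper, lower

# One bright neighbour (top-left) and one dark neighbour (bottom-right)
codes = ltp_codes([[60, 50, 50],
                   [50, 50, 50],
                   [50, 50, 40]])
```

Histograms of such upper/lower codes over image cells form the feature vector that a downstream classifier, such as the paper's deep network, would consume; the tolerance `t` is what makes LTP less noise-sensitive than plain LBP.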
Uono, Shota; Sato, Wataru; Toichi, Motomi
Emotional facial expressions are indispensable communicative tools, and social interactions involving facial expressions are impaired in some psychiatric disorders. Recent studies revealed that the perception of dynamic facial expressions was exaggerated in normal participants, and this exaggerated perception is weakened in autism spectrum disorder (ASD). Based on the notion that ASD and schizophrenia spectrum disorder are at two extremes of the continuum with respect to social impairment, we hypothesized that schizophrenic characteristics would strengthen the exaggerated perception of dynamic facial expressions. To test this hypothesis, we investigated the relationship between the perception of facial expressions and schizotypal traits in a normal population. We presented dynamic and static facial expressions, and asked participants to change an emotional face display to match the perceived final image. The presence of schizotypal traits was positively correlated with the degree of exaggeration for dynamic, as well as static, facial expressions. Among its subscales, the paranoia trait was positively correlated with the exaggerated perception of facial expressions. These results suggest that schizotypal traits, specifically the tendency to over-attribute mental states to others, exaggerate the perception of emotional facial expressions.
Bayard, Sophie; Croisier Langenier, Muriel; Dauvilliers, Yves
Cataplexy is pathognomonic of narcolepsy with cataplexy, and is defined by a transient loss of muscle tone triggered by strong emotions. Recent research suggests abnormal amygdala function in narcolepsy with cataplexy. Emotion processing and emotional regulation strategies are complex functions involving cortical and limbic structures, such as the amygdala. As the amygdala has been shown to play a role in facial emotion recognition, we tested the hypothesis that patients with narcolepsy with cataplexy would have impaired recognition of facial emotional expressions compared with patients affected by central hypersomnia without cataplexy and healthy controls. We also aimed to determine whether cataplexy modulates emotional regulation strategies. Emotional intensity, arousal and valence ratings on Ekman faces displaying happiness, surprise, fear, anger, disgust, sadness and neutral expressions from 21 drug-free patients with narcolepsy with cataplexy were compared with those of 23 drug-free sex-, age- and intellectual-level-matched adult patients with hypersomnia without cataplexy and 21 healthy controls. All participants underwent polysomnography recording and multiple sleep latency tests, and completed depression, anxiety and emotional regulation questionnaires. Performance of patients with narcolepsy with cataplexy did not differ from that of patients with hypersomnia without cataplexy or healthy controls on either intensity rating of each emotion on its prototypical label or mean ratings for valence and arousal. Moreover, patients with narcolepsy with cataplexy did not use different emotional regulation strategies. The level of depressive and anxious symptoms in narcolepsy with cataplexy did not differ from that of the other groups. Our results demonstrate that patients with narcolepsy with cataplexy accurately perceive and discriminate facial emotions, and regulate their emotions normally. The absence of alteration of perceived affective valence remains of major clinical interest in narcolepsy with cataplexy.
Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
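An NNLS sparse-coding classifier of the kind described here can be sketched with SciPy's non-negative least squares solver. The function name and toy dictionary below are ours, and the residual-based class decision follows the SRC-style scheme the abstract compares against, so treat this as an illustrative analogue rather than the paper's exact algorithm:

```python
import numpy as np
from scipy.optimize import nnls

def nnls_classify(D, labels, x):
    """Code x as a non-negative combination of the training columns of D
    (min ||Dc - x|| subject to c >= 0), then assign the class whose
    columns reconstruct x with the smallest residual."""
    c, _ = nnls(D, x)
    residuals = {}
    for cls in set(labels):
        mask = np.array([lab == cls for lab in labels])
        residuals[cls] = np.linalg.norm(x - D[:, mask] @ c[mask])
    return min(residuals, key=residuals.get)

# Toy dictionary: two 2-D training feature vectors per class
D = np.array([[1.0, 0.9, 0.0, 0.1],
              [0.0, 0.1, 1.0, 0.9]])
labels = ['a', 'a', 'b', 'b']
pred = nnls_classify(D, labels, np.array([1.0, 0.05]))
```

In the paper the columns of `D` would be LBP or raw-pixel features of training faces and the classes the JAFFE expression categories; the non-negativity constraint is what distinguishes NNLS coding from the L1-regularized coding used in SRC.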
Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja
Facial-expression data often appear in multiple views either due to head movements or the camera position. Existing methods for multi-view facial expression recognition perform classification of the target expressions either by using classifiers learned separately for each view or by using a single…
Ruba, Ashley L.; Johnson, Kristin M.; Harris, Lasana T.; Wilbourn, Makeba Parramore
For decades, scholars have examined how children first recognize emotional facial expressions. This research has found that infants younger than 10 months can discriminate negative, within-valence facial expressions in looking time tasks, and children older than 24 months struggle to categorize these expressions in labeling and free-sort tasks.…
Ueda, Yoshiyuki; Nagoya, Kie; Yoshikawa, Sakiko; Nomura, Michio
Forming specific facial expressions influences emotions and perception. Bearing this in mind, studies should be reconsidered in which observers expressing neutral emotions inferred personal traits from the facial expressions of others. In the present study, participants were asked to make happy, neutral, and disgusted facial expressions: for "happy," they held a wooden chopstick in their molars to form a smile; for "neutral," they clasped the chopstick between their lips, making no expression; for "disgusted," they put the chopstick between their upper lip and nose and knit their brows in a scowl. However, they were not asked to intentionally change their emotional state. Observers judged happy expression images as more trustworthy, competent, warm, friendly, and distinctive than disgusted expression images, regardless of the observers' own facial expression. Observers judged disgusted expression images as more dominant than happy expression images. However, observers expressing disgust overestimated dominance in observed disgusted expression images and underestimated dominance in happy expression images. In contrast, observers with happy facial forms attenuated dominance for disgusted expression images. These results suggest that dominance inferred from facial expressions is unstable and influenced by not only the observed facial expression, but also the observers' own physiological states.
Garrett, Amy S; Carrion, Victor; Kletter, Hilit; Karchemskiy, Asya; Weems, Carl F; Reiss, Allan
This study examined activation to facial expressions in youth with a history of interpersonal trauma and current posttraumatic stress symptoms (PTSS) compared to healthy controls (HC). Twenty-three medication-naive youth with PTSS and 23 age- and gender-matched HC underwent functional magnetic resonance imaging (fMRI) while viewing fearful, angry, sad, happy, and neutral faces. Data were analyzed for group differences in location of activation, as well as timing of activation during the early versus late phase of the block. Using SPM5, significant activation for the main effect of group was identified. Activation from selected clusters was extracted to SPSS software for further analysis of specific facial expressions and temporal patterns of activation. The PTSS group showed significantly greater activation than controls in several regions, including the amygdala/hippocampus, medial prefrontal cortex, insula, and ventrolateral prefrontal cortex, and less activation than controls in the dorsolateral prefrontal cortex (DLPFC). These group differences in activation were greatest during angry, happy, and neutral faces, and predominantly during the early phase of the block. Post hoc analyses showed significant Group × Phase interactions in the right amygdala and left hippocampus. Traumatic stress may impact development of brain regions important for emotion processing. Timing of activation may be altered in youth with PTSS. © 2012 Wiley Periodicals, Inc.
Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J
Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.
Widen, Sherri C; Naab, Pamela
Do people always interpret a facial expression as communicating a single emotion (e.g., the anger face as only angry) or is that interpretation malleable? The current study investigated preschoolers' (N = 60; 3-4 years) and adults' (N = 20) categorization of facial expressions. On each of five trials, participants selected from an array of 10 facial expressions (an open-mouthed, high arousal expression and a closed-mouthed, low arousal expression each for happiness, sadness, anger, fear, and disgust) all those that displayed the target emotion. Children's interpretation of facial expressions was malleable: 48% of children who selected the fear, anger, sadness, and disgust faces for the "correct" category also selected these same faces for another emotion category; 47% of adults did so for the sadness and disgust faces. The emotion children and adults attribute to facial expressions is influenced by the emotion category for which they are looking.
Artuso, Caterina; Palladino, Paola; Ricciardelli, Paola
The aim of the study was to investigate how the biological binding between different facial dimensions, and their social and communicative relevance, may impact updating processes in working memory (WM). We focused on WM updating because it plays a key role in ongoing processing. Gaze direction and facial expression are crucial and changeable components of face processing. Direct gaze enhances the processing of approach-oriented facial emotional expressions (e.g., joy), while averted gaze enh...
Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo
Altered emotional processing, including reduced emotional facial expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques, and it is not known whether altered facial expression and recognition in PD are related. We aimed to investigate possible deficits in facial emotion expression and emotion recognition, and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expressions of all six basic emotions had slower velocity and lower amplitude in patients than in healthy controls. Facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). These results provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits suggests that these abnormalities are mediated by separate pathophysiological mechanisms.
Marsh, Penny; Beauchaine, Theodore P.; Williams, Bailey
Although deficiencies in emotional responding have been linked to externalizing behaviors in children, little is known about how discrete response systems (e.g., expressive, physiological) are coordinated during emotional challenge among these youth. We examined time-linked correspondence of sad facial expressions and autonomic reactivity during an empathy-eliciting task among boys with disruptive behavior disorders (n = 31) and controls (n = 23). For controls, sad facial expressions were associated with reduced sympathetic (lower skin conductance level, lengthened cardiac preejection period [PEP]) and increased parasympathetic (higher respiratory sinus arrhythmia [RSA]) activity. In contrast, no correspondence between facial expressions and autonomic reactivity was observed among boys with conduct problems. Furthermore, low correspondence between facial expressions and PEP predicted externalizing symptom severity, whereas low correspondence between facial expressions and RSA predicted internalizing symptom severity. PMID:17868261
Kluczniok, Dorothea; Hindi Attar, Catherine; Stein, Jenny; Poppinga, Sina; Fydrich, Thomas; Jaite, Charlotte; Kappel, Viola; Brunner, Romuald; Herpertz, Sabine C; Boedeker, Katja; Bermpohl, Felix
Maternal sensitive behavior depends on recognizing one's own child's affective states. The present study investigated distinct and overlapping neural responses of mothers to sad and happy facial expressions of their own child (in comparison to facial expressions of an unfamiliar child). We used functional MRI to measure dissociable and overlapping activation patterns in 27 healthy mothers in response to happy, neutral and sad facial expressions of their own school-aged child and a gender- and age-matched unfamiliar child. To investigate differential activation to sad compared to happy faces of one's own child, we used interaction contrasts. During the scan, mothers had to indicate the affect of the presented face. After scanning, they were asked to rate the perceived emotional arousal and valence levels for each face using a 7-point Likert-scale (adapted SAM version). While viewing their own child's sad faces, mothers showed activation in the amygdala and anterior cingulate cortex whereas happy facial expressions of the own child elicited activation in the hippocampus. Conjoint activation in response to one's own child happy and sad expressions was found in the insula and the superior temporal gyrus. Maternal brain activations differed depending on the child's affective state. Sad faces of the own child activated areas commonly associated with a threat detection network, whereas happy faces activated reward related brain areas. Overlapping activation was found in empathy related networks. These distinct neural activation patterns might facilitate sensitive maternal behavior.
Khan, Masood Mehmood; Ward, Robert D.; Ingleby, Michael
The ability to distinguish feigned from involuntary expressions of emotions could help in the investigation and treatment of neuropsychiatric and affective disorders and in the detection of malingering. This work investigates differences in emotion-specific patterns of thermal variations along the major facial muscles. Using experimental data extracted from 156 images, we attempted to classify patterns of emotion-specific thermal variations into neutral, and voluntary and involuntary expressions of positive and negative emotive states. Initial results suggest (i) each facial muscle exhibits a unique thermal response to various emotive states; (ii) the pattern of thermal variances along the facial muscles may assist in classifying voluntary and involuntary facial expressions; and (iii) facial skin temperature measurements along the major facial muscles may be used in automated emotion assessment.
Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki
Background: A Noh mask, worn by expert actors performing in traditional Japanese Noh drama, is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. Methodology/Principal Findings: In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward-tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward-tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions: facial images having the mouth of an upward/downward-tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. Conclusions/Significance: The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing emotional expressions. This indicates the superiority of biologically-driven factors over the traditionally…
Lin, I-Mei; Fan, Sheng-Yu; Huang, Tiao-Lai; Wu, Wan-Ting; Li, Shi-Ming
Visual search is an important attention process that precedes information processing and mediates the relationship between cognitive function (attention) and social cognition (such as facial expression identification). However, the association between visual attention and social cognition in patients with schizophrenia remains unknown. The purposes of this study were to examine differences in visual search performance and facial expression identification between patients with schizophrenia and normal controls, and to explore the relationship between visual search performance and facial expression identification in patients with schizophrenia. Fourteen patients with schizophrenia (mean age = 46.36 ± 6.74) and 15 normal controls (mean age = 40.87 ± 9.33) participated in this study. A visual search task, including feature search and conjunction search, and the Japanese and Caucasian Facial Expressions of Emotion were administered. Patients with schizophrenia had worse visual search performance, in both feature search and conjunction search, than normal controls, as well as worse facial expression identification, especially for surprise and sadness. In addition, there were negative associations between visual search performance and facial expression identification in patients with schizophrenia, especially for surprise and sadness; this pattern was not observed in normal controls. Patients with schizophrenia who had visual search deficits showed impaired facial expression identification. Improving visual search and facial expression identification abilities may improve their social function and interpersonal relationships.
Truong, K.P.; Leeuwen, D.A. van; Neerincx, M.A.
Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, an overview is given of emotion recognition studies based on a combination of speech and facial expressions. We will identify difficulties concerning data collection, data fusion, system
Faso, Daniel J.; Sasson, Noah J.; Pinkham, Amy E.
Though many studies have examined facial affect perception by individuals with autism spectrum disorder (ASD), little research has investigated how facial expressivity in ASD is perceived by others. Here, naïve female observers (n = 38) judged the intensity, naturalness and emotional category of expressions produced by adults with ASD (n = 6) and…
Du, Shichuan; Martinez, Aleix M.
Emotions are sometimes revealed through facial expressions. When these natural facial articulations involve the contraction of the same muscle groups in people of distinct cultural upbringings, this is taken as evidence of a biological origin of these emotions. While past research had identified facial expressions associated with a single internally felt category (eg, the facial expression of happiness when we feel joyful), we have recently studied facial expressions observed when people experience compound emotions (eg, the facial expression of happy surprise when we feel joyful in a surprised way, as, for example, at a surprise birthday party). Our research has identified 17 compound expressions consistently produced across cultures, suggesting that the number of facial expressions of emotion of biological origin is much larger than previously believed. The present paper provides an overview of these findings and shows evidence supporting the view that spontaneous expressions are produced using the same facial articulations previously identified in laboratory experiments. We also discuss the implications of our results in the study of psychopathologies, and consider several open research questions. PMID:26869845
This study aims to examine the effectiveness of teaching naming emotional facial expression via video modeling to children with autism. Teaching the naming of emotions (happy, sad, scared, disgusted, surprised, feeling physical pain, and bored) was made by creating situations that lead to the emergence of facial expressions to children…
Lewinski, P.; Fransen, M.L.; Tan, E.S.H.
We present a psychophysiological study of facial expressions of happiness (FEH) produced by advertisements using the FaceReader system (Noldus, 2013) for automatic analysis of facial expressions of basic emotions (FEBE; Ekman, 1972). FaceReader scores were associated with self-reports of the
van der Gaag, Christiaan; Minderaa, Ruud B.; Keysers, Christian
Facial expressions contain both motor and emotional components. The inferior frontal gyrus (IFG) and posterior parietal cortex have been considered to compose a mirror neuron system (MNS) for the motor components of facial expressions, while the amygdala and insula may represent an "additional" MNS
Gaspar, Augusta; Esteves, Francisco G.
Prototypical facial expressions of emotion, also known as universal facial expressions, are the underpinnings of most research concerning recognition of emotions in both adults and children. Data on natural occurrences of these prototypes in natural emotional contexts are rare and difficult to obtain in adults. By recording naturalistic…
Widen, Sherri C.; Russell, James A.
Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of…
Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki
This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…
Uono, Shota; Sato, Wataru; Toichi, Motomi
Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of…
Gorbunov, R.; Barakova, E.I.; Rauterberg, G.W.M.
In this paper we demonstrate how genetic programming can be used to interpret time dependent facial expressions in terms of emotional stimuli of different types and intensities. In our analysis we have used video records of facial expressions made during the Mars-500 experiment in which six
van der Gaag, Christiaan; Minderaa, Ruud B.; Keysers, Christian
The amygdala has been considered to be essential for recognizing fear in other people's facial expressions. Recent studies shed doubt on this interpretation. Here we used movies of facial expressions instead of static photographs to investigate the putative fear selectivity of the amygdala using
Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu
Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…
Boll, Sabrina; Gamer, Matthias
Deficits in emotional reactivity and recognition have been reported in psychopathy. Impaired attention to the eyes along with amygdala malfunctions may underlie these problems. Here, we investigated how different facets of psychopathy modulate the visual exploration of facial expressions by assessing personality traits in a sample of healthy young adults using an eye-tracking based face perception task. Fearless Dominance (the interpersonal-emotional facet of psychopathy) and Coldheartedness scores predicted reduced face exploration consistent with findings on lowered emotional reactivity in psychopathy. Moreover, participants high on the social deviance facet of psychopathy ('Self-Centered Impulsivity') showed a reduced bias to shift attention towards the eyes. Our data suggest that facets of psychopathy modulate face processing in healthy individuals and reveal possible attentional mechanisms which might be responsible for the severe impairments of social perception and behavior observed in psychopathy. Copyright © 2016 Elsevier B.V. All rights reserved.
I. S. Ivanova
Full Text Available The paper emphasizes the need to study subjective criteria of effectiveness in interpersonal communication and the importance of effective communication for personality development in adolescence. The problem of the underdeveloped expression of positive emotions in the communication process is discussed. Both the identification and the verbalization of emotions are regarded by the author as basic communication skills. Experimental data on longitudinal and age-level differences are described, and gender differences in the identification and verbalization of emotions are considered. The outcomes of the experimental study demonstrate that the accuracy of facial emotional expression in teenage boys and girls changes at different rates. The prospects of defining age norms for the identification and verbalization of emotions are analyzed.
Battini Sönmez, Elena; Cangelosi, Angelo
This paper considers the issue of fully automatic emotion classification on 2D faces. Despite great effort in recent years, traditional machine learning approaches based on hand-crafted feature extraction followed by a classification stage have failed to produce a real-time automatic facial expression recognition system. The proposed architecture uses Convolutional Neural Networks (CNNs), which are built as collections of interconnected processing elements loosely modeled on the human brain. The basic idea of CNNs is to learn a hierarchical representation of the input data, which results in better classification performance. In this work we present a block-based CNN algorithm that uses noise as a data augmentation technique and builds batches with a balanced number of samples per class. The proposed architecture is a simple yet powerful CNN that yields state-of-the-art accuracy on the highly competitive benchmark of the Extended Cohn-Kanade database.
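The two training tricks this abstract names, noise as data augmentation and class-balanced batches, can be sketched independently of any particular CNN framework. The helper names and parameters below are illustrative assumptions, not taken from the paper:

```python
import random
from collections import defaultdict

def add_noise(image, sigma=0.05, rng=None):
    """Augment a flattened, [0, 1]-scaled grayscale image with Gaussian pixel noise."""
    rng = rng or random.Random(0)
    return [min(1.0, max(0.0, p + rng.gauss(0.0, sigma))) for p in image]

def balanced_batch(samples, per_class, rng=None):
    """Build a training batch holding the same number of (noise-augmented)
    samples for every expression class, as the abstract describes.

    samples: list of (image, label) pairs; image is a list of floats."""
    rng = rng or random.Random(0)
    by_label = defaultdict(list)
    for image, label in samples:
        by_label[label].append(image)
    batch = []
    for label in sorted(by_label):
        for _ in range(per_class):
            batch.append((add_noise(rng.choice(by_label[label]), rng=rng), label))
    rng.shuffle(batch)
    return batch
```

Balancing at the batch level keeps rare expression classes (e.g. contempt in CK+) from being drowned out during gradient updates; the noise simply cheapens extra samples for oversampled classes.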
Calbi, Marta; Heimann, Katrin; Barratt, Daniel
Facial expressions are of major importance to understanding the emotions, intentions, and mental states of others. Strikingly, so far most studies on the perception and comprehension of emotions have used isolated facial expressions as stimuli; for example, photographs of actors displaying facial expressions belonging to one of the so-called 'basic emotions'. However, our real experience during social interactions is different: facial expressions of emotion are mostly perceived in a wider context, constituted by body language, the surrounding environment, and our beliefs and expectations. Already in the early twentieth century, the Russian filmmaker Lev Kuleshov argued that such context could significantly change our interpretation of facial expressions. Prior experiments have shown behavioral effects pointing in this direction, but have only used static images portraying some basic emotions. In our…
LoBue, Vanessa; Thrasher, Cat
Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development-The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions-angry, fearful, sad, happy, surprised, and disgusted-and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.
Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun
We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
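The bilinear identity × expression model described here can be illustrated with a toy core tensor. In FaceWarehouse the core is assembled from the fitted meshes; the sketch below only shows how the two weight vectors combine slices of it (all names and sizes are illustrative):

```python
def bilinear_face(core, w_id, w_exp):
    """Evaluate a bilinear face model.

    core[i][e] is the vertex vector of identity i under expression e
    (a slice of the rank-3 tensor); w_id and w_exp are the identity and
    expression weight vectors. Returns sum_i sum_e w_id[i]*w_exp[e]*core[i][e].
    """
    n = len(core[0][0])
    mesh = [0.0] * n
    for i, wi in enumerate(w_id):
        for e, we in enumerate(w_exp):
            for v in range(n):
                mesh[v] += wi * we * core[i][e][v]
    return mesh
```

One-hot weight vectors reproduce a captured mesh exactly; blending w_exp while holding w_id fixed morphs one person between expressions, which is what makes the two attributes separable for retargeting.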
Full Text Available Featured Application: The proposed Uncertainty Flow framework may benefit facial analysis with its promised gain in discriminability on multi-label affective classification tasks. The framework also allows efficient model training and between-task knowledge transfer. Applications that rely heavily on continuous prediction of emotional valence, e.g., monitoring prisoners' emotional stability, can benefit directly from it. Abstract: To lower the single-label dependency of affective facial analysis, multi-label affective learning is needed. The impediment to practical implementation of existing multi-label algorithms is the scarcity of scalable multi-label training datasets. To resolve this, an inductive transfer learning based framework, Uncertainty Flow, is put forward in this research to allow knowledge transfer from a single-labelled emotion recognition task to a multi-label affective recognition task: the model uncertainty, which can be quantified in Uncertainty Flow, is distilled from a single-label learning task. The distilled model uncertainty then enables efficient zero-shot multi-label affective learning. On the theoretical side, the feasibility of applying weakly informative priors, e.g., uniform and Cauchy priors, is fully explored within the framework. More importantly, based on the derived weight uncertainty, three sets of prediction-related uncertainty indexes, i.e., soft-max uncertainty, pure uncertainty, and uncertainty plus, are proposed to produce reliable and accurate multi-label predictions. Validated on our manually annotated evaluation dataset, a multi-label annotated FER2013, the proposed Uncertainty Flow exhibited superiority over conventional multi-label learning algorithms and multi-label compatible neural networks. The success of our…
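Of the three uncertainty indexes this abstract names, "soft-max uncertainty" has a standard reading: the predictive entropy of the softmax output. That reading is an assumption on my part, not a detail confirmed by the record; a minimal computation under it looks like:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw scores."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_uncertainty(logits):
    """Predictive entropy of the softmax distribution, in nats.
    Near zero for a confident single-label prediction, maximal when the
    probability mass is spread across labels -- the regime where emitting
    multiple affective labels is more faithful than forcing one."""
    return -sum(p * math.log(p) for p in softmax(logits) if p > 0)
```

A high entropy on a face that mixes, say, surprise and fear is exactly the signal a single-label classifier discards and a multi-label one can exploit.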
Full Text Available For several stimulus categories (e.g., pictures, odors, and words), the arousal of both negative and positive stimuli has been shown to modulate amygdalar activation. In contrast, previous studies did not observe similar amygdalar effects in response to negative and positive facial expressions of varying intensity. Reasons for this discrepancy may be related to analytical strategies, experimental design, and stimuli. Therefore, the present study aimed at re-investigating whether the intensity of facial expressions modulates amygdalar activation by circumventing limitations of previous research. Event-related functional magnetic resonance imaging (fMRI) was used to assess brain activation while participants observed a static neutral expression and positive (happy) and negative (angry) expressions of either high or low intensity from an ecologically valid, novel stimulus set. The ratings of arousal and intensity were highly correlated. We found that amygdalar activation followed a u-shaped pattern, with the highest activation to high-intensity facial expressions as compared to low-intensity facial expressions and to the neutral expression, irrespective of valence, suggesting a critical role of the amygdala in valence-independent arousal processing of facial expressions. Additionally, consistent with previous studies, intensity effects were also found in visual areas, and generally increased activation to angry vs. happy faces was found in visual cortex and insula, indicating enhanced visual representations of high-arousing facial expressions and increased visual and somatosensory representations of threat.
Full Text Available Traditional emotion theories stress the importance of the face in the expression of emotions, but bodily expressions are becoming increasingly important. Here we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals, and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and from emotionally congruent and incongruent face-body compounds. Participants' fixations were measured and their pupil size recorded with eye-tracking equipment, and their facial reactions measured with electromyography (EMG). The behavioral results support our prediction that the recognition of a facial expression is improved in the context of a matching posture and, importantly, also vice versa. From their facial expressions, it appeared that observers reacted with signs of negative emotionality (increased corrugator activity) to angry and fearful facial expressions and with positive emotionality (increased zygomaticus activity) to happy facial expressions. As we predicted and found, angry and fearful cues from the face or the body attracted more attention than happy cues. We further observed that responses evoked by angry cues were amplified in individuals with high anxiety scores. In sum, we show that people process bodily expressions of emotion in a similar fashion to facial expressions and that congruency between the emotional signals from the face and body improves recognition of the emotion.
Liu, C.; Ham, J.R.C.; Postma, E.O.; Midden, C.J.H.; Joosten, B.; Goudbeek, M.; Ge, S.S.; Khatib, O.
Abstract. To design robots or embodied conversational agents that can accurately display facial expressions indicating an emotional state, we need technology to produce those facial expressions, and research that investigates the relationship between those technologies and human social perception of
Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark
There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or 'extreme' examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1sec) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations.
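The "Hu" in this abstract is Wagner's (1993) unbiased hit rate, which corrects raw hit rates for how often a response category is used overall, so a participant who answers "happy" to everything gets no credit. A minimal computation from a confusion matrix (the dict-of-dicts layout is my own choice):

```python
def unbiased_hit_rate(confusion, category):
    """Wagner's (1993) unbiased hit rate Hu for one emotion category:
    the squared count of correct responses divided by the product of the
    row total (how often the stimulus was shown) and the column total
    (how often the response was used)."""
    correct = confusion[category][category]
    stimuli = sum(confusion[category].values())                       # row total
    uses = sum(row.get(category, 0) for row in confusion.values())    # column total
    if stimuli == 0 or uses == 0:
        return 0.0
    return (correct * correct) / (stimuli * uses)
```

For a category hit 8/10 times but also given as a false response 4 times, Hu drops below the raw 0.8 hit rate, which is the correction at work.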
Full Text Available Abstract— In daily life, especially in interpersonal communication, the face is often used for expression. Facial expressions give information about the emotional state of a person and are one of the behavioral characteristics. The components of a basic facial expression analysis system are face detection, face data extraction, and facial expression recognition. The fisherface method with a backpropagation artificial neural network approach can be used for facial expression recognition. This method consists of a two-stage process, namely PCA and LDA: PCA is used to reduce the dimensionality, while LDA is used to extract features of facial expressions. The system was tested with two databases, the JAFFE database and the MUG database. The system correctly classified expressions with an accuracy of 86.85% (25 false positives) for JAFFE image type I, 89.20% (15 false positives) for type II, and 87.79% (16 false positives) for type III; on the MUG images it achieved 98.09% (5 false positives). Keywords— facial expression, fisherface method, PCA, LDA, backpropagation neural network.
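The PCA stage of the fisherface pipeline (dimension reduction before LDA) can be sketched with power iteration on the covariance matrix; a real system keeps many components and then runs LDA on them. This is a pure-Python illustration of the idea, not the paper's implementation:

```python
def first_principal_component(X, iters=200):
    """Leading PCA direction of row-vector samples X, found by power
    iteration on the covariance matrix -- the dimension-reduction stage
    the fisherface method applies before LDA."""
    n, d = len(X), len(X[0])
    mean = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[x - m for x, m in zip(row, mean)] for row in X]   # centered data
    # d x d covariance matrix
    cov = [[sum(C[k][i] * C[k][j] for k in range(n)) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v
```

Projecting face vectors onto the top components shrinks the input enough that LDA's class-scatter matrices become well conditioned, which is the whole point of the two-stage design.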
Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz
Facial expressions have an important role in interpersonal communication and in the estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower-face occlusions that may be caused by beards, mustaches, scarves, etc., and to lower-face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies that use partial face features and yielding results comparable to studies using whole-face information, only slightly lower (by ~2.5%) than the best whole-face system while using only ~1/3 of the facial region.
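Sequential forward selection, used here to prune the geometric feature set, is a simple greedy loop: repeatedly add whichever feature most improves a scoring function (cross-validated classifier accuracy in a real system; any callable below). A generic sketch, with illustrative feature names:

```python
def sequential_forward_selection(features, score, max_features):
    """Greedy SFS: start from the empty subset and add, one at a time,
    the feature whose inclusion maximizes score(subset); stop when no
    remaining feature improves the score or the size limit is reached."""
    selected, remaining = [], list(features)
    while remaining and len(selected) < max_features:
        best = max(remaining, key=lambda f: score(selected + [f]))
        if selected and score(selected + [best]) <= score(selected):
            break  # no remaining feature helps
        selected.append(best)
        remaining.remove(best)
    return selected
```

Greedy selection never revisits a choice, so it is cheap (O(k·n) score calls) but can miss feature pairs that only help jointly; that trade-off is why SFS suits moderate-sized geometric feature sets like this one.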
Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo
Background Altered emotional processing, including reduced emotion facial expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques. It is not known whether altered facial expression and recognition in PD are related. Objective To investigate possible deficits in facial emotion expression and emotion recognition and their...
Deschamps, P. K. H.; Coppes, L.; Kenemans, J. L.; Schutter, D. J. L. G.; Matthys, W.
This study aimed to examine facial mimicry in 6-7 year old children with autism spectrum disorder (ASD) and to explore whether facial mimicry was related to the severity of impairment in social responsiveness. Facial electromyographic activity in response to angry, fearful, sad and happy facial expressions was recorded in twenty 6-7 year old…
Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji
Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as face appearance is an important step toward the realization of perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
Kosonogov, Vladimir; Titova, Alisa; Vorobyeva, Elena
The current study addressed the hypothesis that empathy and the restriction of facial muscles of observers can influence recognition of emotional facial expressions. A sample of 74 participants recognized the subjective onset of emotional facial expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral) in a series of morphed face photographs showing a gradual change (frame by frame) from one expression to another. The high-empathy (as measured by the Empathy Quotient) participants recognized emotional facial expressions at earlier photographs from the series than did low-empathy ones, but there was no difference in the exploration time. Restriction of facial muscles of observers (with plasters and a stick in mouth) did not influence the responses. We discuss these findings in the context of the embodied simulation theory and previous data on empathy.
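The morph-sequence paradigm used in this study is easy to simulate: frames interpolate linearly between a neutral and a target expression, and the subjective onset corresponds to the first frame whose target intensity crosses an observer-specific threshold (a lower threshold models the earlier recognition reported for high-empathy participants). A toy sketch, not the study's stimuli:

```python
def morph_sequence(start, end, n_frames):
    """Linearly interpolate expression-intensity vectors frame by frame,
    from `start` (frame 0) to `end` (frame n_frames - 1)."""
    return [[s + (e - s) * t / (n_frames - 1) for s, e in zip(start, end)]
            for t in range(n_frames)]

def onset_frame(frames, target, threshold):
    """Index of the first frame whose `target` intensity reaches the
    threshold, or None if it never does -- a stand-in for the frame at
    which an observer reports recognizing the emerging expression."""
    for i, frame in enumerate(frames):
        if frame[target] >= threshold:
            return i
    return None
```

Lowering the threshold moves the onset earlier in the sequence, mirroring the direction of the empathy effect the study reports.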
Chang, Feng-Ju; Tran, Anh Tuan; Hassner, Tal; Masi, Iacopo; Nevatia, Ram; Medioni, Gerard
We describe a deep learning based method for estimating 3D facial expression coefficients. Unlike previous work, our process does not rely on facial landmark detection methods as a proxy step. Recent methods have shown that a CNN can be trained to regress accurate and discriminative 3D morphable model (3DMM) representations directly from image intensities. By foregoing facial landmark detection, these methods were able to estimate shapes for occluded faces appearing in unprecedented in-the-...
Katalin M Gothard
Full Text Available Facial expressions reflect decisions about the perceived meaning of social stimuli emitted by others and the expected socio-emotional outcome of the reciprocating expression. The decision to produce a facial expression emerges from the joint activity of a network of structures that include the amygdala and multiple, interconnected cortical and subcortical motor areas. Reciprocal transformations between sensory and motor signals give rise to distinct brain states that promote, or impede, the production of facial expressions. The muscles of the upper and lower face are controlled by anatomically distinct motor areas and thus require distinct patterns of motor commands. Concomitantly, multiple areas, including the amygdala, monitor the ongoing overt behavior (the expression of self) and the covert, autonomic responses that accompany emotional expressions. Interoceptive signals and visceral states, therefore, should be incorporated into the formalisms of decision making in order to account for decisions that govern the receiving-emitting cycle of facial expressions.
Full Text Available Previous studies have demonstrated that the serotonin transporter gene-linked polymorphic region (5-HTTLPR) affects the recognition of facial expressions and attention to them. However, the relationship between 5-HTTLPR and the perceptual detection of others' facial expressions, the process which takes place prior to emotional labeling (i.e., recognition), is not clear. To examine whether the perceptual detection of emotional facial expressions is influenced by the allelic variation (short/long) of 5-HTTLPR, happy and sad facial expressions were presented at weak and mid intensities (25% and 50%). Ninety-eight participants, genotyped for 5-HTTLPR, judged whether emotion was present in images of faces. Participants with short alleles showed higher sensitivity (d') to happy than to sad expressions, while participants with long allele(s) showed no such positivity advantage. This effect of 5-HTTLPR was found at different facial expression intensities among males and females. The results suggest that, at the perceptual stage, a short allele enhances the processing of positive facial expressions rather than that of negative facial expressions.
Matsuda, Soichiro; Minagawa, Yasuyo; Yamamoto, Junichi
Atypical gaze behavior in response to a face has been well documented in individuals with autism spectrum disorders (ASDs). Children with ASD appear to differ from typically developing (TD) children in gaze behavior for spoken and dynamic face stimuli but not for nonspeaking, static face stimuli. Furthermore, children with ASD and TD children show a difference in their gaze behavior for certain expressions. However, few studies have examined the relationship between autism severity and gaze behavior toward certain facial expressions. The present study replicated and extended previous studies by examining gaze behavior towards pictures of facial expressions. We presented ASD and TD children with pictures of surprised, happy, neutral, angry, and sad facial expressions. Autism severity was assessed using the Childhood Autism Rating Scale (CARS). The results showed that there was no group difference in gaze behavior when looking at pictures of facial expressions. Conversely, the children with ASD who had more severe autistic symptomatology had a tendency to gaze at angry facial expressions for a shorter duration in comparison to other facial expressions. These findings suggest that autism severity should be considered when examining atypical responses to certain facial expressions.
Saito, Hirotaka; Ando, Akinobu; Itagaki, Shota; Kawada, Taku; Davis, Darold; Nagai, Nobuyuki
Until now, when practicing facial expression recognition skills in the nonverbal communication areas of SST, the judgment of facial expressions was not quantitative, because the participants in SST were judged by teachers. We therefore considered whether SST could be performed using facial expression detection devices that can quantitatively measure facial…
Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan
Generating extreme appearances, such as sweating when scared, tears when crying, and blushing when angry or happy, is the key issue in achieving high-quality facial animation. The effects of sweat, tears, and colors are integrated into a single animation model to create realistic facial expressions of a 3D avatar. The physical properties of muscles and emotions, and the fluid properties of the sweating and tears initiators, are incorporated. The action units (AUs) of the facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on the facial skin color appearance are measured using the pulse oximeter system and the 3D skin analyzer. The results show that virtual human facial expression is enhanced by mimicking actual sweating and tears simulations for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries, as well as computer graphics.
Zhang, Heming; Chen, Xuhai; Chen, Shengdong; Li, Yansong; Chen, Changming; Long, Quanshan; Yuan, Jiajin
Facial and vocal expressions are essential modalities mediating the perception of emotion and social communication. Nonetheless, currently little is known about how emotion perception and its neural substrates differ across facial expression and vocal prosody. To clarify this issue, functional MRI scans were acquired in Study 1, in which participants were asked to discriminate the valence of emotional expression (angry, happy or neutral) from facial, vocal, or bimodal stimuli. In Study 2, we used an affective priming task (unimodal materials as primers and bimodal materials as target) and participants were asked to rate the intensity, valence, and arousal of the targets. Study 1 showed higher accuracy and shorter response latencies in the facial than in the vocal modality for a happy expression. Whole-brain analysis showed enhanced activation during facial compared to vocal emotions in the inferior temporal-occipital regions. Region of interest analysis showed a higher percentage signal change for facial than for vocal anger in the superior temporal sulcus. Study 2 showed that facial relative to vocal priming of anger had a greater influence on perceived emotion for bimodal targets, irrespective of the target valence. These findings suggest that facial expression is associated with enhanced emotion perception compared to equivalent vocal prosodies.
Yan, Xiaoqian; Young, Andrew W; Andrews, Timothy J
The aim of this study was to investigate the causes of the own-race advantage in facial expression perception. In Experiment 1, we investigated Western Caucasian and Chinese participants' perception and categorization of facial expressions of six basic emotions that included two pairs of confusable expressions (fear and surprise; anger and disgust). People were slightly better at identifying facial expressions posed by own-race members (mainly in anger and disgust). In Experiment 2, we asked whether the own-race advantage was due to differences in the holistic processing of facial expressions. Participants viewed composite faces in which the upper part of one expression was combined with the lower part of a different expression. The upper and lower parts of the composite faces were either aligned or misaligned. Both Chinese and Caucasian participants were better at identifying the facial expressions from the misaligned images, showing interference on recognizing the parts of the expressions created by holistic perception of the aligned composite images. However, this interference from holistic processing was equivalent across expressions of own-race and other-race faces in both groups of participants. Whilst the own-race advantage in recognizing facial expressions does seem to reflect the confusability of certain emotions, it cannot be explained by differences in holistic processing.
Kniazev, G G; Mitrofanova, L G; Bocharov, A V
Emotional intelligence-related differences in oscillatory responses to emotional facial expressions were investigated in 48 subjects (26 men and 22 women) aged 18-30 years. Participants were instructed to evaluate the emotional expression (angry, happy and neutral) of each presented face on an analog scale ranging from -100 (very hostile) to +100 (very friendly). High emotional intelligence (EI) participants were found to be more sensitive to the emotional content of the stimuli. This was evident both in their subjective evaluation of the stimuli and in a stronger EEG theta synchronization at an earlier (between 100 and 500 ms after face presentation) processing stage. Source localization using sLORETA showed that this effect was localized in the fusiform gyrus upon the presentation of angry faces and in the posterior cingulate gyrus upon the presentation of happy faces. At a later processing stage (500-870 ms), event-related theta synchronization in high-EI subjects was higher in the left prefrontal cortex upon the presentation of happy faces, but lower in the anterior cingulate cortex upon presentation of angry faces. This suggests the existence of a mechanism that can selectively increase positive emotions and reduce negative ones.
Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia
The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.
Dapelo, Marcela Marin; Bodas, Sergio; Morris, Robin; Tchanturia, Kate
People with eating disorders have difficulties in socioemotional functioning that could contribute to maintaining the functional consequences of the disorder. This study aimed to explore the ability to deliberately generate (i.e., pose) and imitate facial expressions of emotions in women with anorexia nervosa (AN) and bulimia nervosa (BN), compared to healthy controls (HC). One hundred and three participants (36 AN, 25 BN, and 42 HC) were asked to pose and imitate facial expressions of anger, disgust, fear, happiness, and sadness. Their facial expressions were recorded and coded. Participants with eating disorders (both AN and BN) were less accurate than HC when posing facial expressions of emotions. Participants with AN were less accurate than HC when imitating facial expressions, whilst BN participants showed a middle-range performance. All results remained significant after controlling for anxiety, depression and autistic features. A limitation is the relatively small number of BN participants recruited for this study. The study findings suggest that people with eating disorders, particularly those with AN, have difficulties posing and imitating facial expressions of emotions. These difficulties could have an impact on social communication and social functioning. This is the first study to investigate the ability to pose and imitate facial expressions of emotions in people with eating disorders, and the findings suggest this area should be further explored in future studies. Copyright © 2015. Published by Elsevier B.V.
Shepherd, Stephen V.; Lanzilotto, Marco; Ghazanfar, Asif A.
Evolutionary hypotheses regarding the origins of communication signals generally, and primate orofacial communication signals in particular, suggest that these signals derive by ritualization of noncommunicative behaviors, notably including ingestive behaviors such as chewing and nursing. These theories are appealing in part because of the prominent periodicities in both types of behavior. Despite their intuitive appeal, however, there are little or no data with which to evaluate these theories because the coordination of muscles innervated by the facial nucleus has not been carefully compared between communicative and ingestive movements. Such data are especially crucial for reconciling neurophysiological assumptions regarding facial motor control in communication and ingestion. We here address this gap by contrasting the coordination of facial muscles during different types of rhythmic orofacial behavior in macaque monkeys, finding that the perioral muscles innervated by the facial nucleus are rhythmically coordinated during lipsmacks and that this coordination appears distinct from that observed during ingestion. PMID:22553017
Namba, Shushi; Kabir, Russell S.; Miyatani, Makoto; Nakao, Takashi
Accurately gauging the emotional experience of another person is important for navigating interpersonal interactions. This study investigated whether perceivers are capable of distinguishing between unintentionally expressed (genuine) and intentionally manipulated (posed) facial expressions attributed to four major emotions: amusement, disgust, sadness, and surprise. Sensitivity to this discrimination was explored by comparing unstaged dynamic and static facial stimuli and analyzing the results with signal detection theory. Participants indicated whether facial stimuli presented on a screen depicted a person showing a given emotion and whether that person was feeling a given emotion. The results showed that genuine displays were evaluated more as felt expressions than posed displays for all target emotions presented. In addition, sensitivity to the perception of emotional experience, or discriminability, was enhanced in dynamic facial displays, but was less pronounced in the case of static displays. This finding indicates that dynamic information in facial displays contributes to the ability to accurately infer the emotional experiences of another person. PMID:29896135
There exists a stereotype that women are more expressive than men; however, research has almost exclusively focused on a single facial behavior, smiling. A large-scale study examines whether women are consistently more expressive than men or whether the effects depend on the emotion expressed. Studies of gender differences in expressivity have been somewhat restricted to data collected in lab settings or requiring labor-intensive manual coding. In the present study, we analyze gender differences in facial behaviors as over 2,000 viewers watch a set of video advertisements in their home environments. The facial responses were recorded using participants' own webcams. Using new automated facial coding technology, we coded facial activity. We find that women are not universally more expressive across all facial actions, nor are they more expressive in all positive valence actions and less expressive in all negative valence actions. It appears that women generally express actions more frequently than men, and in particular express more positive valence actions. However, expressiveness is not greater in women for all negative valence actions and depends on the discrete emotional state.
Rudovic, Ognjen; Pavlovic, Vladimir; Pantic, Maja
Automated facial expression recognition has received increased attention over the past two decades. Existing works in the field usually do not encode either the temporal evolution or the intensity of the observed facial displays. They also fail to jointly model multidimensional (multi-class)
Chickerur, Satyadhyan; Joshi, Kartik
Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using facial expressions of students in educational environment is useful in providing insight into the effectiveness of tutoring…
Oosterman, Joukje M; Zwakhalen, Sandra; Sampson, Elizabeth L; Kunz, Miriam
Facial expressions convey reliable nonverbal signals about pain and thus are very useful for assessing pain in patients with limited communicative ability, such as patients with dementia. In this review, we present an overview of the available pain observation tools and how they make use of facial
Marsh, Penny; Beauchaine, Theodore P.; Williams, Bailey
Although deficiencies in emotional responding have been linked to externalizing behaviors in children, little is known about how discrete response systems (e.g., expressive, physiological) are coordinated during emotional challenge among these youth. We examined time-linked correspondence of sad facial expressions and autonomic reactivity during an empathy-eliciting task among boys with disruptive behavior disorders (n = 31) and controls (n = 23). For controls, sad facial expressions were ass...
Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G
Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although highly dynamical, little is known about the form and function of facial expression temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication is comprised of six basic (i.e., psychologically irreducible) categories, and instead suggesting four. Copyright © 2014 Elsevier Ltd. All rights reserved.
Tamamiya, Yoshiyuki; Hiraki, Kazuo
Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of 3 facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, where ERP components were set as predictor variables, assessed hits and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted for accuracy of angry and happy expressions, and the N170 latencies were predictive for accuracy of neutral expressions. The P2 amplitudes significantly predicted reaction time. The P2 latencies significantly predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components in visual processing.
Hartley, Alan A; Ravich, Zoe; Stringer, Sarah; Wiley, Katherine
Memory for both facial emotional expression and facial identity was explored in younger and older adults in 3 experiments using a delayed match-to-sample procedure. Memory sets of 1, 2, or 3 faces were presented, which were followed by a probe after a 3-s retention interval. There was very little difference between younger and older adults in memory for emotional expressions, but memory for identity was substantially impaired in the older adults. Possible explanations for spared memory for emotional expressions include socioemotional selectivity theory as well as the existence of overlapping yet distinct brain networks for processing of different emotions. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: email@example.com.
This paper deals with the automatic identification of emotions from manual annotations of the shape and functions of facial expressions in a Danish corpus of video-recorded, naturally occurring first encounters. More specifically, a support vector classifier is trained on the corpus annotations to identify emotions in facial expressions. In the classification experiments, we test to what extent emotions expressed in naturally occurring conversations can be identified automatically by a classifier trained on the manual annotations of the shape of facial expressions and co-occurring speech tokens. We also investigate the relation between emotions and the communicative functions of facial expressions. Both emotion labels and their values in a three-dimensional space are identified. The three dimensions are Pleasure, Arousal and Dominance. The results of our experiments indicate that the classifiers…
da Silva, Flávio Altinier Maximiano; Pedrini, Helio
Facial expressions are an important demonstration of human moods and emotions. Algorithms capable of recognizing facial expressions and associating them with emotions were developed and employed to compare the expressions that different cultural groups use to show their emotions. Static pictures of predominantly occidental and oriental subjects from public datasets were used to train machine learning algorithms, whereas local binary patterns, histograms of oriented gradients (HOGs), and Gabor filters were employed to describe the facial expressions for six different basic emotions. The most consistent combination, formed by the association of the HOG descriptor and support vector machines, was then used to classify the other cultural group: there was a strong drop in accuracy, meaning that the subtle differences in the facial expressions of each culture affected classifier performance. Finally, a classifier was trained with images from both occidental and oriental subjects and its accuracy was higher on multicultural data, evidencing the need for a multicultural training set to build an efficient classifier.
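The HOG-plus-SVM pipeline described above rests on per-cell histograms of gradient orientations. As a minimal illustration only (not the authors' exact pipeline, which uses full HOG with block normalization, e.g. via scikit-image, and an SVM classifier), the core descriptor idea can be sketched in plain numpy; cell size and bin count here are illustrative defaults:

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Simplified HOG: magnitude-weighted histograms of gradient
    orientations, computed per non-overlapping cell, then L2-normalized.
    Omits the block-normalization step of full HOG for brevity."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    # Unsigned orientation folded into [0, pi)
    ang = np.mod(np.arctan2(gy, gx), np.pi)
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

Descriptors of this kind would then be fed to a linear SVM; for a 16x16 image with the defaults above, the vector has 2 x 2 cells x 9 bins = 36 entries.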
Zhang, Lili; Wang, Haibo; Fan, Zhaomin; Han, Yuechen; Xu, Lei; Zhang, Haiyan
To study the changes in facial nerve function, morphology and neurotrophic factor III (NT-3) expression following three types of facial nerve injury. Changes in facial nerve function (in terms of blink reflex (BF), vibrissae movement (VM) and position of the nasal tip) were assessed in 45 rats in response to three types of facial nerve injury: partial section of the extratemporal segment (group 1), partial section of the facial canal segment (group 2) and complete transection of the facial canal segment (group 3). All facial nerve specimens were then cut into two parts at the site of the lesion after being taken from the lesion site on the 1st, 7th and 21st post-surgery days (PSD). Changes in morphology and NT-3 expression were evaluated using the improved trichrome stain and immunohistochemistry techniques, respectively. Changes in facial nerve function: In group 1, all animals had no blink reflex and weak vibrissae movement at the 1st PSD; the blink reflex in 80% of the rats recovered partly and the vibrissae movement in 40% of the rats returned to normal at the 7th PSD; facial nerve function in 60% of the rats was almost normal at the 21st PSD. In group 2, all left facial nerves were paralyzed at the 1st PSD; the blink reflex partly recovered in 40% of the rats and the vibrissae movement was weak in 80% of the rats at the 7th PSD; the BF of 80% of the rats was almost normal and the VM of 40% of the rats completely recovered at the 21st PSD. In group 3, no recovery occurred at any time point. Changes in morphology: In group 1, the size of the nerve fibers differed in the facial canal segment and some myelin sheaths and axons had degenerated at the 7th PSD; the fibers' degeneration turned into regeneration at the 21st PSD. In group 2, the morphologic changes were similar to those in group 1, although the degenerated fibers were more numerous and dispersed at the transection at the 7th PSD; regeneration of nerve fibers occurred at the 21st PSD. In group 3, most of the fibers
Full Text Available In this paper, we carry on research on a facial expression recognition method, which is based on modified sparse representation recognition (MSRR method. On the first stage, we use Haar-like+LPP to extract feature and reduce dimension. On the second stage, we adopt LC-K-SVD (Label Consistent K-SVD method to train the dictionary, instead of adopting directly the dictionary from samples, and add block dictionary training into the training process. On the third stage, stOMP (stagewise orthogonal matching pursuit method is used to speed up the convergence of OMP (orthogonal matching pursuit. Besides, a dynamic regularization factor is added to iteration process to suppress noises and enhance accuracy. We verify the proposed method from the aspect of training samples, dimension, feature extraction and dimension reduction methods and noises in self-built database and Japan’s JAFFE and CMU’s CK database. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition effect and time efficiency. The result of simulation experiment has shown that the coefficient of MSRR method contains classifying information, which is capable of improving the computing speed and achieving a satisfying recognition result.
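The MSRR pipeline above builds on orthogonal matching pursuit; stOMP and the dynamic regularization factor are refinements of this greedy core. As a hedged sketch of plain OMP only (not the authors' stOMP variant), greedy atom selection followed by a least-squares refit can be written as:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms (columns of
    dictionary D, assumed unit-norm) to approximate signal y, refitting
    the coefficients by least squares after each selection."""
    residual = y.astype(float).copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the selected atoms, then update residual
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x
```

In sparse-representation classification, the test image's coefficient vector over a dictionary of class-labeled training atoms carries the class information, which is the property the abstract's recognition step relies on.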
Kret, Mariska E; Fischer, Agneta H
Recognising emotions from faces that are partly covered is more difficult than from fully visible faces. The focus of the present study is on the role of an Islamic versus non-Islamic context, i.e. Islamic versus non-Islamic headdress in perceiving emotions. We report an experiment that investigates whether briefly presented (40 ms) facial expressions of anger, fear, happiness and sadness are perceived differently when covered by a niqāb or turban, compared to a cap and shawl. In addition, we examined whether oxytocin, a neuropeptide regulating affection, bonding and cooperation between ingroup members and fostering outgroup vigilance and derogation, would differentially impact on emotion recognition from wearers of Islamic versus non-Islamic headdresses. The results first of all show that the recognition of happiness was more accurate when the face was covered by a Western compared to Islamic headdress. Second, participants more often incorrectly assigned sadness to a face covered by an Islamic headdress compared to a cap and shawl. Third, when correctly recognising sadness, they did so faster when the face was covered by an Islamic compared to Western headdress. Fourth, oxytocin did not modulate any of these effects. Implications for theorising about the role of group membership on emotion perception are discussed.
There is growing evidence that human observers are able to extract the mean emotion or other types of information from a set of faces. The most intriguing aspect of this phenomenon is that observers often fail to identify or form a representation of individual faces in a face set. However, most of these results were based on judgments under limited processing resources. We examined a wider range of exposure times and observed how the relationship between the extraction of a mean and the representation of individual facial expressions changes. The results showed that with an exposure time of 50 milliseconds for the faces, observers were more sensitive to the mean representation than to individual representations, replicating the typical findings in the literature. With longer exposure times, however, observers were able to extract both individual and mean representations more accurately. Furthermore, diffusion model analysis revealed that the mean representation is also more prone to noise accumulated over redundant processing time, leading to a more conservative decision bias, whereas individual representations seem more resistant to this noise. The results suggest that the encoding of emotional information from multiple faces may take two forms: single-face processing and crowd-face processing.
Huang, Y; Tang, S; Helmeste, D; Shioiri, T; Someya, T
Judging facial expressions of emotions has important clinical value in the assessment of psychiatric patients. Judging facial emotional expressions in foreign patients however, is not always easy. Controversy has existed in previous reports on cultural differences in identifying static facial expressions of emotions. While it has been argued that emotional expressions on the face are universally recognized, experimental data obtained were not necessarily totally supportive. Using the data reported in the literature, our previous pilot study showed that the Japanese interpreted many emotional expressions differently from USA viewers of the same emotions. In order to explore such discrepancies further, we conducted the same experiments on Chinese subjects residing in Beijing. The data showed that, similar to the Japanese viewers, Chinese viewers also judged many static facial emotional expressions differently from USA viewers. The combined results of the Chinese and the Japanese experiments suggest a major cross-cultural difference between American and Asian viewers in identifying some static facial emotional expressions, particularly when the posed emotion has negative connotations. The results have important implications for cross-cultural communications when facial emotional expressions are presented as static images.
In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity expressions and noisy expressions in reality, which is a critical problem but seldom addressed in existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.
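The weighted-LBP representation above starts from the basic local binary pattern operator: each pixel is encoded by thresholding its 8 neighbors against it, and per-patch code histograms form the feature. A minimal numpy sketch of that operator (the plain 8-neighbor variant, not the authors' weighted multi-patch version) is:

```python
import numpy as np

def lbp_image(img):
    """Basic 8-neighbor local binary pattern codes for interior pixels:
    each neighbor contributes one bit, set when neighbor >= center."""
    img = img.astype(int)
    c = img[1:-1, 1:-1]
    # Neighbor offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (di, dj) in enumerate(offsets):
        neighbor = img[1 + di:img.shape[0] - 1 + di,
                       1 + dj:img.shape[1] - 1 + dj]
        codes |= (neighbor >= c).astype(int) << bit
    return codes

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes, the usual patch descriptor."""
    codes = lbp_image(img)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

In the weighted scheme the abstract describes, a face is divided into patches, a histogram like this is computed per patch, and the Fisher separation criterion assigns each patch a weight before the histograms are concatenated.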
Murphy, Jillian M.; Ridley, Nicole J.; Vercammen, Ans
The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS to responses on the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction. PMID:25971602
Owada, Keiho; Kojima, Masaki; Yassin, Walid; Kuroda, Miho; Kawakubo, Yuki; Kuwabara, Hitoshi; Kano, Yukiko; Yamasue, Hidenori
To develop novel interventions for autism spectrum disorder (ASD) core symptoms, valid, reliable, and sensitive longitudinal outcome measures are required for detecting symptom change over time. Here, we tested whether a computerized analysis of quantitative facial expression measures could act as a marker for core ASD social symptoms. Facial expression intensity values during a semi-structured socially interactive situation extracted from the Autism Diagnostic Observation Schedule (ADOS) were quantified by dedicated software in 18 high-functioning adult males with ASD. Controls were 17 age-, gender-, parental socioeconomic background-, and intellectual level-matched typically developing (TD) individuals. Statistical analyses determined whether values representing the strength and variability of each facial expression element differed significantly between the ASD and TD groups and whether they correlated with ADOS reciprocal social interaction scores. Compared with the TD controls, facial expressions in the ASD group appeared more "Neutral" (d = 1.02, P = 0.005, PFDR < 0.05), with less variation in Neutral expression (d = 1.08, P = 0.003, PFDR < 0.05) and with lower variability in Happy expression (d = 1.10, P = 0.003, PFDR < 0.05). The more frequent Neutral facial expressions in the ASD participants were positively correlated with poorer ADOS reciprocal social interaction scores (ρ = 0.48, P = 0.042). These findings indicate that our method for quantitatively measuring reduced facial expressivity during social interactions can be a promising marker for core ASD social symptoms.
Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W
The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Artuso, Caterina; Palladino, Paola; Ricciardelli, Paola
The aim of the study was to investigate how the biological binding between different facial dimensions, and their social and communicative relevance, may impact updating processes in working memory (WM). We focused on WM updating because it plays a key role in ongoing processing. Gaze direction and facial expression are crucial and changeable components of face processing. Direct gaze enhances the processing of approach-oriented facial emotional expressions (e.g., joy), while averted gaze enhances the processing of avoidance-oriented facial emotional expressions (e.g., fear). Thus, the way in which these two facial dimensions are combined communicates to the observer important behavioral and social information. Updating of these two facial dimensions and their bindings has not been investigated before, despite the fact that they provide a piece of social information essential for building and maintaining an internal ongoing representation of our social environment. In Experiment 1 we created a task in which the binding between gaze direction and facial expression was manipulated: high binding conditions (e.g., joy-direct gaze) were compared to low binding conditions (e.g., joy-averted gaze). Participants had to study and update continuously a number of faces, displaying different bindings between the two dimensions. In Experiment 2 we tested whether updating was affected by the social and communicative value of the facial dimension binding; to this end, we manipulated bindings between eye and hair color, two less communicative facial dimensions. Two new results emerged. First, faster response times were found in updating combinations of facial dimensions highly bound together. Second, our data showed that the ease of the ongoing updating processing varied depending on the communicative meaning of the binding that had to be updated. The results are discussed with reference to the role of WM updating in social cognition and appraisal processes.
Topçu, Çağdaş; Uysal, Hilmi; Özkan, Ömer; Özkan, Özlenen; Polat, Övünç; Bedeloğlu, Merve; Akgül, Arzu; Döğer, Ela Naz; Sever, Refik; Çolak, Ömer Halil
We assessed the recovery of 2 face transplantation patients with measures of complexity during neuromuscular rehabilitation. Cognitive rehabilitation methods and functional electrical stimulation were used to improve facial emotional expressions of full-face transplantation patients for 5 months. Rehabilitation and analyses were conducted at approximately 3 years after full facial transplantation in the patient group. We report complexity analysis of surface electromyography signals of these two patients in comparison to the results of 10 healthy individuals. Facial surface electromyography data were collected during 6 basic emotional expressions and 4 primary facial movements from 2 full-face transplantation patients and 10 healthy individuals to determine a strategy of functional electrical stimulation and understand the mechanisms of rehabilitation. A new personalized rehabilitation technique was developed using the wavelet packet method. Rehabilitation sessions were applied twice a month for 5 months. Subsequently, motor and functional progress was assessed by comparing the fuzzy entropy of surface electromyography data against the results obtained from patients before rehabilitation and the mean results obtained from 10 healthy subjects. At the end of personalized rehabilitation, the patient group showed improvements in their facial symmetry and their ability to perform basic facial expressions and primary facial movements. Similarity in the pattern of fuzzy entropy for facial expressions between the patient group and healthy individuals increased. Synkinesis was detected during primary facial movements in the patient group, and one patient showed synkinesis during the happiness expression. Synkinesis in the lower face region of one of the patients was eliminated for the lid tightening movement. The recovery of emotional expressions after personalized rehabilitation was satisfactory to the patients. The assessment with complexity analysis of sEMG data can be
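The fuzzy-entropy measure applied to the surface electromyography data above can be sketched in a few lines. The following is a minimal pure-Python version of the standard FuzzyEn definition (baseline-removed templates, Chebyshev distance, Gaussian membership); the embedding dimension and tolerance defaults are illustrative assumptions, not the parameters used in the study:

```python
import math

def fuzzy_entropy(signal, m=2, r=0.2):
    """Fuzzy entropy (FuzzyEn) of a 1-D signal.

    m : embedding dimension; r : tolerance (similarity width),
    commonly set to about 0.2 times the signal's standard deviation.
    """
    def phi(dim):
        n = len(signal) - dim
        # build baseline-removed templates of length `dim`
        templates = []
        for i in range(n):
            seg = signal[i:i + dim]
            mu = sum(seg) / dim
            templates.append([v - mu for v in seg])
        # average Gaussian similarity over all template pairs
        total, count = 0.0, 0
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                d = max(abs(a - b) for a, b in zip(templates[i], templates[j]))
                total += math.exp(-((d / r) ** 2))
                count += 1
        return total / count

    return math.log(phi(m)) - math.log(phi(m + 1))
```

A perfectly regular (constant) signal yields an entropy of 0, while more irregular signals yield larger values, which is why lower fuzzy entropy in patients' sEMG can index reduced, less complex muscle activity.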
Ruys, Kirsten I; Stapel, Diederik A
Facial emotional expressions can serve both as emotional stimuli and as communicative signals. The research reported here was conducted to illustrate how responses to both roles of facial emotional expressions unfold over time. As an emotion elicitor, a facial emotional expression (e.g., a disgusted face) activates a response that is similar to responses to other emotional stimuli of the same valence (e.g., a dirty, nonflushed toilet). As an emotion messenger, the same facial expression (e.g., a disgusted face) serves as a communicative signal by also activating the knowledge that the sender is experiencing a specific emotion (e.g., the sender feels disgusted). By varying the duration of exposure to disgusted, fearful, angry, and neutral faces in two subliminal-priming studies, we demonstrated that responses to faces as emotion elicitors occur prior to responses to faces as emotion messengers, and that both types of responses may unfold unconsciously.
Hale, WW; Jansen, JHC; Bouhuys, AL; van den Hoofdakker, RH
Background: Research has shown that cognitive and interpersonal processes play significant roles in depression development and maintenance. Depressed patients' judgments of the emotions displayed in facial expressions, as well as those of their partners, allow for a better understanding of these processes.
Demenescu, Liliana R.; Kortekaas, Rudie; den Boer, Johan A.; Aleman, Andre
Background: Recognition of others' emotions is an important aspect of interpersonal communication. In major depression, a significant emotion recognition impairment has been reported. It remains unclear whether the ability to recognize emotion from facial expressions is also impaired in anxiety
Mamonto, N. E.; Maulana, H.; Liliana, D. Y.; Basaruddin, T.
Previously developed datasets contain facial expressions of foreign subjects. The multimedia content described here was developed to address this problem, experienced by the research team and by other researchers conducting similar research. The method used to develop the multimedia content as a facial expression dataset for human emotion recognition is the Villamil-Molina version of the multimedia development method. The content was recorded with 10 subjects (talents), each performing 3 takes in which they demonstrated 19 facial expressions. After editing and rendering, tests were carried out, with the conclusion that the multimedia content can be used as a facial expression dataset for the recognition of human emotions.
Zinchenko, Oksana; Yaple, Zachary A; Arsalidou, Marie
Identifying facial expressions is crucial for social interactions. Functional neuroimaging studies show that a set of brain areas, such as the fusiform gyrus and amygdala, become active when viewing emotional facial expressions. The majority of functional magnetic resonance imaging (fMRI) studies investigating face perception typically employ static images of faces. However, studies that use dynamic facial expressions (e.g., videos) are accumulating and suggest that a dynamic presentation may be more sensitive and ecologically valid for investigating faces. By using quantitative fMRI meta-analysis the present study examined concordance of brain regions associated with viewing dynamic facial expressions. We analyzed data from 216 participants that participated in 14 studies, which reported coordinates for 28 experiments. Our analysis revealed bilateral fusiform and middle temporal gyri, left amygdala, left declive of the cerebellum and the right inferior frontal gyrus. These regions are discussed in terms of their relation to models of face processing.
Sun, Bo; Li, Liandong; Zhou, Guoyan; He, Jun
Facial expression recognition in the wild is a very challenging task. We describe our work in static and continuous facial expression recognition in the wild. We evaluate the recognition results of gray deep features and color deep features, and explore the fusion of multimodal texture features. For continuous facial expression recognition, we design two temporal-spatial dense scale-invariant feature transform (SIFT) features and combine multimodal features to recognize expression from image sequences. For static facial expression recognition based on video frames, we extract dense SIFT and several deep convolutional neural network (CNN) features, including our proposed CNN architecture. We train linear support vector machine and partial least squares classifiers for these features on the static facial expression in the wild (SFEW) and acted facial expression in the wild (AFEW) datasets, and we propose a fusion network to combine all the extracted features at the decision level. Our final recognition rates are 56.32% on the SFEW test set and 50.67% on the AFEW validation set, which are much better than the baseline rates of 35.96% and 36.08%, respectively.
Ana Maria Draghici
The study attempted to replicate the findings of Masuda and colleagues (2008), testing a modified hypothesis: when judging people's emotions from facial expressions, interdependence-primed participants, in contrast to independence-primed participants, incorporate information from the social context, i.e. the facial expressions of surrounding people. This was done to check whether self-construal could be the main variable underlying the cultural differences in emotion perception documented by Masuda and colleagues. Participants viewed cartoon images depicting a central character with a happy, sad, or neutral facial expression, surrounded by other characters expressing graded congruent or incongruent facial expressions. The hypothesis was only partially confirmed, for the emotional judgments of a neutral facial expression target. However, a closer look at the individual means indicated both assimilation and contrast effects, without a systematic manner in which the background characters' facial expressions were incorporated into the participants' judgments, for either of the priming groups. The results are discussed in terms of priming and priming success, and possible moderators other than self-construal for the effect found by Masuda and colleagues.
Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G
Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: the amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter. Published by Elsevier Inc.
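Pattern classification of multi-voxel activation patterns can be illustrated with a dependency-free sketch. A nearest-centroid rule with leave-one-out cross-validation stands in here for the support vector machine used in the study, and the example vectors are hypothetical:

```python
def loo_nearest_centroid(patterns, labels):
    """Leave-one-out decoding accuracy with a nearest-centroid rule.

    patterns : list of feature vectors (e.g. voxel activation patterns)
    labels   : parallel list of class labels (e.g. expression categories)
    """
    correct = 0
    for i, test in enumerate(patterns):
        # build class centroids from all remaining trials
        sums, counts = {}, {}
        for j, (p, lab) in enumerate(zip(patterns, labels)):
            if j == i:
                continue
            s = sums.setdefault(lab, [0.0] * len(p))
            for k, v in enumerate(p):
                s[k] += v
            counts[lab] = counts.get(lab, 0) + 1
        centroids = {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}
        # assign the held-out trial to the nearest centroid
        pred = min(
            centroids,
            key=lambda lab: sum((a - b) ** 2 for a, b in zip(test, centroids[lab])),
        )
        correct += pred == labels[i]
    return correct / len(patterns)

# two well-separated (hypothetical) activation clusters decode perfectly
patterns = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]
labels = ["fearful", "fearful", "fearful", "happy", "happy", "happy"]
accuracy = loo_nearest_centroid(patterns, labels)
```

Above-chance accuracy under such cross-validation is what licenses the claim that a region's activity patterns carry information about expression category.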
Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin
According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning of a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions, simulated with only two primitives, are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.
Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki
This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.
Dawes, Thomas Richard; Eden-Green, Ben; Rosten, Claire; Giles, Julian; Governo, Ricardo; Marcelline, Francesca; Nduka, Charles
Currently, clinicians observe pain-related behaviors and use patient self-report measures in order to determine pain severity. This paper reviews the evidence when facial expression is used as a measure of pain. We review the literature reporting the relevance of facial expression as a diagnostic measure, which facial movements are indicative of pain, and whether such movements can be reliably used to measure pain. We conclude that although the technology for objective pain measurement is not yet ready for use in clinical settings, the potential benefits to patients in improved pain management, combined with the advances being made in sensor technology and artificial intelligence, provide opportunities for research and innovation.
Aviezer, Hillel; Hassin, Ran R; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo
The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG's impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face's emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG's performance was strongly influenced by the diagnosticity of the components: his emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components. Copyright © 2012 Elsevier Ltd. All rights reserved.
Sousa, R.; Rodrigues, J. M. F.; du Buf, J. M. H.
Face-to-face communications between humans involve emotions, which often are unconsciously conveyed by facial expressions and body gestures. Intelligent human-machine interfaces, for example in cognitive robotics, need to recognize emotions. This paper addresses facial expressions and their neural correlates on the basis of a model of the visual cortex: the multi-scale line and edge coding. The recognition model links the cortical representation with Paul Ekman's Action Units which are relate...
Woloszyn, Michael R.; Ewert, Laura
The effect of the emotional quality of study-phase background music on subsequent recall for happy and sad facial expressions was investigated. Undergraduates (N = 48) viewed a series of line drawings depicting a happy or sad child in a variety of environments that were each accompanied by happy or sad music. Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were rec...
A number of dermatologic procedures are intended to reduce facial wrinkles. This article, in contrast, considers wrinkles as an element of art. It explores how frown lines and other facial wrinkles are used in visual art to feature personal peculiarities and accentuate specific feelings or moods. Facial lines as an artistic element emerged with the advanced painting techniques that evolved during the Renaissance and the following periods. The skill to paint fine details, the use of light and shadow, and the understanding of space that allowed for a three-dimensional presentation of the human face were essential prerequisites. Painters used facial lines to emphasize respected values such as dignity, determination, diligence, and experience. Facial lines, however, were often accentuated to portray negative features such as anger, fear, aggression, sadness, exhaustion, and decay. This has reinforced a cultural stigma of facial wrinkles expressing not only age but also misfortune, dismay, or even tragedy. Removing wrinkles by dermatologic procedures may therefore aim not only to make people look younger but also to liberate them from unwelcome negative connotations. On the other hand, consideration and care must be taken, especially when interfering with facial muscles, to preserve a natural balance of emotional facial expressions.
Recio, Guillermo; Wilhelm, Oliver; Sommer, Werner; Hildebrandt, Andrea
Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.
Background Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Results Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex–MTG–IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. Conclusions These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD.
Gülkesen, Kemal Hakan; Isleyen, Filiz; Cinemre, Buket; Samur, Mehmet Kemal; Sen Kaya, Semiha; Zayim, Nese
Recognizing facial expressions is an important social skill. In some psychological disorders, such as schizophrenia, loss of this skill may complicate the patient's daily life. Prior research has shown that information technology may help to develop facial expression recognition skills through educational software and games. We examined whether a computer game designed for teaching facial expressions would improve the facial expression recognition skills of patients with schizophrenia. We developed a website composed of eight serious games. Thirty-two patients were given a pre-test composed of 21 facial expression photographs; 18 patients were assigned to the study group and 14 to the control group. Patients in the study group were asked to play the games on the website. After a period of one month, we administered a post-test to all patients. On the pre-test, the median number of correct answers (out of 21) was 17.5 in the control group and 16.5 in the study group. The median post-test score was 18 in the control group (p=0.052) whereas it was 20 in the study group. Serious games may be used for the purpose of educating people who have difficulty in recognizing facial expressions.
Capito, Eva Susanne; Lautenbacher, Stefan; Horn-Hofmann, Claudia
Background: As known from everyday experience and experimental research, alcohol modulates emotions. Particularly regarding social interaction, the effects of alcohol on the facial expression of emotion might be of relevance. However, these effects have not been systematically studied. We performed a systematic review of acute alcohol effects on social drinkers' facial expressions of induced positive and negative emotions. Materials and methods: With a predefined algorithm, we searched three electronic databases (PubMed, PsycInfo, and Web of Science) for studies conducted on social drinkers that used acute alcohol administration, emotion induction, and standardized methods to record facial expressions. We excluded studies that failed common quality standards and finally selected 13 investigations for this review. Results: Overall, alcohol exerted effects on facial expressions of emotions in social drinkers. These effects were not generally disinhibiting, but varied depending on the valence of the emotion and on social interaction. When consumed within social groups, alcohol mostly influenced facial expressions of emotions in a socially desirable way, thus underscoring the view of alcohol as a social lubricant. However, methodological differences in alcohol administration between the studies complicated comparability. Conclusion: Our review highlights the relevance of emotional valence and social-context factors for acute alcohol effects on social drinkers' facial expressions of emotions. Future research should investigate how these alcohol effects influence the development of problematic drinking behavior in social drinkers. PMID:29255375
The neuropeptide oxytocin plays a critical role in social behavior and emotion regulation in mammals. The aim of this study was to explore how nasal oxytocin administration affects gazing behavior during emotional perception in domestic dogs. Looking patterns of dogs, as a measure of voluntary attention, were recorded during the viewing of human facial expression photographs. The pupil diameters of dogs were also measured as a physiological index of emotional arousal. In a placebo-controlled within-subjects experimental design, 43 dogs, after having received either oxytocin or placebo (saline) nasal spray treatment, were presented with pictures of unfamiliar male human faces displaying either a happy or an angry expression. We found that, depending on the facial expression, the dogs' gaze patterns were affected selectively by oxytocin treatment. After receiving oxytocin, dogs fixated less often on the eye regions of angry faces and revisited (glanced back at) the eye regions of smiling (happy) faces more often than after the placebo treatment. Furthermore, following the oxytocin treatment, dogs fixated on and revisited the eyes of happy faces significantly more often than the eyes of angry faces. The analysis of dogs' pupil diameters during viewing of human facial expressions indicated that oxytocin may also have a modulatory effect on dogs' emotional arousal. While subjects' pupil sizes were significantly larger when viewing angry faces than happy faces in the control (placebo) condition, oxytocin treatment not only eliminated this effect but caused an opposite pupil response. Overall, these findings suggest that nasal oxytocin administration selectively changes the allocation of attention and emotional arousal in domestic dogs. Oxytocin has the potential to decrease vigilance toward threatening social stimuli and to increase the salience of positive social stimuli, thus making the eye gaze of friendly human faces more salient for dogs.
Prigent, Elise; Amorim, Michel-Ange; de Oliveira, Armando Mónica
Humans have developed a specific capacity to rapidly perceive and anticipate other people's facial expressions so as to get an immediate impression of their emotional state of mind. We carried out two experiments to examine the perceptual and memory dynamics of facial expressions of pain. In the first experiment, we investigated how people estimate other people's levels of pain based on the perception of various dynamic facial expressions; these differ both in terms of the amount and intensity of activated action units. A second experiment used a representational momentum (RM) paradigm to study the emotional anticipation (memory bias) elicited by the same facial expressions of pain studied in Experiment 1. Our results highlighted the relationship between the level of perceived pain (in Experiment 1) and the direction and magnitude of memory bias (in Experiment 2): When perceived pain increases, the memory bias tends to be reduced (if positive) and ultimately becomes negative. Dynamic facial expressions of pain may reenact an "immediate perceptual history" in the perceiver before leading to an emotional anticipation of the agent's upcoming state. Thus, a subtle facial expression of pain (i.e., a low contraction around the eyes) that leads to a significant positive anticipation can be considered an adaptive process-one through which we can swiftly and involuntarily detect other people's pain.
Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore whether domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface, and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant, and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but the eyes were the most probable targets of the first fixations and gathered longer looking durations than the mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but on the interpretation of the composition formed by the eyes, midface, and mouth. Dogs evaluated social threat rapidly, and this evaluation led to an attentional bias that depended on the depicted species: threatening conspecifics' faces evoked heightened attention, whereas threatening human faces instead evoked an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel
Altered facial expressions of emotions are characteristic impairments in schizophrenia. Ratings of affect have traditionally been limited to clinical rating scales and facial muscle movement analysis, which require extensive training and have limitations based on methodology and ecological validity. To improve reliable assessment of dynamic facial expression changes, we have developed automated measurements of facial emotion expressions based on information-theoretic measures of expressivity: the ambiguity and the distinctiveness of facial expressions. These measures were examined in matched groups of persons with schizophrenia (n=28) and healthy controls (n=26) who underwent video acquisition to assess expressivity of basic emotions (happiness, sadness, anger, fear, and disgust) in evoked conditions. Persons with schizophrenia scored higher on ambiguity, the measure of conditional entropy within the expression of a single emotion, and they scored lower on distinctiveness, the measure of mutual information across expressions of different emotions. The automated measures compared favorably with observer-based ratings. This method can be applied to delineating dynamic emotional expressivity in healthy and clinical populations.
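The two automated measures described above have direct information-theoretic definitions: ambiguity is the conditional entropy of the displayed facial configuration given the intended emotion, and distinctiveness is the mutual information between intended emotion and displayed configuration. A minimal stdlib-only sketch of how such measures could be computed from a discretized joint distribution follows; the function names and toy distributions are illustrative assumptions, not taken from the paper:

```python
from math import log2

def entropy(dist):
    """Shannon entropy (bits) of an iterable of probabilities."""
    return -sum(p * log2(p) for p in dist if p > 0)

def ambiguity(joint):
    """Conditional entropy H(X | E): uncertainty about the displayed
    facial configuration X given the intended emotion E.
    joint[e][x] is the joint probability P(E = e, X = x)."""
    h = 0.0
    for row in joint:
        p_e = sum(row)
        if p_e > 0:
            h += p_e * entropy(p / p_e for p in row)
    return h

def distinctiveness(joint):
    """Mutual information I(E; X): how well the displayed configuration
    discriminates between intended emotions."""
    p_e = [sum(row) for row in joint]
    p_x = [sum(col) for col in zip(*joint)]
    return entropy(p_e) + entropy(p_x) - entropy(p for row in joint for p in row)

# Toy joint distributions over 2 intended emotions x 3 facial configurations.
sharp = [[0.5, 0.0, 0.0],    # each emotion maps to exactly one configuration
         [0.0, 0.5, 0.0]]
fuzzy = [[0.25, 0.25, 0.0],  # each emotion spreads over two configurations
         [0.0, 0.25, 0.25]]
print(ambiguity(sharp), distinctiveness(sharp))  # 0.0 1.0
print(ambiguity(fuzzy))                          # 1.0
```

Under these definitions, the pattern reported in the abstract corresponds to higher `ambiguity` and lower `distinctiveness` in the patient group.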
Yitzhak, Neta; Giladi, Nir; Gurevich, Tanya; Messinger, Daniel S; Prince, Emily B; Martin, Katherine; Aviezer, Hillel
According to dominant theories of affect, humans innately and universally express a set of emotions using specific configurations of prototypical facial activity. Accordingly, thousands of studies have tested emotion recognition using sets of highly intense and stereotypical facial expressions, yet their incidence in real life is virtually unknown. In fact, a commonplace experience is that emotions are expressed in subtle and nonprototypical forms. Such facial expressions are at the focus of the current study. In Experiment 1, we present the development and validation of a novel stimulus set consisting of dynamic and subtle emotional facial displays conveyed without constraining expressers to using prototypical configurations. Although these subtle expressions were more challenging to recognize than prototypical dynamic expressions, they were still well recognized by human raters, and perhaps most importantly, they were rated as more ecological and naturalistic than the prototypical expressions. In Experiment 2, we examined the characteristics of subtle versus prototypical expressions by subjecting them to a software classifier, which used prototypical basic emotion criteria. Although the software was highly successful at classifying prototypical expressions, it performed very poorly at classifying the subtle expressions. Further validation was obtained from human expert face coders: Subtle stimuli did not contain many of the key facial movements present in prototypical expressions. Together, these findings suggest that emotions may be successfully conveyed to human viewers using subtle nonprototypical expressions. Although classic prototypical facial expressions are well recognized, they appear less naturalistic and may not capture the richness of everyday emotional communication. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland
Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
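VisNet's four-layer SOM hierarchy with associative feedforward learning is far richer than can be shown here, but the core self-organising-map step (winner selection plus neighbourhood-weighted weight update) can be sketched in a few lines. Everything below, including the function name, grid size, and learning-rate and neighbourhood schedules, is an illustrative assumption rather than the authors' implementation:

```python
from math import exp
import random

def train_som(data, grid_w=4, grid_h=4, epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny 2-D self-organising map on a list of input vectors.
    Returns (weights, bmu): the grid of weight vectors and a function
    mapping an input vector to its best-matching unit's (row, col)."""
    rng = random.Random(seed)
    dim = len(data[0])
    weights = [[[rng.random() for _ in range(dim)]
                for _ in range(grid_w)] for _ in range(grid_h)]

    def bmu(v):
        # Best-matching unit: the grid node whose weights are closest to v.
        return min(((y, x) for y in range(grid_h) for x in range(grid_w)),
                   key=lambda yx: sum((weights[yx[0]][yx[1]][i] - v[i]) ** 2
                                      for i in range(dim)))

    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1 - frac)              # learning rate decays toward 0
        sigma = sigma0 * (1 - frac) + 0.3  # neighbourhood radius shrinks
        for v in data:
            by, bx = bmu(v)
            for y in range(grid_h):
                for x in range(grid_w):
                    # Gaussian neighbourhood: nodes near the winner move most.
                    d2 = (y - by) ** 2 + (x - bx) ** 2
                    g = exp(-d2 / (2.0 * sigma ** 2))
                    node = weights[y][x]
                    for i in range(dim):
                        node[i] += lr * g * (v[i] - node[i])
    return weights, bmu

# Toy "cartoon faces": 2-D vectors (identity feature, expression feature),
# where identity and expression vary independently across the input set.
faces = [[i, e] for i in (0.0, 1.0) for e in (0.0, 1.0)]
weights, bmu = train_som(faces)
print(sorted({bmu(f) for f in faces}))  # grid positions of the winning units
```

In the full model, several such maps are stacked, and it is this layered self-organisation over statistically independent identity and expression dimensions that yields the separate response clusters the paper reports.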
Jiang, Yi; Shannon, Robert W; Vizueta, Nathalie; Bernat, Edward M; Patrick, Christopher J; He, Sheng
The fusiform face area (FFA) and the superior temporal sulcus (STS) are suggested to process facial identity and facial expression information respectively. We recently demonstrated a functional dissociation between the FFA and the STS as well as correlated sensitivity of the STS and the amygdala to facial expressions using an interocular suppression paradigm [Jiang, Y., He, S., 2006. Cortical responses to invisible faces: dissociating subsystems for facial-information processing. Curr. Biol. 16, 2023-2029.]. In the current event-related brain potential (ERP) study, we investigated the temporal dynamics of facial information processing. Observers viewed neutral, fearful, and scrambled face stimuli, either visibly or rendered invisible through interocular suppression. Relative to scrambled face stimuli, intact visible faces elicited larger positive P1 (110-130 ms) and larger negative N1 or N170 (160-180 ms) potentials at posterior occipital and bilateral occipito-temporal regions respectively, with the N170 amplitude significantly greater for fearful than neutral faces. Invisible intact faces generated a stronger signal than scrambled faces at 140-200 ms over posterior occipital areas whereas invisible fearful faces (compared to neutral and scrambled faces) elicited a significantly larger negative deflection starting at 220 ms along the STS. These results provide further evidence for cortical processing of facial information without awareness and elucidate the temporal sequence of automatic facial expression information extraction.
Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro
Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.
Kraaijenvanger, Eline J.; Hofman, Dennis; Bos, Peter A.
Facial expressions are considered central in conveying information about one's emotional state. During social encounters, facial expressions of another individual are often automatically imitated by the observer, a process referred to as ‘facial mimicry’. This process is assumed to facilitate
Young, Steven G; Thorstenson, Christopher A; Pazda, Adam D
Past research has found that skin colouration, particularly facial redness, influences the perceived health and emotional state of target individuals. In the current work, we explore several extensions of this past research. In Experiment 1, we manipulated facial redness incrementally on neutral and angry faces and had participants rate each face for anger and health. Distinct effects of red emerged: perceived anger increased linearly as facial redness increased, whereas health ratings showed a curvilinear trend, with both extreme paleness and extreme redness rated as less healthy than moderate levels of red. Experiment 2 replicated and extended these findings by manipulating the masculinity of both angry and neutral faces that varied in redness. The results showed that the effect of red on perceived anger and health was moderated by masculine face structure. Collectively, these results show that facial redness has context-dependent effects that vary with facial expression and appearance and that differentially impact ratings of emotional state and health.
Maki, Yohko; Yoshida, Hiroshi; Yamaguchi, Tomoharu; Yamaguchi, Haruyasu
Positivity recognition bias has been reported for facial expression, as well as for memory and visual stimuli, in aged individuals, whereas emotional facial recognition in Alzheimer disease (AD) patients is controversial, with possible involvement of confounding factors such as deficits in spatial processing of non-emotional facial features and in verbal processing to express emotions. Thus, we examined whether recognition of positive facial expressions was preserved in AD patients, by adapting a new method that eliminated the influences of these confounding factors. Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate the factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate the factors related to verbal processing, the participants were required to match the stimulus and answer images, avoiding the use of verbal labels. In recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in recognition of sadness, surprise, and anger. ANC were less sensitive than YNC in recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than that to the other five expressions. In AD patients, recognition of happiness was relatively preserved: it was the most sensitively recognized expression and resisted the influences of age and disease.
Leiva, Samanta; Margulis, Laura; Micciulli, Andrea; Ferreres, Aldo
Existing single-case studies have reported deficits in recognizing basic emotions through facial expression and unaffected performance with body expressions, but not the opposite pattern. The aim of this paper is to present a case study with impaired emotion recognition through body expressions and intact performance with facial expressions. In this single-case study we assessed a 30-year-old patient with autism spectrum disorder, without intellectual disability, and a healthy control group (n = 30) with four tasks of basic and complex emotion recognition through face and body movements, and two non-emotional control tasks. To analyze the dissociation between facial and body expressions, we used Crawford and Garthwaite's operational criteria, and we compared the patient's performance with the control group's using a modified one-tailed t-test designed specifically for single-case studies. There were no statistically significant differences between the patient's and the control group's performances on the non-emotional body movement task or the facial perception task. For both kinds of emotions (basic and complex), when the patient's performance was compared to the control group's, statistically significant differences were observed only for the recognition of body expressions. There were no significant differences between the patient's and the control group's correct answers for emotional facial stimuli. Our results showed a profile of impaired emotion recognition through body expressions and intact performance with facial expressions. This is the first case study that describes this kind of dissociation pattern between facial and body expressions of basic and complex emotions.
Matsumoto, David; Willingham, Bob
The study of the spontaneous expressions of blind individuals offers a unique opportunity to understand basic processes concerning the emergence and source of facial expressions of emotion. In this study, the authors compared the expressions of congenitally and noncongenitally blind athletes in the 2004 Paralympic Games with each other and with those produced by sighted athletes in the 2004 Olympic Games. The authors also examined how expressions change from 1 context to another. There were no differences between congenitally blind, noncongenitally blind, and sighted athletes, either on the level of individual facial actions or in facial emotion configurations. Blind athletes did produce more overall facial activity, but this was confined to head and eye movements. The blind athletes' expressions differentiated whether they had won or lost a medal match at 3 different points in time, and there were no cultural differences in expression. These findings provide compelling evidence that the production of spontaneous facial expressions of emotion is not dependent on observational learning, but they simultaneously demonstrate a learned component to the social management of expressions, even among blind individuals.
Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar
Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices, and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincidence rather than synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial
Stagg, Steven D.; Slavny, Rachel; Hand, Charlotte; Cardoso, Alice; Smith, Pamela
Research investigating expressivity in children with autism spectrum disorder has reported flat affect or bizarre facial expressivity within this population; however, the impact expressivity may have on first impression formation has received little research attention. We examined how videos of children with autism spectrum disorder were rated for…
Gao, Xiaoqing; Maurer, Daphne
Most previous studies investigating children's ability to recognize facial expressions used only intense exemplars. Here we compared the sensitivity of 5-, 7-, and 10-year-olds with that of adults (n = 24 per age group) for less intense expressions of happiness, sadness, and fear. The developmental patterns differed across expressions. For…
There is mixed evidence on the nature of the relationship between the perception of gaze direction and the perception of facial expressions. Major support for shared processing of gaze and expression comes from behavioral studies that showed that observers cannot process expression or gaze and ignore irrelevant variations in the other dimension.…
Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J.; Kilner, James
Background and aim: Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups of participants. Methods: Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (emotion recognition task). Participants were video-recorded while posing facial expressions of six primary emotions (happiness, sadness, surprise, disgust, fear, and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial, participants rated their confidence in the perceived accuracy of their response. Results: For emotion recognition, PD patients scored lower than HC on the Ekman total score and on the sub-scores for happiness, fear, anger, and sadness. In the facial emotion expressivity task, PD and HC differed significantly in the total score (p = 0.05) and in the sub-scores for happiness, sadness, and anger. There was a significant positive correlation between emotion recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). Conclusions: PD patients showed difficulties in recognizing emotional
Kujala, Miiamaaria V.; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri
Facial expressions are important for humans in communicating emotions to conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people's perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral, and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index), and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans, but not dogs, higher, and they were quicker in their valence judgments of Pleasant human, Threatening human, and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expressions in a similar manner, and the perception of both species is influenced by psychological factors of the evaluators. Empathy in particular affects both the speed and intensity of rating dogs' emotional facial expressions. PMID:28114335
Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun
Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than previous researches and other fusion methods.
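A minimal sketch of the two ingredients described above (distance-ratio shape features and score-level fusion) might look as follows. The landmark names, the particular ratios, and the fixed fusion weight are illustrative assumptions, not the paper's exact FACS-based ratios or its trained SVM combiner:

```python
import numpy as np

def shape_ratios(landmarks):
    """Distance ratios between facial feature points, as in shape-based
    recognition. `landmarks` maps hypothetical point names to (x, y) arrays."""
    d = lambda a, b: np.linalg.norm(landmarks[a] - landmarks[b])
    mouth_width = d("mouth_l", "mouth_r")
    eye_dist = d("eye_l", "eye_r")       # normalizer: inter-ocular distance
    mouth_open = d("lip_top", "lip_bot")
    return np.array([mouth_width / eye_dist, mouth_open / eye_dist])

def fuse_scores(shape_score, appearance_score, w=0.5):
    """Score-level fusion of the shape- and appearance-based matching scores.
    The paper trains an SVM for this step; a fixed convex combination is
    used here purely for illustration."""
    return w * shape_score + (1 - w) * appearance_score
```

With landmarks at, say, mouth corners 4 units apart and eyes 5 units apart, `shape_ratios` yields normalized features that are invariant to overall face scale, which is the point of using ratios rather than raw distances.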
Facial expressions are one of the most important means of nonverbal communication transporting both emotional and conversational content. For investigating this large space of expressions we recently developed a large database containing dynamic emotional and conversational expressions in Germany (MPI facial expression database). As facial expressions crucially depend on the cultural context, however, a similar resource is needed for studies outside of Germany. Here, we introduce and validate a new, extensive Korean facial expression database containing dynamic emotional and conversational information. Ten individuals performed 62 expressions following a method-acting protocol, in which each person was asked to imagine themselves in one of 62 corresponding everyday scenarios and to react accordingly. To validate this database, we conducted two experiments: 20 participants were asked to name the appropriate expression for each of the 62 everyday scenarios shown as text. Ten additional participants were asked to name each of the 62 expression videos from 10 actors in addition to rating its naturalness. All naming answers were then rated as valid or invalid. Scenario validation yielded 89% valid answers showing that the scenarios are effective in eliciting appropriate expressions. Video sequences were judged as natural with an average of 66% valid answers. This is an excellent result considering that videos were seen without any conversational context and that 62 expressions were to be recognized. These results validate our Korean database and, as they also parallel the German validation results, will enable detailed cross-cultural comparisons of the complex space of emotional and conversational expressions.
Kavallakis, George; Vidakis, Nikolaos; Triantafyllidis, Georgios
This paper presents a scheme for creating an emotion index of cover-song music video clips by recognizing and classifying the facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms employed for expression recognition with a neural network system that uses features extracted by the SIFT algorithm. We also argue for this fusion of different expression recognition algorithms because of the way emotions are linked to facial expressions in music video clips.
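The clip-level aggregation step implied above can be sketched as follows. The per-frame classifier (in the paper, SIFT features fed to a neural network) is assumed to exist upstream, and the simple frequency-based index here is an illustrative stand-in for whatever fusion the authors actually use:

```python
from collections import Counter

def emotion_index(frame_labels):
    """Turn per-frame facial-expression labels into a clip-level emotion
    index: the fraction of frames assigned to each recognized emotion."""
    counts = Counter(frame_labels)
    total = sum(counts.values())
    return {emotion: n / total for emotion, n in counts.items()}
```

For example, `emotion_index(["happy", "happy", "sad", "happy"])` returns `{"happy": 0.75, "sad": 0.25}`, a distribution that can then be attached to the clip as its emotion index.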
Nelson, Nicole L; Russell, James A
In a classic study, children were shown an array of facial expressions and asked to choose the person who expressed a specific emotion. Children were later asked to name the emotion in the face with any label they wanted. Subsequent research often relied on the same two tasks--choice from array and free labeling--to support the conclusion that children recognize basic emotions from facial expressions. Here, five studies (N=120, 2- to 10-year-olds) showed that these two tasks produce illusory recognition when a novel nonsense facial expression is included in the array. Children "recognized" a nonsense emotion (pax or tolen) and two familiar emotions (fear and jealousy) from the same nonsense face. Children likely used a process of elimination: they paired the unknown facial expression with a label given in the choice-from-array task and, after just two trials, freely labeled the new facial expression with the new label. These data indicate that past studies using this method may have overestimated children's expression knowledge. Copyright © 2015 Elsevier Inc. All rights reserved.
Ross, Elliott D; Pulusu, Vinay K
Clinical research has indicated that the left face is more expressive than the right face, suggesting that modulation of facial expressions is lateralized to the right hemisphere. The findings, however, are controversial because the results explain, on average, approximately 4% of the data variance. Using high-speed videography, we sought to determine if movement-onset asymmetry was a more powerful research paradigm than terminal movement asymmetry. The results were very robust, explaining up to 70% of the data variance. Posed expressions began overwhelmingly on the right face whereas spontaneous expressions began overwhelmingly on the left face. This dichotomy was most robust for upper facial expressions. In addition, movement-onset asymmetries did not predict terminal movement asymmetries, which were not significantly lateralized. The results support recent neuroanatomic observations that upper versus lower facial movements have different forebrain motor representations and recent behavioral constructs that posed versus spontaneous facial expressions are modulated preferentially by opposite cerebral hemispheres and that spontaneous facial expressions are graded rather than non-graded movements. Published by Elsevier Ltd.
Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in sensitivity to social and emotional cues as well as with altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflects individual differences in attending to diagnostic features of facial expressions. For this purpose, visual exploration of emotional facial expressions was compared between a low (n=39) and a high (n=40) 5-HTT expressing group of healthy human volunteers in an eye-tracking paradigm. Emotional faces were presented while the initial fixation was manipulated such that saccadic changes towards the eyes and towards the mouth could be identified. We found that the low versus the high 5-HTT group demonstrated greater accuracy in emotion classifications, particularly when faces were presented for a longer duration. No group differences in gaze orientation towards diagnostic facial features could be observed. However, participants in the low 5-HTT group exhibited more and faster fixation changes for certain emotions when faces were presented for a longer duration, and overall face fixation times were reduced for this genotype group. These results suggest that the 5-HTT gene influences social perception by modulating general vigilance to social cues rather than selectively affecting the pre-attentive detection of diagnostic facial features.
Fiset, Daniel; Blais, Caroline; Royer, Jessica; Richoz, Anne-Raphaëlle; Dugas, Gabrielle; Caldara, Roberto
Acquired prosopagnosia is characterized by a deficit in face recognition due to diverse brain lesions, but interestingly most prosopagnosic patients suffering from posterior lesions use the mouth instead of the eyes for face identification. Whether this bias is present for the recognition of facial expressions of emotion has not yet been addressed. We tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions dedicated for facial expression recognition. PS used mostly the mouth to recognize facial expressions even when the eye area was the most diagnostic. Moreover, PS directed most of her fixations towards the mouth. Her impairment was still largely present when she was instructed to look at the eyes, or when she was forced to look at them. Control participants showed a performance comparable to PS when only the lower part of the face was available. These observations suggest that the deficits observed in PS with static images are not solely attentional, but are rooted at the level of facial information use. This study corroborates neuroimaging findings suggesting that the Occipital Face Area might play a critical role in extracting facial features that are integrated for both face identification and facial expression recognition in static images. © The Author (2017). Published by Oxford University Press.
Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola
Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown whether a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand whether individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus process this facial dimension independently from features (which are impaired in CP) and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.
Mandal, Manas K; Ambady, Nalini
Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and, (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is more uninhibited, and displays culture-specific emotional norms. The right side of face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals. Copyright 2004 IOS Press
Silvio José Lemos Vasconcellos
Recent studies have investigated the ability of adult psychopaths and children with psychopathy traits to identify specific facial expressions of emotion. Conclusive results have not yet been found regarding whether psychopathic traits are associated with a specific deficit in the ability to identify negative emotions such as fear and sadness. This study compared 20 adolescents with psychopathic traits and 21 adolescents without these traits in terms of their ability to recognize facial expressions of emotion, using facial stimuli presented for 200 ms, 500 ms, and 1 s. Analyses indicated significant differences between the two groups' performances only for fear, and only when stimuli were displayed for 200 ms. This finding is consistent with findings from other studies in the field and suggests that controlling the duration of exposure to affective stimuli in future studies may help to clarify the mechanisms underlying the facial affect recognition deficits of individuals with psychopathic traits.
3D face models accurately capture facial surfaces, making it possible for precise description of facial activities. In this paper, we present a novel mesh-based method for 3D facial expression recognition using two local shape descriptors. To characterize shape information of the local neighborhood of facial landmarks, we calculate the weighted statistical distributions of surface differential quantities, including histogram of mesh gradient (HoG) and histogram of shape index (HoS). Normal cycle theory based curvature estimation method is employed on 3D face models along with the common cubic fitting curvature estimation method for the purpose of comparison. Based on the basic fact that different expressions involve different local shape deformations, the SVM classifier with both linear and RBF kernels outperforms the state of the art results on the subset of the BU-3DFE database with the same experimental setting. © 2011 Springer-Verlag.
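The histogram-of-shape-index (HoS) descriptor named above can be sketched from principal curvatures. The shape-index formula is the standard Koenderink definition; the bin count and the optional weighting are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index in [-1, 1] from principal curvatures k1 >= k2.
    +1 is a dome, -1 a cup, 0 a saddle; arctan2 handles the umbilic case
    (k1 == k2), where the plain ratio would divide by zero."""
    k1, k2 = np.asarray(k1, float), np.asarray(k2, float)
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def hos(k1, k2, weights=None, bins=8):
    """Weighted histogram of shape index (HoS) over the curvatures sampled
    in a landmark neighborhood, normalized to sum to 1."""
    s = shape_index(k1, k2)
    h, _ = np.histogram(s, bins=bins, range=(-1.0, 1.0), weights=weights)
    h = h.astype(float)
    total = h.sum()
    return h / total if total > 0 else h
```

An SVM over descriptors like these, concatenated with the gradient histograms (HoG), would then perform the classification step described in the abstract.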
Buimer, Hendrik P; Bittner, Marian; Kostelijk, Tjerk; van der Geest, Thea M; Nemri, Abdellatif; van Wezel, Richard J A; Zhao, Yan
In face-to-face social interactions, blind and visually impaired persons (VIPs) lack access to nonverbal cues like facial expressions, body posture, and gestures, which may lead to impaired interpersonal communication. In this study, a wearable sensory substitution device (SSD) consisting of a head mounted camera and a haptic belt was evaluated to determine whether vibrotactile cues around the waist could be used to convey facial expressions to users and whether such a device is desired by VIPs for use in daily living situations. Ten VIPs (mean age: 38.8, SD: 14.4) and 10 sighted persons (SPs) (mean age: 44.5, SD: 19.6) participated in the study, in which validated sets of pictures, silent videos, and videos with audio of facial expressions were presented to the participant. A control measurement was first performed to determine how accurately participants could identify facial expressions while relying on their functional senses. After a short training, participants were asked to determine facial expressions while wearing the emotion feedback system. VIPs using the device showed significant improvements in their ability to determine which facial expressions were shown. A significant increase in accuracy of 44.4% was found across all types of stimuli when comparing the scores of the control (mean±SEM: 35.0±2.5%) and supported (mean±SEM: 79.4±2.1%) phases. The greatest improvements achieved with the support of the SSD were found for silent stimuli (68.3% for pictures and 50.8% for silent videos). SPs also showed consistent, though not statistically significant, improvements while supported. Overall, our study shows that vibrotactile cues are well suited to convey facial expressions to VIPs in real-time. Participants became skilled with the device after a short training session. Further testing and development of the SSD is required to improve its accuracy and aesthetics for potential daily use.
Calvo, Manuel G; Nummenmaa, Lauri
Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.
The perception of facial expressions has been suggested to be universal. However, studies have shown an in-group advantage (IGA) in the recognition of facial expressions (e.g., Matsumoto, 1989, 1992): people understand emotions more accurately when those emotions are expressed by members of their own cultural group. A balanced design was used to investigate whether this IGA is shown by Western people as well as by Asian people (Taiwanese). An emotional identification task asked participants to identify positive (happy) and negative (sadness, fear, and anger) faces among Eastern and Western faces. We used Eastern faces from the Taiwanese Facial Expression Image Database (Chen, 2007) and Western faces from Ekman & Friesen (1979). Both reaction times and accuracy of performance were measured. Results showed that although all participants could identify positive and negative faces accurately, Asian participants responded significantly faster to negative Eastern faces than to negative Western faces. A similar IGA effect was also shown by Western participants. However, no such culture difference was found for positive faces. The results reveal that the in-group advantage in the perception of facial expressions is specific to negative emotions, and they question the universality of perceiving facial expressions.
Jack, Rachael E; Caldara, Roberto; Schyns, Philippe G
Facial expressions have long been considered the "universal language of emotion." Yet consistent cultural differences in the recognition of facial expressions contradict such notions (e.g., R. E. Jack, C. Blais, C. Scheepers, P. G. Schyns, & R. Caldara, 2009). Rather, culture--as an intricate system of social concepts and beliefs--could generate different expectations (i.e., internal representations) of facial expression signals. To investigate, they used a powerful psychophysical technique (reverse correlation) to estimate the observer-specific internal representations of the 6 basic facial expressions of emotion (i.e., happy, surprise, fear, disgust, anger, and sad) in two culturally distinct groups (i.e., Western Caucasian [WC] and East Asian [EA]). Using complementary statistical image analyses, cultural specificity was directly revealed in these representations. Specifically, whereas WC internal representations predominantly featured the eyebrows and mouth, EA internal representations showed a preference for expressive information in the eye region. Closer inspection of the EA observer preference revealed a surprising feature: changes of gaze direction, shown primarily among the EA group. For the first time, it is revealed directly that culture can finely shape the internal representations of common facial expressions of emotion, challenging notions of a biologically hardwired "universal language of emotion."
Background. Facial expressions of emotions represent classic stimuli for the study of social cognition. Developing virtual dynamic facial expressions of emotions, however, would open up possibilities for both fundamental and clinical research. For instance, virtual faces allow real-time human-computer feedback loops between physiological measures and the virtual agent. Objectives. The goal of this study was to initially assess the concomitant and construct validity of a newly developed set of virtual faces expressing six fundamental emotions (happiness, surprise, anger, sadness, fear, or disgust). Recognition rates, facial electromyography (zygomatic major and corrugator supercilii muscles), and regional gaze fixation latencies (eyes and mouth regions) were compared in 41 adult volunteers (20 ♂, 21 ♀) during the presentation of video clips depicting real vs. virtual adults expressing emotions. Results. Emotions expressed by each set of stimuli were similarly recognized, both by men and women. Accordingly, both sets of stimuli elicited similar activation of facial muscles and similar ocular fixation times in eye regions from male and female participants. Conclusion. Further validation studies can be performed with these virtual faces among clinical populations known to present social cognition difficulties. Brain-computer interface studies with feedback-feedforward interactions based on facial emotion expressions can also be conducted with these stimuli.
Greening, Steven G; Mitchell, Derek G V; Smith, Fraser W
A network of cortical and sub-cortical regions is known to be important in the processing of facial expression. However, to date no study has investigated whether representations of facial expressions present in this network permit generalization across independent samples of face information (e.g., eye region vs mouth region). We presented participants with partial face samples of five expression categories in a rapid event-related fMRI experiment. We reveal a network of face-sensitive regions that contain information about facial expression categories regardless of which part of the face is presented. We further reveal that the neural information present in a subset of these regions: dorsal prefrontal cortex (dPFC), superior temporal sulcus (STS), lateral occipital and ventral temporal cortex, and even early visual cortex, enables reliable generalization across independent visual inputs (faces depicting the 'eyes only' vs 'eyes removed'). Furthermore, classification performance was correlated to behavioral performance in STS and dPFC. Our results demonstrate that both higher (e.g., STS, dPFC) and lower level cortical regions contain information useful for facial expression decoding that go beyond the visual information presented, and implicate a key role for contextual mechanisms such as cortical feedback in facial expression perception under challenging conditions of visual occlusion. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sfärlea, Anca; Greimel, Ellen; Platt, Belinda; Dieler, Alica C; Schulte-Körne, Gerd
Anorexia nervosa (AN) has been suggested to be associated with abnormalities in facial emotion recognition. Most prior studies on facial emotion recognition in AN have investigated adult samples, despite the onset of AN being particularly often during adolescence. In addition, few studies have examined whether impairments in facial emotion recognition are specific to AN or might be explained by frequent comorbid conditions that are also associated with deficits in emotion recognition, such as depression. The present study addressed these gaps by investigating recognition of emotional facial expressions in adolescent girls with AN (n = 26) compared to girls with major depression (MD; n = 26) and healthy girls (HC; n = 37). Participants completed one task requiring identification of emotions (happy, sad, afraid, angry, neutral) in faces and two control tasks. Neither of the clinical groups showed impairments. The AN group was more accurate than the HC group in recognising afraid facial expressions and more accurate than the MD group in recognising happy, sad, and afraid expressions. Misclassification analyses identified subtle group differences in the types of errors made. The results suggest that the deficits in facial emotion recognition found in adult AN samples are not present in adolescent patients. Copyright © 2017 Elsevier B.V. All rights reserved.
Balconi, Michela; Vanutelli, Maria Elide; Finocchiaro, Roberta
Background. The paper explored emotion comprehension in children with regard to facial expressions of emotion. The effects of valence and arousal evaluation, of context, and of psychophysiological measures were monitored. Subjective evaluations of valence (positive vs. negative) and arousal (high vs. low), together with contextual (facial expression vs. facial expression and script) variables, were hypothesized to modulate the psychophysiological responses. Methods. Self-report measures (in terms of correct…
One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, which normally allows the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus, unbalancing victims’ recognition of negative emotions. Despite the plethora of studies on this topic, to date it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55 years, SE 0.30; years of schooling 8.07 years, SE 0.30; 15 males) performed a forced-choice chimeric facial expression recognition task. The chimeric facial expressions were obtained by pairing upper and lower half faces of two different negative emotions (selected from anger, fear and sadness) for a total of six different combinations. Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger: in this case, differently from controls, the survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims’ performance in recognizing the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify…
Deska, Jason C; Lloyd, E Paige; Hugenberg, Kurt
The ability to rapidly and accurately decode facial expressions is adaptive for human sociality. Although judgments of emotion are primarily determined by musculature, static face structure can also impact emotion judgments. The current work investigates how facial width-to-height ratio (fWHR), a stable feature of all faces, influences perceivers' judgments of expressive displays of anger and fear (Studies 1a, 1b, & 2), and anger and happiness (Study 3). Across 4 studies, we provide evidence consistent with the hypothesis that perceivers more readily see anger on faces with high fWHR compared with those with low fWHR, which instead facilitates the recognition of fear and happiness. This bias emerges when participants are led to believe that targets displaying otherwise neutral faces are attempting to mask an emotion (Studies 1a & 1b), and is evident when faces display an emotion (Studies 2 & 3). Together, these studies suggest that target facial width-to-height ratio biases ascriptions of emotion with consequences for emotion recognition speed and accuracy. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Roy P. C. Kessels
Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD). FTLD patients also show impairments in emotion processing; specifically, the perception of negative emotional facial expressions is affected. Generally, however, negative emotional expressions are regarded as more difficult to recognize than positive ones, which may thus have been a confounding factor in previous studies. Also, ceiling effects are often present in emotion recognition tasks using full-blown emotional facial expressions. In the present study with FTLD patients, we examined the perception of sadness, anger, fear, happiness, surprise and disgust at different emotional intensities on morphed facial expressions to take task difficulty into account. Results showed that our FTLD patients were specifically impaired at recognizing anger. The patients also performed worse than controls on recognition of surprise, but performed at control levels on disgust, happiness, sadness and fear. These findings corroborate and extend previous results showing deficits in emotion perception in FTLD.
Beatrice eDe Gelder
There are many ways to assess face perception skills. In this study, we describe a novel task battery, the Facial Expression Action Stimulus Test (FEAST), developed to test recognition of identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and object identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data of a healthy sample of controls in two age groups for future users of the FEAST.
Rauch Scott L
Background: The amygdala habituates in response to repeated human facial expressions; however, it is unclear whether this brain region habituates to schematic faces (i.e., simple line drawings or caricatures of faces). Using an fMRI block design, 16 healthy participants passively viewed repeated presentations of schematic and human neutral and negative facial expressions. Percent signal changes within anatomic regions-of-interest (amygdala and fusiform gyrus) were calculated to examine the temporal dynamics of neural response and any response differences based on face type. Results: The amygdala and fusiform gyrus had a within-run "U" response pattern of activity to facial expression blocks. The initial block within each run elicited the greatest activation (relative to baseline) and the final block elicited greater activation than the preceding block. No significant differences between schematic and human faces were detected in the amygdala or fusiform gyrus. Conclusion: The "U" pattern of response in the amygdala and fusiform gyrus to facial expressions suggests an initial orienting, habituation, and activation recovery in these regions. Furthermore, this study is the first to directly compare brain responses to schematic and human facial expressions, and the similarity in brain responses suggests that schematic faces may be useful in studying amygdala activation.
Descovich, Kris A; Wathan, Jennifer; Leach, Matthew C; Buchanan-Smith, Hannah M; Flecknell, Paul; Farningham, David; Vick, Sarah-Jane
Animal welfare is a key issue for industries that use or impact upon animals. The accurate identification of welfare states is particularly relevant to the field of bioscience, where the 3Rs framework encourages refinement of experimental procedures involving animal models. The assessment and improvement of welfare states in animals depends on reliable and valid measurement tools. Behavioral measures (activity, attention, posture and vocalization) are frequently used because they are immediate and non-invasive, however no single indicator can yield a complete picture of the internal state of an animal. Facial expressions are extensively studied in humans as a measure of psychological and emotional experiences but are infrequently used in animal studies, with the exception of emerging research on pain behavior. In this review, we discuss current evidence for facial representations of underlying affective states, and how communicative or functional expressions can be useful within welfare assessments. Validated tools for measuring facial movement are outlined, and the potential of expressions as honest signals is discussed, alongside other challenges and limitations to facial expression measurement within the context of animal welfare. We conclude that facial expression determination in animals is a useful but underutilized measure that complements existing tools in the assessment of welfare.
Chen, Zhongting; Poon, Kai-Tak; Cheng, Cecilia
Studies have examined social maladjustment among individuals with Internet addiction, but little is known about their deficits in specific social skills and the underlying psychological mechanisms. The present study filled these gaps by (a) establishing a relationship between deficits in facial expression recognition and Internet addiction, and (b) examining the mediating role of perceived stress that explains this hypothesized relationship. Ninety-seven participants completed validated questionnaires that assessed their levels of Internet addiction and perceived stress, and performed a computer-based task that measured their facial expression recognition. The results revealed a positive relationship between deficits in recognizing disgust facial expression and Internet addiction, and this relationship was mediated by perceived stress. However, the same findings did not apply to other facial expressions. Ad hoc analyses showed that recognizing disgust was more difficult than recognizing other facial expressions, reflecting that the former task assesses a social skill that requires cognitive astuteness. The present findings contribute to the literature by identifying a specific social skill deficit related to Internet addiction and by unveiling a psychological mechanism that explains this relationship, thus providing more concrete guidelines for practitioners to strengthen specific social skills that mitigate both perceived stress and Internet addiction. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
Livingstone, Steven R; Thompson, William F; Wanderley, Marcelo M; Palmer, Caroline
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech-song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech-song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were poorly identified, yet were identified accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, yet were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet highlight differences in perception and acoustic-motor production.
Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver
Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a "reactivation" of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses, recorded with electromyogram (EMG), in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.
Humphreys, Kathryn L; Kircanski, Katharina; Colich, Natalie L; Gotlib, Ian H
Early life stress is associated with poorer social functioning. Attentional biases in response to threat-related cues, linked to both early experience and psychopathology, may explain this association. To date, however, no study has examined attentional biases to fearful facial expressions as a function of early life stress or examined these biases as a potential mediator of the relation between early life stress and social problems. In a sample of 154 children (ages 9-13 years) we examined the associations among interpersonal early life stressors (i.e., birth through age 6 years), attentional biases to emotional facial expressions using a dot-probe task, and social functioning on the Child Behavior Checklist. High levels of early life stress were associated with both greater levels of social problems and an attentional bias away from fearful facial expressions, even after accounting for stressors occurring in later childhood. No biases were found for happy or sad facial expressions as a function of early life stress. Finally, attentional biases to fearful faces mediated the association between early life stress and social problems. Attentional avoidance of fearful facial expressions, evidenced by a bias away from these stimuli, may be a developmental response to early adversity and link the experience of early life stress to poorer social functioning. © 2016 Association for Child and Adolescent Mental Health.
In this paper, we present technical approaches that bridge the gap in the research related to the use of brain-computer interfaces for entertainment and facial expressions. Such facial expressions that reflect an individual's personal traits can be used to better realize artificial facial expressions in a gaming environment based on a brain-computer interface. First, an emotion extraction filter is introduced in order to classify emotions on the basis of the users' brain signals in real time. Next, a personality trait filter is defined to classify extrovert and introvert types, which manifest as five traits: very extrovert, extrovert, medium, introvert and very introvert. In addition, facial expressions derived from expression rates are obtained by an extrovert-introvert fuzzy model through its defuzzification process. Finally, we confirm this validation via an analysis of the variance of the personality trait filter, a k-fold cross validation of the emotion extraction filter, an accuracy analysis, a user study of facial synthesis and a test case game.
Qu, Fangbing; Yan, Wen-Jing; Chen, Yu-Hsin; Li, Kaiyun; Zhang, Hui; Fu, Xiaolan
The awareness of facial expressions allows one to better understand, predict, and regulate his/her states to adapt to different social situations. The present research investigated individuals' awareness of their own facial expressions and the influence of the duration and intensity of expressions in two self-reference modalities, a real-time condition and a video-review condition. The participants were instructed to respond as soon as they became aware of any facial movements. The results revealed that awareness rates were 57.79% in the real-time condition and 75.92% in the video-review condition. The awareness rate was influenced by the intensity and/or the duration. The intensity thresholds for individuals to become aware of their own facial expressions were calculated using logistic regression models. The results of Generalized Estimating Equations (GEE) revealed that video-review awareness was a significant predictor of real-time awareness. These findings extend understandings of human facial expression self-awareness in two modalities.
Zanette, Sarah; Gao, Xiaoqing; Brunet, Megan; Bartlett, Marian Stewart; Lee, Kang
The current study used computer vision technology to examine the nonverbal facial expressions of children (6-11years old) telling antisocial and prosocial lies. Children in the antisocial lying group completed a temptation resistance paradigm where they were asked not to peek at a gift being wrapped for them. All children peeked at the gift and subsequently lied about their behavior. Children in the prosocial lying group were given an undesirable gift and asked if they liked it. All children lied about liking the gift. Nonverbal behavior was analyzed using the Computer Expression Recognition Toolbox (CERT), which employs the Facial Action Coding System (FACS), to automatically code children's facial expressions while lying. Using CERT, children's facial expressions during antisocial and prosocial lying were accurately and reliably differentiated significantly above chance-level accuracy. The basic expressions of emotion that distinguished antisocial lies from prosocial lies were joy and contempt. Children expressed joy more in prosocial lying than in antisocial lying. Girls showed more joy and less contempt compared with boys when they told prosocial lies. Boys showed more contempt when they told prosocial lies than when they told antisocial lies. The key action units (AUs) that differentiate children's antisocial and prosocial lies are blink/eye closure, lip pucker, and lip raise on the right side. Together, these findings indicate that children's facial expressions differ while telling antisocial versus prosocial lies. The reliability of CERT in detecting such differences in facial expression suggests the viability of using computer vision technology in deception research. Copyright © 2016 Elsevier Inc. All rights reserved.
Montirosso, Rosario; Peverelli, Milena; Frigerio, Elisa; Crespi, Monica; Borgatti, Renato
The primary purpose of this study was to examine the effect of the intensity of emotion expression on children's developing ability to label emotion during a dynamic presentation of five facial expressions (anger, disgust, fear, happiness, and sadness). A computerized task (AFFECT--animated full facial expression comprehension test) was used to…
Siddiqi, Muhammad Hameed; Ali, Maqbool; Idris, Muhammad; Banos Legran, Oresti; Lee, Sungyoung; Choo, Hyunseung
One limitation seen among most of the previous methods is that they were evaluated under settings that are far from real-life scenarios. The reason is that the existing facial expression recognition (FER) datasets are mostly pose-based and assume a predefined setup. The expressions in these datasets
Camras, Linda A.; And Others
European American, Japanese, and Chinese 11-month-olds participated in emotion-inducing laboratory procedures. Facial responses were scored with BabyFACS, an anatomically based coding system. Overall, Chinese infants were less expressive than European American and Japanese infants, suggesting that differences in expressivity between European…
aan het Rot, Marije; Enea, Violeta; Dafinoiu, Ion; Iancu, Sorina; Taftă, Steluţa A; Bărbuşelu, Mariana
While the recognition of emotional expressions has been extensively studied, the behavioural response to these expressions has not. In the interpersonal circumplex, behaviour is defined in terms of communion and agency. In this study, we examined behavioural responses to both facial and postural
Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian
Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average expression. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based. PsycINFO Database Record (c) 2013 APA, all rights reserved.
van der Schalk, J.; Hawk, S.T.; Fischer, A.H.; Doosje, B.
We report two studies validating a new standardized set of filmed emotion expressions, the Amsterdam Dynamic Facial Expression Set (ADFES). The ADFES is distinct from existing datasets in that it includes a face-forward version and two different head-turning versions (faces turning toward and away
Fang, X.; Sauter, D.A.; van Kleef, G.A.
Although perceivers often agree about the primary emotion that is conveyed by a particular expression, observers may concurrently perceive several additional emotions from a given facial expression. In the present research, we compared the perception of two types of nonintended emotions in Chinese
Buss, Kristin A.; Kiel, Elizabeth J.
Research suggests that sadness expressions may be more beneficial to children than other emotions when eliciting support from caregivers. It is unclear, however, when children develop the ability to regulate their displays of distress. The current study addressed this question. Distress facial expressions (e.g., fear, anger, and sadness) were…
Nijholt, Antinus; Poppe, Ronald Walter
In this workshop of the 8th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2008), the emphasis is on research on facial and bodily expressions for the control and adaptation of games. We distinguish between two forms of expressions, depending on whether the user has the
Rigato, Silvia; Menon, Enrica; Di Gangi, Valentina; George, Nathalie; Farroni, Teresa
Faces convey many signals (i.e., gaze or expressions) essential for interpersonal interaction. We have previously shown that facial expressions of emotion and gaze direction are processed and integrated in specific combinations early in life. These findings open a number of developmental questions and specifically in this paper we address whether…
Virji-Babul, Naznin; Watt, Kimberley; Nathoo, Farouk; Johnson, Peter
Research on facial expressions in individuals with Down syndrome (DS) has been conducted using photographs. Our goal was to examine the effect of motion on perception of emotional expressions. Adults with DS, adults with typical development matched for chronological age (CA), and children with typical development matched for developmental age (DA)…
Walsh, Jennifer A.; Vida, Mark D.; Rutherford, M. D.
Rutherford and McIntosh (J Autism Dev Disord 37:187-196, 2007) demonstrated that individuals with autism spectrum disorder (ASD) are more tolerant than controls of exaggerated schematic facial expressions, suggesting that they may use an alternative strategy when processing emotional expressions. The current study was designed to test this finding…
Regenbogen, Christina; Schneider, Daniel A; Gur, Raquel E; Schneider, Frank; Habel, Ute; Kellermann, Thilo
Human communication is based on a dynamic information exchange of the communication channels facial expressions, prosody, and speech content. This fMRI study elucidated the impact of multimodal emotion processing and the specific contribution of each channel on behavioral empathy and its prerequisites. Ninety-six video clips displaying actors who told self-related stories were presented to 27 healthy participants. In two conditions, all channels uniformly transported only emotional or neutral information. Three conditions selectively presented two emotional channels and one neutral channel. Subjects indicated the actors' emotional valence and their own while fMRI was recorded. Activation patterns of tri-channel emotional communication reflected multimodal processing and facilitative effects for empathy. Accordingly, subjects' behavioral empathy rates significantly deteriorated once one source was neutral. However, emotionality expressed via two of three channels yielded activation in a network associated with theory-of-mind-processes. This suggested participants' effort to infer mental states of their counterparts and was accompanied by a decline of behavioral empathy, driven by the participants' emotional responses. Channel-specific emotional contributions were present in modality-specific areas. The identification of different network-nodes associated with human interactions constitutes a prerequisite for understanding dynamics that underlie multimodal integration and explain the observed decline in empathy rates. This task might also shed light on behavioral deficits and neural changes that accompany psychiatric diseases. Copyright © 2012 Elsevier Inc. All rights reserved.
Belham, Flávia Schechtman; Tavares, Maria Clotilde H; Satler, Corina; Garcia, Ana; Rodrigues, Rosângela C; Canabarro, Soraya L de Sá; Tomaz, Carlos
Many studies have investigated the influence of emotion on memory processes across the human lifespan. Some results have shown older adults (OA) performing better with positive stimuli, some with negative items, whereas some found no impact of emotional valence. Here we tested, in two independent studies, how younger adults (YA) and OA would perform in a visuospatial working memory (VSWM) task with positive, negative, and neutral images. The task consisted of identifying the new location of a stimulus in a growing set of identical stimuli presented in different locations in a touch-screen monitor. In other words, participants should memorize the locations previously occupied to identify the new location. For each trial, the number of occupied locations increased up to 8 or until a mistake was made. In study 1, 56 YA and 38 OA completed the task using images from the International Affective Picture System (IAPS). Results showed that, although YA outperformed OA, no effects of emotion were found. In study 2, 26 YA and 25 OA were tested using facial expressions as stimuli. Data from this study showed that negative faces facilitated performance and this effect did not differ between age groups. No differences were found between men and women. Taken together, our findings suggest that YA and OA's VSWM can be influenced by the emotional valence of the information, though this effect was present only for facial stimuli. Presumably, this may have happened due to the social and biological importance of such stimuli, which are more effective in transmitting emotions than IAPS images. Critically, our results also indicate that the mixed findings in the literature about the influence of aging on the interactions between memory and emotion may be caused by the use of different stimuli and methods. This possibility should be kept in mind in future studies about memory and emotion across the lifespan.
Peng, Xiaozhe; Cui, Fang; Wang, Ting; Jiao, Can
Internet Gaming Disorder (IGD) is characterized by impairments in social communication and the avoidance of social contact. Facial expression processing is the basis of social communication. However, few studies have investigated how individuals with IGD process facial expressions, and whether they have deficits in emotional facial processing remains unclear. The aim of the present study was to explore these two issues by investigating the time course of emotional facial processing in individuals with IGD. A backward masking task was used to investigate the differences between individuals with IGD and normal controls (NC) in the processing of subliminally presented facial expressions (sad, happy, and neutral) with event-related potentials (ERPs). The behavioral results showed that individuals with IGD are slower than NC in response to both sad and neutral expressions in the sad-neutral context. The ERP results showed that individuals with IGD exhibit decreased amplitudes in ERP component N170 (an index of early face processing) in response to neutral expressions compared to happy expressions in the happy-neutral expressions context, which might be due to their expectancies for positive emotional content. The NC, on the other hand, exhibited comparable N170 amplitudes in response to both happy and neutral expressions in the happy-neutral expressions context, as well as sad and neutral expressions in the sad-neutral expressions context. Both individuals with IGD and NC showed comparable ERP amplitudes during the processing of sad expressions and neutral expressions. The present study revealed that individuals with IGD have different unconscious neutral facial processing patterns compared with normal individuals and suggested that individuals with IGD may expect more positive emotion in the happy-neutral expressions context.
Soussignan, Robert; Dollion, Nicolas; Schaal, Benoist; Durand, Karine; Reissland, Nadja; Baudouin, Jean-Yves
While there is an extensive literature on the tendency to mimic emotional expressions in adults, it is unclear how this skill emerges and develops over time. Specifically, it is unclear whether infants mimic discrete emotion-related facial actions, whether their facial displays are moderated by contextual cues and whether infants' emotional mimicry is constrained by developmental changes in the ability to discriminate emotions. We therefore investigate these questions using Baby-FACS to code infants' facial displays and eye-movement tracking to examine infants' looking times at facial expressions. Three-, 7-, and 12-month-old participants were exposed to dynamic facial expressions (joy, anger, fear, disgust, sadness) of a virtual model which either looked at the infant or had an averted gaze. Infants did not match emotion-specific facial actions shown by the model, but they produced valence-congruent facial responses to the distinct expressions. Furthermore, only the 7- and 12-month-olds displayed negative responses to the model's negative expressions and they looked more at areas of the face recruiting facial actions involved in specific expressions. Our results suggest that valence-congruent expressions emerge in infancy during a period where the decoding of facial expressions becomes increasingly sensitive to the social signal value of emotions.
Rajhans, Purva; Jessen, Sarah; Missana, Manuela; Grossmann, Tobias
Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Chao, Xiuhua; Xu, Lei; Li, Jianfeng; Han, Yuechen; Li, Xiaofei; Mao, YanYan; Shang, Haiqiong; Fan, Zhaomin; Wang, Haibo
Objective This study investigated the effects of chitosan-β-glycerophosphate-nerve growth factor (C/GP-NGF) hydrogel combined with an autologous vein conduit on the recovery of a damaged facial nerve in a rat model. Methods A 5 mm gap in the buccal branch of a rat facial nerve was reconstructed with an autologous vein. Next, C/GP-NGF hydrogel was injected into the vein conduit. In the negative control groups, NGF solution or phosphate-buffered saline (PBS) was injected into the vein conduits, respectively. Autologous implantation was used as the positive control group. Vibrissae movement, electrophysiological assessment, and morphological analysis of regenerated nerves were performed to assess nerve regeneration. Results NGF was continuously released from the C/GP-NGF hydrogel in vitro. The recovery rate of vibrissae movement and the compound muscle action potentials of the regenerated facial nerve in the C/GP-NGF group were similar to those in the Auto group, and significantly better than those in the NGF group. Furthermore, larger regenerated axons and thicker myelin sheaths were obtained in the C/GP-NGF group than in the NGF group. Conclusion C/GP hydrogel was demonstrated to be an ideal drug delivery vehicle and scaffold in the vein conduit. The combined use of an autologous vein and NGF continuously delivered by C/GP-NGF hydrogel can improve the recovery of facial nerve defects.
Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin
The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscles, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were thus larger in males than in females, but involved fewer markers. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. © 2015 Wiley Periodicals, Inc.
Camras, L A; Oster, H; Campos, J; Campos, R; Ujiie, T; Miyake, K; Wang, L; Meng, Z
European American, Japanese, and Chinese 11-month-olds participated in emotion-inducing laboratory procedures. Facial responses were scored with BabyFACS, an anatomically based coding system. Overall, Chinese infants were less expressive than European American and Japanese infants. On measures of smiling and crying, Chinese infants scored lower than European American infants, whereas Japanese infants were similar to the European American infants or fell between the two other groups. Results suggest that differences in expressivity between European American and Chinese infants are more robust than those between European American and Japanese infants and that Chinese and Japanese infants can differ significantly. Cross-cultural differences were also found for some specific brow, cheek, and midface facial actions (e.g., brows lowered). These are discussed in terms of current controversies about infant affective facial expressions.
Furl, Nicholas; Gallagher, Shannon; Averbeck, Bruno B.
Emotional and social information can sway otherwise rational decisions. For example, when participants decide between two faces that are probabilistically rewarded, they make biased choices that favor smiling relative to angry faces. This bias may arise because facial expressions evoke positive and negative emotional responses, which in turn may motivate social approach and avoidance. We tested a wide range of pictures that evoke emotions or convey social information, including animals, words, foods, a variety of scenes, and faces differing in trustworthiness or attractiveness, but we found only facial expressions biased decisions. Our results extend brain imaging and pharmacological findings, which suggest that a brain mechanism supporting social interaction may be involved. Facial expressions appear to exert special influence over this social interaction mechanism, one capable of biasing otherwise rational choices. These results illustrate that only specific types of emotional experiences can best sway our choices.
Horning, Sheena M; Cornwell, R Elisabeth; Davis, Hasker P
The present study aimed to investigate changes in facial expression recognition across the lifespan, as well as to determine the influence of fluid intelligence, processing speed, and memory on this ability. Peak performance in the ability to identify facial affect was found to occur in middle-age, with the children and older adults performing the poorest. Specifically, older adults were impaired in their ability to identify fear, sadness, and happiness, but had preserved recognition of anger, disgust, and surprise. Analyses investigating the influence of cognition on emotion recognition demonstrated that cognitive abilities contribute to performance, especially for participants over age 45. However, the cognitive functions did not fully account for the older adults' impairments on expression recognition. Overall, the age-related deficits in facial expression recognition have implications for older adults' use of non-verbal communicative information.
Scherer, Klaus R; Ellgring, Heiner
The different assumptions made by discrete and componential emotion theories about the nature of the facial expression of emotion and the underlying mechanisms are reviewed. Explicit and implicit predictions are derived from each model. It is argued that experimental expression-production paradigms rather than recognition studies are required to critically test these differential predictions. Data from a large-scale actor portrayal study are reported to demonstrate the utility of this approach. The frequencies with which 12 professional actors use major facial muscle actions individually and in combination to express 14 major emotions show little evidence for emotion-specific prototypical affect programs. Rather, the results encourage empirical investigation of componential emotion model predictions of dynamic configurations of appraisal-driven adaptive facial actions. (c) 2007 APA, all rights reserved.
Ahmad Hoirul Basori
The process of introducing emotion can be improved through a three-dimensional (3D) tutoring system. The problem that remains unsolved is how to provide a realistic tutor (avatar) in a virtual environment. This paper proposes an approach to teach children to understand emotion sensation through facial expression and the sense of touch (haptics). The algorithm calculates a constant factor (f) based on the maximum RGB value and the magnitude of force; each magnitude-force range is then associated with a particular colour. The integration process starts by rendering the facial expression, followed by adjusting the vibration power to the emotion value. In the experiment, around 71% of students agreed with the classification of magnitude force into emotion representation. Respondents commented that a high magnitude force creates a sensation similar to anger, while a low magnitude force is more relaxing. Respondents also said that the combination of haptics and facial expression is very interactive and realistic.
Liu, Yanling; Lan, Haiying; Teng, Zhaojun; Guo, Cheng; Yao, Dezhong
Previous research has been inconsistent on whether violent video games exert positive and/or negative effects on cognition. In particular, attentional bias in facial affect processing after violent video game exposure continues to be controversial. The aim of the present study was to investigate attentional bias in facial recognition after short term exposure to violent video games and to characterize the neural correlates of this effect. In order to accomplish this, participants were exposed to either neutral or violent video games for 25 min and then event-related potentials (ERPs) were recorded during two emotional search tasks. The first search task assessed attentional facilitation, in which participants were required to identify an emotional face from a crowd of neutral faces. In contrast, the second task measured disengagement, in which participants were required to identify a neutral face from a crowd of emotional faces. Our results found a significant presence of the ERP component, N2pc, during the facilitation task; however, no differences were observed between the two video game groups. This finding does not support a link between attentional facilitation and violent video game exposure. Comparatively, during the disengagement task, N2pc responses were not observed when participants viewed happy faces following violent video game exposure; however, a weak N2pc response was observed after neutral video game exposure. These results provided only inconsistent support for the disengagement hypothesis, suggesting that participants found it difficult to separate a neutral face from a crowd of emotional faces.
Lena Rachel Quinto
Singing involves vocal production accompanied by a dynamic and meaningful use of facial expressions, which may serve as ancillary gestures that complement, disambiguate, or reinforce the acoustic signal. In this investigation, we examined the use of facial movements to communicate emotion, focusing on movements arising in three epochs: before vocalisation (pre-production), during vocalisation (production), and immediately after vocalisation (post-production). The stimuli were recordings of seven vocalists' facial movements as they sang short (14-syllable) melodic phrases with the intention of communicating happiness, sadness, irritation, or no emotion. Facial movements were presented as point-light displays to 16 observers who judged the emotion conveyed. Experiment 1 revealed that the accuracy of emotional judgement varied with singer, emotion and epoch. Accuracy was highest in the production epoch; however, happiness was well communicated in the pre-production epoch. In Experiment 2, observers judged point-light displays of exaggerated movements. The ratings suggested that the extent of facial and head movements is largely perceived as a gauge of emotional arousal. In Experiment 3, observers rated point-light displays of scrambled movements. Configural information was removed in these stimuli but velocity and acceleration were retained. Exaggerated scrambled movements were likely to be associated with happiness or irritation, whereas unexaggerated scrambled movements were more likely to be identified as neutral. An analysis of the motions of singers revealed systematic changes in facial movement as a function of the emotional intentions of singers. The findings confirm the central role of facial expressions in vocal emotional communication, and highlight individual differences between singers in the amount and intelligibility of facial movements made before, during, and after vocalisation.
This paper proposes a novel framework for facial expression analysis using dynamic and static information in video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation, and separate the facial region from the background. Then, a spatial-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to give descriptors that reflect the static and dynamic texture information of facial expressions. Finally, a one-versus-one multiclass support vector machine (SVM) classifier is applied to classify facial expressions. Experiments on the Cohn-Kanade (CK+) facial expression dataset illustrate that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, the proposed framework performs better.
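The descriptor step of the pipeline above can be illustrated with a minimal, self-contained sketch. The operator below is the basic 8-neighbour LBP in pure Python (the paper's spatial-temporal motion LBP and Gabor fusion histogram are considerably more elaborate); the normalised histogram it produces is the kind of texture descriptor that would then be fed to a one-versus-one SVM. Function names here are illustrative, not from the paper.

```python
def lbp_codes(img):
    """Basic 8-neighbour LBP: each interior pixel becomes an 8-bit code
    where bit k is set if the k-th neighbour is >= the centre pixel."""
    h, w = len(img), len(img[0])
    # Neighbour offsets, clockwise from the top-left corner.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            c = img[y][x]
            code = 0
            for k, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= c:
                    code |= 1 << k
            row.append(code)
        out.append(row)
    return out

def lbp_histogram(img, bins=256):
    """Normalised histogram of LBP codes: the texture descriptor."""
    hist = [0] * bins
    for row in lbp_codes(img):
        for code in row:
            hist[code] += 1
    total = sum(hist)
    return [v / total for v in hist]
```

On a perfectly flat patch every neighbour equals the centre, so every code is 255 and the descriptor concentrates all mass in that bin; textured patches spread mass across bins.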
Lindner, Christian; Dannlowski, Udo; Walhöfer, Kirsten; Rödiger, Maike; Maisch, Birgit; Bauer, Jochen; Ohrmann, Patricia; Lencer, Rebekka; Zwitserlood, Pienie; Kersting, Anette; Heindel, Walter; Arolt, Volker; Kugel, Harald; Suslow, Thomas
Among the functional neuroimaging studies on emotional face processing in schizophrenia, few have used paradigms with facial expressions of disgust. In this study, we investigated whether schizophrenia patients show less insula activation to macro-expressions (overt, clearly visible expressions) and micro-expressions (covert, very brief expressions) of disgust than healthy controls. Furthermore, departing from the assumption that disgust faces signal social rejection, we examined whether perceptual sensitivity to disgust is related to social alienation in patients and controls. We hypothesized that high insula responsiveness to facial disgust predicts social alienation. We used functional magnetic resonance imaging to measure insula activation in 36 schizophrenia patients and 40 healthy controls. During scanning, subjects passively viewed covert and overt presentations of disgust and neutral faces. To measure social alienation, a social loneliness scale and an agreeableness scale were administered. Schizophrenia patients exhibited reduced insula activation in response to covert facial expressions of disgust. With respect to macro-expressions of disgust, no between-group differences emerged. In patients, insula responsiveness to covert faces of disgust was positively correlated with social loneliness. Furthermore, patients' insula responsiveness to covert and overt faces of disgust was negatively correlated with agreeableness. In controls, insula responsiveness to covert expressions of disgust correlated negatively with agreeableness. Schizophrenia patients show reduced insula responsiveness to micro-expressions but not macro-expressions of disgust compared to healthy controls. In patients, low agreeableness was associated with stronger insula response to micro- and macro-expressions of disgust. Patients with a strong tendency to feel uncomfortable with social interactions appear to be characterized by a high sensitivity for facial expression signaling social
Aleman, André; Swart, Marte
The facial expression of contempt has been regarded to communicate feelings of moral superiority. Contempt is an emotion that is closely related to disgust, but in contrast to disgust, contempt is inherently interpersonal and hierarchical. The aim of this study was twofold. First, to investigate the hypothesis of preferential amygdala responses to contempt expressions versus disgust. Second, to investigate whether, at a neural level, men would respond stronger to biological signals of interpersonal superiority (e.g., contempt) than women. We performed an experiment using functional magnetic resonance imaging (fMRI), in which participants watched facial expressions of contempt and disgust in addition to neutral expressions. The faces were presented as distractors in an oddball task in which participants had to react to one target face. Facial expressions of contempt and disgust activated a network of brain regions, including prefrontal areas (superior, middle and medial prefrontal gyrus), anterior cingulate, insula, amygdala, parietal cortex, fusiform gyrus, occipital cortex, putamen and thalamus. Contemptuous faces did not elicit stronger amygdala activation than did disgusted expressions. To limit the number of statistical comparisons, we confined our analyses of sex differences to the frontal and temporal lobes. Men displayed stronger brain activation than women to facial expressions of contempt in the medial frontal gyrus, inferior frontal gyrus, and superior temporal gyrus. Conversely, women showed stronger neural responses than men to facial expressions of disgust. In addition, the effect of stimulus sex differed for men versus women. Specifically, women showed stronger responses to male contemptuous faces (as compared to female expressions), in the insula and middle frontal gyrus. Contempt has been conceptualized as signaling perceived moral violations of social hierarchy, whereas disgust would signal violations of physical purity. Thus, our results suggest a
Widen, Sherri C; Christy, Anita M; Hewett, Kristen; Russell, James A
Shame, embarrassment, compassion, and contempt have been considered candidates for the status of basic emotions on the grounds that each has a recognisable facial expression. In two studies (N=88, N=60) on recognition of these four facial expressions, observers showed moderate agreement on the predicted emotion when assessed with forced choice (58%; 42%), but low agreement when assessed with free labelling (18%; 16%). Thus, even though some observers endorsed the predicted emotion when it was presented in a list, over 80% spontaneously interpreted these faces in a way other than the predicted emotion.
Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona
Facial mimicry (FM) is an automatic response to imitate the facial expressions of others. However, neural correlates of the phenomenon are as yet not well established. We investigated this issue using simultaneously recorded EMG and BOLD signals during perception of dynamic and static emotional facial expressions of happiness and anger. During display presentations, BOLD signals and zygomaticus major (ZM), corrugator supercilii (CS) and orbicularis oculi (OO) EMG responses were recorded simultaneously from 46 healthy individuals. Subjects reacted spontaneously to happy facial expressions with increased EMG activity in ZM and OO muscles and decreased CS activity, which was interpreted as FM. Facial muscle responses correlated with BOLD activity in regions associated with motor simulation of facial expressions [i.e., inferior frontal gyrus, a classical Mirror Neuron System (MNS)]. Further, we also found correlations for regions associated with emotional processing (i.e., insula, part of the extended MNS). It is concluded that FM involves both motor and emotional brain structures, especially during perception of natural emotional expressions.
Sormaz, Mladen; Watson, David M; Smith, William A P; Young, Andrew W; Andrews, Timothy J
The ability to perceive facial expressions of emotion is essential for effective social communication. We investigated how the perception of facial expression emerges from the image properties that convey this important social signal, and how neural responses in face-selective brain regions might track these properties. To do this, we measured the perceptual similarity between expressions of basic emotions, and investigated how this is reflected in image measures and in the neural response of different face-selective regions. We show that the perceptual similarity of different facial expressions (fear, anger, disgust, sadness, happiness) can be predicted by both surface and feature shape information in the image. Using block design fMRI, we found that the perceptual similarity of expressions could also be predicted from the patterns of neural response in the face-selective posterior superior temporal sulcus (STS), but not in the fusiform face area (FFA). These results show that the perception of facial expression is dependent on the shape and surface properties of the image and on the activity of specific face-selective regions. Copyright © 2016 Elsevier Inc. All rights reserved.
Qiu, Fanghui; Han, Mingxiu; Zhai, Yu; Jia, Shiwei
According to the well-established categorical perception (CP) of facial expressions, we decode complicated expression signals into simplified categories to facilitate expression processing. Expression processing deficits have been widely described in social anxiety (SA), but it remains to be investigated whether CP of expressions is affected by SA. The present study examined whether individuals with SA had an interpretation bias when processing ambiguous expressions and whether the sensitivity of their CP was affected by their SA. Sixty-four participants (high SA, 30; low SA, 34) were selected from 658 undergraduates using the Interaction Anxiousness Scale (IAS). With the CP paradigm, specifically with the analysis method of the logistic function model, we derived the categorical boundaries (reflecting interpretation bias) and slopes (reflecting sensitivity of CP) of both high- and low-SA groups while recognizing angry-fearful, happy-angry, and happy-fearful expression continua. Based on a comparison of the categorical boundaries and slopes between the high- and low-SA groups, the results showed that the categorical boundaries between the two groups were not different for any of the three continua, which means that SA does not affect the interpretation bias for any of the three continua. The slopes for the high-SA group were flatter than those for the low-SA group for both the angry-fearful and happy-angry continua, indicating that the high-SA group is insensitive to the subtle changes that occur from angry to fearful faces and from happy to angry faces. Since participants were selected from a sample of undergraduates based on their IAS scores, the results cannot be directly generalized to individuals with clinical SA disorder. The study indicates that SA does not affect interpretation biases in the processing of anger, fear, and happiness, but does modulate the sensitivity of individuals' CP when anger appears. High-SA individuals perceive angry expressions
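The logistic-function analysis used to derive boundaries and slopes can be sketched as follows: identification proportions along a morph continuum are fit with a logistic curve whose inflection point gives the categorical boundary and whose steepness gives the slope. This is a minimal pure-Python illustration on synthetic data; the brute-force grid search stands in for the maximum-likelihood fitting a real study would use, and the parameter ranges are assumptions.

```python
import math

def logistic(x, boundary, slope):
    """Proportion of (say) 'fearful' responses at morph level x in [0, 1]."""
    return 1.0 / (1.0 + math.exp(-slope * (x - boundary)))

def fit_logistic(xs, ps):
    """Grid-search fit: return the (boundary, slope) pair minimising the
    squared error between the logistic curve and observed proportions."""
    best = None
    for b in (i / 100 for i in range(101)):          # boundary in [0, 1]
        for s in (j / 2 for j in range(2, 61)):      # slope in [1, 30]
            err = sum((logistic(x, b, s) - p) ** 2 for x, p in zip(xs, ps))
            if best is None or err < best[0]:
                best = (err, b, s)
    return best[1], best[2]
```

A flatter fitted slope means the observer's responses change more gradually across the continuum, i.e. lower sensitivity of CP, which is the pattern reported here for the high-SA group.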
Facial animation is one of the most popular 3D animation topics researched in recent years. However, when using facial animation, a 3D facial animation model has to be stored. This 3D facial animation model requires many triangles to accurately describe and demonstrate facial expression animation because the face often presents a number of different expressions. Consequently, the costs associated with facial animation have increased rapidly. In an effort to reduce storage costs, researchers have sought to simplify 3D animation models using techniques such as Deformation Sensitive Decimation and Feature Edge Quadric. The studies conducted have examined the problems in the homogeneity of the local coordinate system between different expression models and in the retention of simplified model characteristics. This paper proposes a method that applies the Homogeneous Coordinate Transformation Matrix to solve the problem of homogeneity of the local coordinate system and the Maximum Shape Operator to detect shape changes in facial animation so as to properly preserve the features of facial expressions. Further, root mean square error and perceived quality error are used to compare the errors generated by different simplification methods in experiments. Experimental results show that, compared with Deformation Sensitive Decimation and Feature Edge Quadric, our method can not only reduce the errors caused by simplification of facial animation, but also retain more facial features.
MacDonald, P M; Kirkpatrick, S W; Sullivan, L A
Schematic drawings of facial expressions were evaluated as a possible assessment tool for research on emotion recognition and interpretation involving young children. A subset of Ekman and Friesen's (1976) Pictures of Facial Affect was used as the standard for comparison. Preschool children (N = 138) were shown drawings and photographs in two context conditions for six emotions (anger, disgust, fear, happiness, sadness, and surprise). The overall correlation between accuracy for the photographs and drawings was .677. A significant difference was found for the stimulus condition (photographs vs. drawings) but not for the administration condition (label-based vs. context-based). Children were significantly more accurate in interpreting drawings than photographs and tended to be more accurate in identifying facial expressions in the label-based administration condition for both photographs and drawings than in the context-based administration condition.
Liu, Chang Hong; Chen, Wenfeng; Ward, James
Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.
Alonso-Recio, L; Serrano-Rodriguez, J M; Carvajal-Molina, F; Loeches-Alonso, A; Martin-Plasencia, P
Emotional facial expression is a basic guide during social interaction and, therefore, alterations in its expression or recognition are important limitations for communication. To examine facial expression recognition abilities and their possible impairment in Parkinson's disease. First, we review the studies on this topic, which have not found entirely consistent results. Second, we analyze the factors that may explain these discrepancies and, in particular, as a third objective, we consider the relationship between emotional recognition problems and the cognitive impairment associated with the disease. Finally, we propose alternative strategies for the design of studies that could clarify the state of these abilities in Parkinson's disease. Most studies suggest deficits in facial expression recognition, especially for expressions with negative emotional content. However, it is possible that these alterations are related to those that also appear in the course of the disease in other perceptual and executive processes. To advance on this issue, we consider it necessary to design emotional recognition studies that differentially implicate executive or visuospatial processes, and/or that contrast cognitive abilities using facial expressions and nonemotional stimuli. Clarifying the status of these abilities, as well as increasing our knowledge of the functional consequences of the characteristic brain damage of the disease, may indicate whether special attention should be paid to their rehabilitation within intervention programs.
Han, Dongxu; Al Jawad, Naseer; Du, Hongbo
Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions. More recently, the device has been increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smile, sad, etc. and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied the kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
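The pipeline described in this abstract — per-frame distance features from mesh points to reference points, then kNN over a dynamic-time-warping similarity — can be sketched as follows. This is a hedged reconstruction: the function names, the Euclidean frame distance, and the plain DTW recursion are generic assumptions, not the paper's exact implementation.

```python
import numpy as np

def frame_features(mesh_points, reference_points):
    """Distance-based feature component for one frame: Euclidean distances
    from each tracked mesh point to each chosen reference point, flattened."""
    d = np.linalg.norm(mesh_points[:, None, :] - reference_points[None, :, :], axis=-1)
    return d.ravel()

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping cost between two feature sequences
    (each an array of shape (frames, features))."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def knn_classify(query_seq, training, k=1):
    """kNN over DTW distances; `training` is a list of (sequence, label) pairs."""
    dists = sorted((dtw_distance(query_seq, s), lbl) for s, lbl in training)
    votes = [lbl for _, lbl in dists[:k]]
    return max(set(votes), key=votes.count)
```

Because DTW aligns sequences of unequal length, expressions performed at different speeds can still be compared, which is presumably why the authors adopt it for whole-expression matching.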
Wenzler, Sofia; Levine, Sarah; van Dick, Rolf; Oertel-Knöchel, Viola; Aviezer, Hillel
According to psychological models as well as common intuition, intense positive and negative situations evoke highly distinct emotional expressions. Nevertheless, recent work has shown that when judging isolated faces, the affective valence of winning and losing professional tennis players is hard to differentiate. However, expressions produced by professional athletes during publicly broadcasted sports events may be strategically controlled. To shed light on this matter we examined if ordinary people's spontaneous facial expressions evoked during highly intense situations are diagnostic for the situational valence. In Experiment 1 we compared reactions to highly intense positive situations (surprise soldier reunions) versus highly intense negative situations (terror attacks). In Experiment 2, we turned to children and compared facial reactions to highly positive situations (e.g., a child receiving a surprise trip to Disneyland) versus highly negative situations (e.g., a child discovering her parents ate up all her Halloween candy). The results demonstrate that facial expressions of both adults and children are often not diagnostic for the valence of the situation. These findings demonstrate the ambiguity of extreme facial expressions and highlight the importance of context in everyday emotion perception. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Perry, Anat; Aviezer, Hillel; Goldstein, Pavel; Palgi, Sharon; Klein, Ehud; Shamay-Tsoory, Simone G
The neuropeptide oxytocin (OT) has been repeatedly reported to play an essential role in the regulation of social cognition in humans in general, and specifically in enhancing the recognition of emotions from facial expressions. The latter was assessed in different paradigms that rely primarily on isolated and decontextualized emotional faces. However, recent evidence has indicated that the perception of basic facial expressions is not context invariant and can be categorically altered by context, especially body context, at early perceptual levels. Body context has a strong effect on our perception of emotional expressions, especially when the actual target face and the contextually expected face are perceptually similar. To examine whether and how OT affects emotion recognition, we investigated the role of OT in categorizing facial expressions in incongruent body contexts. Our results show that in the combined process of deciphering emotions from facial expressions and from context, OT gives an advantage to the face. This advantage is most evident when the target face and the contextually expected face are perceptually similar. Copyright © 2013 Elsevier Ltd. All rights reserved.
Vrticka, Pascal; Lordier, Lara; Bediou, Benoît; Sander, David
Although brain imaging evidence accumulates to suggest that the amygdala plays a key role in the processing of novel stimuli, only little is known about its role in processing expressed novelty conveyed by surprised faces, and even less about possible interactive encoding of novelty and valence. Those investigations that have already probed human amygdala involvement in the processing of surprised facial expressions either used static pictures displaying negative surprise (as contained in fear) or "neutral" surprise, and manipulated valence by contextually priming or subjectively associating static surprise with either negative or positive information. Therefore, it still remains unresolved how the human amygdala differentially processes dynamic surprised facial expressions displaying either positive or negative surprise. Here, we created new artificial dynamic 3-dimensional facial expressions conveying surprise with an intrinsic positive (wonderment) or negative (fear) connotation, but also intrinsic positive (joy) or negative (anxiety) emotions not containing any surprise, in addition to neutral facial displays either containing ("typical surprise" expression) or not containing ("neutral") surprise. Results showed heightened amygdala activity to faces containing positive (vs. negative) surprise, which may either correspond to a specific wonderment effect as such, or to the computation of a negative expected value prediction error. Findings are discussed in the light of data obtained from a closely matched nonsocial lottery task, which revealed overlapping activity within the left amygdala to unexpected positive outcomes. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Goodfellow, Stephanie; Nowicki, Stephen, Jr.
The authors aimed to examine the possible association between (a) accurately reading emotion in facial expressions and (b) social and academic competence among elementary school-aged children. Participants were 840 7-year-old children who completed a test of the ability to read emotion in facial expressions. Teachers rated children's social and…
Widen, Sherri C.; Russell, James A.
Lay people and scientists alike assume that, especially for young children, facial expressions are a strong cue to another's emotion. We report a study in which children (N=120; 3-4 years) described events that would cause basic emotions (surprise, fear, anger, disgust, sadness) presented as its facial expression, as its label, or as its…
Kret, M.E.; de Gelder, B.
Previous reports have suggested an enhancement of facial expression recognition in women as compared to men. It has also been suggested that men versus women have a greater attentional bias towards angry cues. Research has shown that facial expression recognition impairments and attentional biases
Grossman, Ruth B.; Edelson, Lisa R.; Tager-Flusberg, Helen
Purpose: People with high-functioning autism (HFA) have qualitative differences in facial expression and prosody production, which are rarely systematically quantified. The authors' goals were to qualitatively and quantitatively analyze prosody and facial expression productions in children and adolescents with HFA. Method: Participants were 22…
Palumbo, Letizia; Jellema, Tjeerd
Emotional facial expressions are immediate indicators of the affective dispositions of others. Recently it has been shown that early stages of social perception can already be influenced by (implicit) attributions made by the observer about the agent's mental state and intentions. In the current study possible mechanisms underpinning distortions in the perception of dynamic, ecologically-valid, facial expressions were explored. In four experiments we examined to what extent basic perceptual processes such as contrast/context effects, adaptation and representational momentum underpinned the perceptual distortions, and to what extent 'emotional anticipation', i.e. the involuntary anticipation of the other's emotional state of mind on the basis of the immediate perceptual history, might have played a role. Neutral facial expressions displayed at the end of short video-clips, in which an initial facial expression of joy or anger gradually morphed into a neutral expression, were misjudged as being slightly angry or slightly happy, respectively (Experiment 1). This response bias disappeared when the actor's identity changed in the final neutral expression (Experiment 2). Videos depicting neutral-to-joy-to-neutral and neutral-to-anger-to-neutral sequences again produced biases but in opposite direction (Experiment 3). The bias survived insertion of a 400 ms blank (Experiment 4). These results suggested that the perceptual distortions were not caused by any of the low-level perceptual mechanisms (adaptation, representational momentum and contrast effects). We speculate that especially when presented with dynamic, facial expressions, perceptual distortions occur that reflect 'emotional anticipation' (a low-level mindreading mechanism), which overrules low-level visual mechanisms. Underpinning neural mechanisms are discussed in relation to the current debate on action and emotion understanding.
Zhi, Ruicong; Wan, Jingwei; Zhang, Dezheng; Li, Weiping
Emotional reactions towards products play an essential role in consumers' decision making, and are more important than rational evaluation of sensory attributes. It is crucial to understand consumers' emotions, and the relationship between sensory properties, human liking and choice. There are many inconsistencies between Asian and Western consumers in the usage of the hedonic scale, as well as in the intensity of facial reactions, due to different cultures and consuming habits. However, very few studies have discussed the facial response characteristics of Asian consumers during food consumption. In this paper, explicit liking measurement (hedonic scale) and implicit emotional measurement (facial expressions) were evaluated to judge the consumers' emotions elicited by five types of juices. The contributions of this study included: (1) Constructing a relationship model between hedonic liking and facial expressions analyzed by face reading technology. The negative emotions "sadness", "anger", and "disgust" showed a noticeably high negative correlation tendency with hedonic scores. The "liking" hedonic scores could be characterized by the positive emotion "happiness". (2) Several emotional-intensity-based parameters, especially a dynamic parameter, were extracted to describe the facial characteristics in the sensory evaluation procedure. Both amplitude information and frequency information were incorporated in the dynamic parameters to retain more information from the emotional response signals. From the comparison of four types of emotional descriptive parameters, the maximum parameter and the dynamic parameter are suggested for representing emotional state and intensity. Copyright © 2018 Elsevier Ltd. All rights reserved.
Heshmati, Saeideh; Sbarra, David A; Mason, Ashley E
The importance of studying specific and expressed emotions after a stressful life event is well known, yet few studies have moved beyond assessing self-reported emotional responses to a romantic breakup. This study examined associations between computer-recognized facial expressions and self-reported breakup-related distress among recently separated college-aged young adults ( N = 135; 37 men) on four visits across 9 weeks. Participants' facial expressions were coded using the Computer Expression Recognition Toolbox while participants spoke about their breakups. Of the seven expressed emotions studied, only Contempt showed a unique association with breakup-related distress over time. At baseline, greater Contempt was associated with less breakup-related distress; however, over time, greater Contempt was associated with greater breakup-related distress.
Park, BoKyung; Tsai, Jeanne L; Chim, Louise; Blevins, Elizabeth; Knutson, Brian
European Americans value excitement more and calm less than Chinese. Within cultures, European Americans value excited and calm states similarly, whereas Chinese value calm more than excited states. To examine how these cultural differences influence people's immediate responses to excited vs calm facial expressions, we combined a facial rating task with functional magnetic resonance imaging. During scanning, European American (n = 19) and Chinese (n = 19) females viewed and rated faces that varied by expression (excited, calm), ethnicity (White, Asian) and gender (male, female). As predicted, European Americans showed greater activity in circuits associated with affect and reward (bilateral ventral striatum, left caudate) while viewing excited vs calm expressions than did Chinese. Within cultures, European Americans responded to excited vs calm expressions similarly, whereas Chinese showed greater activity in these circuits in response to calm vs excited expressions regardless of targets' ethnicity or gender. Across cultural groups, greater ventral striatal activity while viewing excited vs. calm expressions predicted greater preference for excited vs calm expressions months later. These findings provide neural evidence that people find viewing the specific positive facial expressions valued by their cultures to be rewarding and relevant. © The Author (2015). Published by Oxford University Press.
Martinez, Aleix; Du, Shichuan
In cognitive science and neuroscience, there have been two leading models describing how humans perceive and classify facial expressions of emotion - the continuous and the categorical model. The continuous model defines each facial expression of emotion as a feature vector in a face space. This model explains, for example, how expressions of emotion can be seen at different intensities. In contrast, the categorical model consists of C classifiers, each tuned to a specific emotion category. This model explains, among other findings, why the images in a morphing sequence between a happy and a surprise face are perceived as either happy or surprise but not something in between. While the continuous model has a more difficult time justifying this latter finding, the categorical model is not as good when it comes to explaining how expressions are recognized at different intensities or modes. Most importantly, both models have problems explaining how one can recognize combinations of emotion categories such as happily surprised versus angrily surprised versus surprise. To resolve these issues, in the past several years, we have worked on a revised model that justifies the results reported in the cognitive science and neuroscience literature. This model consists of C distinct continuous spaces. Multiple (compound) emotion categories can be recognized by linearly combining these C face spaces. The dimensions of these spaces are shown to be mostly configural. According to this model, the major task for the classification of facial expressions of emotion is precise, detailed detection of facial landmarks rather than recognition. We provide an overview of the literature justifying the model, show how the resulting model can be employed to build algorithms for the recognition of facial expression of emotion, and propose research directions for machine learning and computer vision researchers to keep pushing the state of the art in these areas. We also discuss how the model can
Nummenmaa, Lauri; Calvo, Manuel G
Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
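As a rough illustration of the random-effects machinery behind a meta-analysis of r-based effect sizes, the sketch below pools correlations via Fisher's z transform and the DerSimonian-Laird estimator of between-study variance. This is a generic textbook approach, assumed rather than taken from the paper; the input correlations and sample sizes below are hypothetical.

```python
import math

def random_effects_pool(rs, ns):
    """Pool correlation effect sizes with a DerSimonian-Laird
    random-effects model, working in Fisher-z space."""
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in rs]  # Fisher z per study
    vs = [1.0 / (n - 3) for n in ns]                      # sampling variance of z
    ws = [1.0 / v for v in vs]
    z_fixed = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    # Cochran's Q heterogeneity statistic and the DL between-study variance.
    q = sum(w * (z - z_fixed) ** 2 for w, z in zip(ws, zs))
    df = len(rs) - 1
    c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
    tau2 = max(0.0, (q - df) / c)
    # Re-weight with the between-study variance added, then back-transform.
    ws_re = [1.0 / (v + tau2) for v in vs]
    z_re = sum(w * z for w, z in zip(ws_re, zs)) / sum(ws_re)
    return math.tanh(z_re)
```

Moderator analyses of the kind reported (photographic vs. schematic stimuli) amount to running such a pooling separately per subgroup and comparing the pooled estimates.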
Bui, T.D.; Heylen, Dirk K.J.; Poel, Mannes; Nijholt, Antinus; Stumptner, Markus; Corbett, Dan; Brooks, Mike
We propose a fuzzy rule-based system to map representations of the emotional state of an animated agent onto muscle contraction values for the appropriate facial expressions. Our implementation pays special attention to the way in which continuous changes in the intensity of emotions can be
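A toy version of such a fuzzy mapping - triangular membership functions over emotion intensity, rules recommending contraction levels, and weighted-average defuzzification - might look like the sketch below. The membership breakpoints and output contraction values (0.1/0.5/0.9) are invented for illustration and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def contraction_from_intensity(intensity):
    """Map a continuous emotion intensity in [0, 1] to a muscle contraction
    value via fuzzy rules and weighted-average defuzzification."""
    # Membership of the intensity in LOW / MEDIUM / HIGH fuzzy sets.
    low = tri(intensity, -0.5, 0.0, 0.5)
    med = tri(intensity, 0.0, 0.5, 1.0)
    high = tri(intensity, 0.5, 1.0, 1.5)
    # Rule consequents: the contraction level each rule recommends (assumed values).
    weights = [low, med, high]
    outputs = [0.1, 0.5, 0.9]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0
```

Because adjacent memberships overlap, the output varies smoothly as intensity changes, which is the property the abstract highlights for animating continuous changes in emotional intensity.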
Kristensen, Rasmus Lyngby; Tan, Zheng-Hua; Ma, Zhanyu
This paper conducts a survey of modern binary-pattern-flavored feature extractors applied to the Facial Expression Recognition (FER) problem. In total, 26 different feature extractors are included, of which six are selected for in-depth description. In addition, the paper unifies important FER...
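For readers unfamiliar with the binary-pattern family this survey covers, a minimal version of the classic local binary pattern (LBP) descriptor is sketched below; the surveyed extractors are variations on this idea. The function names and the 256-bin histogram are illustrative choices, not the survey's notation.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour local binary pattern: each interior pixel gets an
    8-bit code recording which neighbours are >= the centre pixel."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]  # centre pixels (border excluded)
    # Neighbour offsets in clockwise order starting top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit
    return code

def lbp_histogram(gray, bins=256):
    """Feature vector: normalised histogram of LBP codes over a face region."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

In FER pipelines the face is typically divided into a grid of cells, with one such histogram per cell concatenated into the final feature vector fed to a classifier.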
Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia
Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…
Lacroix, Agnes; Guidetti, Michele; Roge, Bernadette; Reilly, Judy
The aim of our study was to compare two neurodevelopmental disorders (Williams syndrome and autism) in terms of the ability to recognize emotional and nonemotional facial expressions. The comparison of these two disorders is particularly relevant to the investigation of face processing and should contribute to a better understanding of social…
Sandbach, G.; Zafeiriou, S.; Pantic, Maja; Yin, Lijun
Automatic facial expression recognition constitutes an active research field due to the latest advances in computing technology that make the user's experience a clear priority. The majority of work conducted in this area involves 2D imagery, despite the problems this presents due to inherent pose
Jabbi, Mbemba; Keysers, Christian
The observation of movies of facial expressions of others has been shown to recruit areas similar to those involved in experiencing one's own emotions: the inferior frontal gyrus (IFG), the anterior insula and adjacent frontal operculum (IFO). The causal link between activity in these 2 regions,
The study investigated the recognition of standardized facial expressions of emotion (anger, fear, disgust, happiness, sadness, surprise) at a perceptual level (experiment 1) and at a semantic level (experiments 2 and 3) in children with autism (N= 20) and normally developing children (N= 20). Results revealed that children with autism were as…
De Haan, Michelle; Belsky, Jay; Reid, Vincent; Volein, Agnes; Johnson, Mark H.
Background: Recent investigations suggest that experience plays an important role in the development of face processing. The aim of this study was to investigate the potential role of experience in the development of the ability to process facial expressions of emotion. Method: We examined the potential role of experience indirectly by…
Sato, Wataru; Kochiyama, Takanori; Uono, Shota
The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual–motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions. PMID:26206708
Geangu, Elena; Ichikawa, Hiroko; Lao, Junpeng; Kanazawa, So; Yamaguchi, Masami K; Caldara, Roberto; Turati, Chiara
Emotional facial expressions are thought to have evolved because they play a crucial role in species' survival. From infancy, humans develop dedicated neural circuits to exhibit and recognize a variety of facial expressions. But there is increasing evidence that culture specifies when and how certain emotions can be expressed - social norms - and that the mature perceptual mechanisms used to transmit and decode the visual information from emotional signals differ between Western and Eastern adults [3-5]. Specifically, the mouth is more informative for transmitting emotional signals in Westerners and the eye region for Easterners, generating culture-specific fixation biases towards these features. During development, it is recognized that cultural differences can be observed at the level of emotional reactivity and regulation, and in the culturally dominant modes of attention. Nonetheless, to our knowledge no study has explored whether culture shapes the processing of facial emotional signals early in development. The data we report here show that, by 7 months, infants from both cultures visually discriminate facial expressions of emotion by relying on culturally distinct fixation strategies, resembling those used by the adults from the environment in which they develop. Copyright © 2016 Elsevier Ltd. All rights reserved.
Collis, L.; Moss, J.; Jutley, J.; Cornish, K.; Oliver, C.
Background: Individuals with Cornelia de Lange syndrome (CdLS) have been reported to show comparatively high levels of flat and negative affect but there have been no empirical evaluations. In this study, we use an objective measure of facial expression to compare affect in CdLS with that seen in Cri du Chat syndrome (CDC) and a group of…
Sato, Wataru; Kochiyama, Takanori; Uono, Shota
The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150-200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300-350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual-motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions.
Aleman, Andre; Swart, Marte
The facial expression of contempt has been regarded as communicating feelings of moral superiority. Contempt is an emotion that is closely related to disgust, but in contrast to disgust, contempt is inherently interpersonal and hierarchical. The aim of this study was twofold. First, to investigate the
Vaiman, M.; Wagner, M.A.; Caicedo, E.; Pereno, G.L.
Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion
Widen, Sherri C.; Russell, James A.
Understanding and recognition of emotions relies on emotion concepts, which are narrative structures (scripts) specifying facial expressions, causes, consequences, label, etc. organized in a temporal and causal order. Scripts and their development are revealed by examining which components better tap which concepts at which ages. This study…
Markodimitraki, Maria; Kypriotaki, Maria; Ampartzaki, Maria; Manolitsis, George
The present study explored the effect of the context in which an imitation act occurs (elicited/spontaneous) and the experimenter's facial expression (neutral or smiling) during the imitation task with young children with autism and typically developing children. The participants were 10 typically developing children and 10 children with autism…
Freitag, Claudia; Schwarzer, Gudrun
Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…
Garrett, Amy S.; Reiss, Allan L.; Howe, Meghan E.; Kelley, Ryan G.; Singh, Manpreet K.; Adleman, Nancy E.; Karchemskiy, Asya; Chang, Kiki D.
Objective: Previous functional magnetic resonance imaging (fMRI) studies in pediatric bipolar disorder (BD) have reported greater amygdala and less dorsolateral prefrontal cortex (DLPFC) activation to facial expressions compared to healthy controls. The current study investigates whether these differences are associated with the early or late…
Carvajal, Fernando; Fernandez-Alcaraz, Camino; Rueda, Maria; Sarrion, Louise
The processing of facial expressions of emotions by 23 adults with Down syndrome and moderate intellectual disability was compared with that of adults with intellectual disability of other etiologies (24 matched in cognitive level and 26 with mild intellectual disability). Each participant performed 4 tasks of the Florida Affect Battery and an…
Huggenberger, Harriet J.; Suter, Susanne E.; Reijnen, Ester; Schachinger, Hartmut
Women's cradling side preference has been related to contralateral hemispheric specialization for processing emotional signals, but not for processing a baby's facial expression. Therefore, 46 nulliparous female volunteers were characterized as left or non-left holders (HG) during a doll-holding task. During a signal detection task they were then…
Strand, Paul S.; Downs, Andrew; Barbosa-Leiker, Celestina
The authors explored predictions from basic emotion theory (BET) that facial emotion expression recognition skills are insular with respect to their own development, and yet foundational to the development of emotional perspective-taking skills. Participants included 417 preschool children for whom estimates of these 2 emotion understanding…
Jeon, Hana; Moulson, Margaret C.; Fox, Nathan; Zeanah, Charles; Nelson, Charles A., III
The current study examined the effects of institutionalization on the discrimination of facial expressions of emotion in three groups of 42-month-old children. One group consisted of children abandoned at birth who were randomly assigned to Care-as-Usual (institutional care) following a baseline assessment. Another group consisted of children…
Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During that decade, many state-of-the-art methods were proposed that achieve very high accuracy on face images free of interference. Researchers have now begun to tackle the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. This paper therefore proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method consists of three parts: first, the face images are divided into many local patches; then the WLD histogram of each patch is extracted; finally, all the WLD histograms are concatenated into a feature vector and combined with SRC to classify the facial expressions. Experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
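The three-part pipeline described in this record (patch-wise WLD histograms, concatenation, classification by residual comparison) can be sketched roughly as below. This is an illustrative simplification, not the authors' implementation: the grid size, bin count, the alpha parameter, and especially the per-class least-squares coding standing in for SRC's l1-minimization are all assumptions.

```python
import numpy as np

def wld_histogram(patch, n_bins=8, alpha=3.0):
    """Differential-excitation histogram of a simplified Weber Local
    Descriptor (WLD) for one grayscale patch (orientation part omitted)."""
    p = patch.astype(float)
    c = p[1:-1, 1:-1]  # centre pixels (borders cropped)
    # Sum over the 8 neighbours of each centre pixel via shifted copies.
    neigh = sum(np.roll(np.roll(p, dy, 0), dx, 1)[1:-1, 1:-1]
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0))
    # Weber differential excitation: arctan(alpha * sum(x_i - x_c) / x_c)
    xi = np.arctan(alpha * (neigh - 8 * c) / (c + 1e-8))
    hist, _ = np.histogram(xi, bins=n_bins, range=(-np.pi / 2, np.pi / 2))
    return hist / max(hist.sum(), 1)

def wld_feature(img, grid=(4, 4), n_bins=8):
    """Divide the image into grid patches and concatenate their histograms."""
    h, w = img.shape
    ph, pw = h // grid[0], w // grid[1]
    return np.concatenate(
        [wld_histogram(img[i*ph:(i+1)*ph, j*pw:(j+1)*pw], n_bins)
         for i in range(grid[0]) for j in range(grid[1])])

def src_classify(x, dictionary, labels):
    """Toy stand-in for SRC: code x over each class's training columns by
    least squares and return the class with the smallest residual."""
    best, best_res = None, np.inf
    for c in set(labels):
        cols = [i for i, l in enumerate(labels) if l == c]
        D = dictionary[:, cols]
        coef, *_ = np.linalg.lstsq(D, x, rcond=None)
        res = np.linalg.norm(x - D @ coef)
        if res < best_res:
            best, best_res = c, res
    return best
```

In a real SRC step the sparse coefficients would be obtained by l1-minimization over the whole training dictionary; the per-class least-squares residual above only mimics the final residual-comparison rule.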
Recognition of students' facial expressions can be used to understand their level of attention. In a traditional classroom setting, teachers guide the classes and continuously monitor and engage the students to evaluate their understanding and progress. Given the current popularity of e-learning environments, it has become important to assess the…
To examine individual differences in adults' sensitivity to facial expressions, we used a novel method that has proved revealing in studies of developmental change. Using static faces morphed to show different intensities of facial expressions, we calculated two measures: (1) the threshold to detect that a low-intensity facial expression is different from neutral, and (2) accuracy in recognizing the specific facial expression in faces above the detection threshold. We conducted two experiments with young adult females varying in reported temperamental shyness and sociability; the former trait is known to influence the recognition of facial expressions during childhood. In both experiments, the measures had good split-half reliability. Because shyness was significantly negatively correlated with sociability, we used partial correlations to examine the relation of each to sensitivity to facial expression. Sociability was negatively related to the threshold to detect fear (Experiment 1) and to misidentifying fear as another expression or happy expressions as fear (Experiment 2). Both patterns are consistent with hypervigilance in less sociable individuals. Shyness was positively related to misidentification of fear as another emotion (Experiment 2), a pattern consistent with a history of avoidance. We discuss the advantages and limitations of this new approach for studying individual differences in sensitivity to facial expression.
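The two measures described here could be computed with small helpers like the following; the criterion value, data layout, and function names are illustrative assumptions rather than the authors' procedure.

```python
def detection_threshold(intensities, detection_rates, criterion=0.75):
    """Lowest morph intensity (scanned in ascending order) at which the
    proportion of 'different from neutral' responses reaches the criterion."""
    for intensity, rate in sorted(zip(intensities, detection_rates)):
        if rate >= criterion:
            return intensity
    return None  # expression never reliably detected

def suprathreshold_accuracy(trials, threshold):
    """Proportion of correctly labelled expressions among trials whose
    morph intensity exceeds the detection threshold."""
    above = [label == truth for intensity, label, truth in trials
             if intensity > threshold]
    return sum(above) / len(above) if above else None

# Hypothetical data: detection rates rise with morph intensity (% of the
# full expression); labelling is scored only above the detection threshold.
thr = detection_threshold([10, 20, 30, 40], [0.30, 0.60, 0.80, 0.95])
print(thr)  # 30
trials = [(40, "fear", "fear"), (40, "happy", "fear"),
          (35, "fear", "fear"), (20, "fear", "fear")]
print(suprathreshold_accuracy(trials, thr))  # 2 of 3 supra-threshold trials correct
```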
Gaze direction cues and facial expressions have been shown to influence object processing in infants. For example, infants around 12 months of age utilize others' gaze directions and facial expressions to regulate their own behaviour toward an ambiguous target (i.e., social referencing). However, the mechanism by which social signals influence overt orienting in infants is unclear. The present study examined the effects of static gaze direction cues and facial expressions (neutral vs. fearful) on overt orienting using a gaze-cueing paradigm in 6- and 12-month-old infants. Two experiments were conducted: in Experiment 1, a face with a leftward or rightward gaze direction was used as a cue, and a face with a forward gaze direction was added in Experiment 2. In both experiments, an effect of facial expression was found in 12-month-olds; no effect was found in 6-month-olds. Twelve-month-old infants exhibited more rapid overt orienting in response to fearful expressions than neutral expressions, irrespective of gaze direction. These findings suggest that gaze direction information and facial expressions independently influence overt orienting in infants, and that the effect of facial expression emerges earlier than that of static gaze direction. Implications for the development of gaze direction and facial expression processing systems are discussed.
David Alberto Rodríguez Medina
Background: Recent research has evaluated psychological and biological characteristics associated with pain in survivors of breast cancer (BC). Few studies consider their relationship with inflammatory activity. Voluntary facial expressions modify autonomic activity, and this may be useful in the hospital environment for clinical biopsychosocial assessment of pain. Methods: This research compared a group of BC survivors under integral treatment (Oncology, Psychology, Nutrition) with a control group to assess the intensity of pain, behavioral interference, anxiety, depression, temperament-expression, anger control, social isolation, emotional regulation, alexithymia, and inflammatory activity, measured with salivary interleukin 6 (IL-6). This was followed by a psychophysiological evaluation through repeated infrared thermal imaging (IRT) measures of the face and hands at baseline, during a positive facial expression (joy), during a negative facial expression (pain), and during relaxation (diaphragmatic breathing). Results: The results showed changes in the IRT (p < 0.05) during the execution of facial expressions in the chin, perinasal, periorbital, frontal, nose, and finger areas in both groups. No differences were found in IL-6 levels between the groups, but an association with baseline nasal temperature (p < 0.001) was observable. The BC group had a higher alexithymia score (p < 0.01) but lower social isolation (p < 0.05) in comparison to the control group. Conclusions: The psychophysiological intervention proposed in this study has a greater effect on the low- and medium-IL-6-concentration groups than on the high-concentration group. This will be considered in the design of psychological and psychosocial interventions for the treatment of pain.
De Silva, Liyanage C.; Vinod, V. V.; Sengupta, Kuntal
This paper presents a novel idea for a visual communication system that can support distance teaching using a network of computers. Here the authors' main focus is to enhance the quality of distance teaching by reducing the barrier between the teacher and the student that is formed by the remote connection of the networked participants. The paper presents an effective way of improving the teacher-student communication link of an IT (Information Technology) based distance teaching scenario, using facial expression recognition results and global and local face motion detection results for both the teacher and the student. It presents a way of regenerating the facial images for the teacher-student down-link, which can enhance the teacher's facial expressions and can also reduce network traffic compared to usual video broadcasting scenarios. At the same time, it presents a way of representing a large volume of facial expression data for the whole student population (in the student-teacher up-link). This up-link representation helps the teacher to receive instant feedback on his talk, as if he were delivering a face-to-face lecture. In conventional video tele-conferencing applications this task is nearly impossible, owing to the huge volume of upward network traffic. The authors utilize several of their previous publication results for most of the image processing components that need to be investigated to complete such a system. In addition, some of the remaining system components are covered by several ongoing works.
Foley, Elaine; Rippon, Gina; Thai, Ngoc Jade; Longe, Olivia; Senior, Carl
Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223-233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.
This study examined if there is a difference in viewer perception of computer animated character facial expressions based on character body style, specifically, realistic and stylized character body styles. Participants viewed twenty clips of computer animated characters expressing one of five emotions: sadness, happiness, anger, surprise and fear. They then named the emotion and rated the sincerity, intensity, and typicality of each clip. The results indicated that for recognition, participa...
Camras, Linda A; Oster, Harriet; Campos, Joseph J; Bakeman, Roger
Charles Darwin was among the first to recognize the important contribution that infant studies could make to our understanding of human emotional expression. Noting that infants come to exhibit many emotions, he also observed that at first their repertoire of expression is highly restricted. Today, considerable controversy exists regarding the question of whether infants experience and express discrete emotions. According to one position, discrete emotions emerge during infancy along with their prototypic facial expressions. These expressions closely resemble adult emotional expressions and are invariantly concordant with their corresponding emotions. In contrast, we propose that the relation between expression and emotion during infancy is more complex. Some infant emotions and emotional expressions may not be invariantly concordant. Furthermore, infant emotional expressions may be less differentiated than previously proposed. Together with past developmental studies, recent cross-cultural research supports this view and suggests that negative emotional expression in particular is only partly differentiated towards the end of the first year.
Calvo, Manuel G.; Fernandez-Martin, Andres; Nummenmaa, Lauri
Why is a face with a smile but non-happy eyes likely to be interpreted as happy? We used blended expressions in which a smiling mouth was incongruent with the eyes (e.g., angry eyes), as well as genuine expressions with congruent eyes and mouth (e.g., both happy or angry). Tasks involved detection of a smiling mouth (perceptual), categorization of…
Fernández-Dols, José-Miguel; Carrera, Pilar; Barchard, Kimberly A; Gacitua, Marta
This article examines the importance of semantic processes in the recognition of emotional expressions, through a series of three studies on false recognition. The first study found a high frequency of false recognition of prototypical expressions of emotion when participants viewed slides and video clips of nonprototypical fearful and happy expressions. The second study tested whether semantic processes caused false recognition. The authors found that participants made significantly higher error rates when asked to detect expressions that corresponded to semantic labels than when asked to detect visual stimuli. Finally, given that previous research reported that false memories are less prevalent in younger children, the third study tested whether false recognition of prototypical expressions increased with age. The authors found that 67% of 8- to 9-year-old children reported nonpresent prototypical expressions of fear in a fearful context, but only 40% of 6- to 7-year-old children did so. Taken together, these three studies demonstrate the importance of semantic processes in the detection and categorization of prototypical emotional expressions.
Schneevogt, Daniela; Paggio, Patrizia
Recent studies have demonstrated gender and cultural differences in the recognition of emotions in facial expressions. However, most studies were conducted on American subjects. In this paper, we explore the generalizability of several findings to a non-American culture in the form of Danish subjects. We conduct an emotion recognition task followed by two stereotype questionnaires with different genders and age groups. While recent findings (Krems et al., 2015) suggest that women are biased to see anger in neutral facial expressions posed by females, in our sample both genders assign higher ratings of anger to all emotions expressed by females. Furthermore, we demonstrate an effect of gender on the fear-surprise-confusion observed by Tomkins and McCarter (1964); females overpredict fear, while males overpredict surprise.
Cutting, James E; Armstrong, Kacie L
The perception of facial expressions and objects at a distance are entrenched psychological research venues, but their intersection is not. We were motivated to study them together because of their joint importance in the physical composition of popular movies: shots that show a larger image of a face typically have shorter durations than those in which the face is smaller. For static images, we explore the time it takes viewers to categorize the valence of different facial expressions as a function of their visual size. In two studies, we find that smaller faces take longer to categorize than those that are larger, and this pattern interacts with local background clutter. More clutter creates crowding and impedes the interpretation of expressions for more distant faces but not proximal ones. Filmmakers at least tacitly know this. In two other studies, we show that contemporary movies lengthen shots that show smaller faces, and even more so with increased clutter.
Guesgen, M J; Beausoleil, N J; Leach, M; Minot, E O; Stewart, M; Stafford, K J
Facial expressions are routinely used to assess pain in humans, particularly those who are non-verbal. Recently, there has been an interest in developing coding systems for facial grimacing in non-human animals, such as rodents, rabbits, horses and sheep. The aims of this preliminary study were to: 1. Qualitatively identify facial feature changes in lambs experiencing pain as a result of tail-docking and compile these changes to create a Lamb Grimace Scale (LGS); 2. Determine whether human observers can use the LGS to differentiate tail-docked lambs from control lambs and differentiate lambs before and after docking; 3. Determine whether changes in facial action units of the LGS can be objectively quantified in lambs before and after docking; 4. Evaluate effects of restraint of lambs on observers' perceptions of pain using the LGS and on quantitative measures of facial action units. By comparing images of lambs before (no pain) and after (pain) tail-docking, the LGS was devised in consultation with scientists experienced in assessing facial expression in other species. The LGS consists of five facial action units: Orbital Tightening, Mouth Features, Nose Features, Cheek Flattening and Ear Posture. The aims of the study were addressed in two experiments. In Experiment I, still images of the faces of restrained lambs were taken from video footage before and after tail-docking (n=4) or sham tail-docking (n=3). These images were scored by a group of five naïve human observers using the LGS. Because lambs were restrained for the duration of the experiment, Ear Posture was not scored. The scores for the images were averaged to provide one value per feature per period, and then the scores for the four LGS action units were averaged to give one LGS score per lamb per period. In Experiment II, still images of the faces of nine lambs were taken before and after tail-docking. Stills were taken when lambs were restrained and unrestrained in each period. A different group of five
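The scoring arithmetic described for Experiment I (average each action unit across the five observers, then average the four action-unit means into a single LGS score per lamb per period) can be sketched as follows; the numeric ratings are invented for illustration and do not come from the study.

```python
import numpy as np

def lgs_score(ratings):
    """Collapse an (observers x action_units) array of LGS ratings:
    first average over observers (one value per action unit),
    then average the action-unit means into a single LGS score."""
    per_unit = np.asarray(ratings, dtype=float).mean(axis=0)
    return float(per_unit.mean())

# Invented ratings from five observers on the four action units scored in
# Experiment I (Orbital Tightening, Mouth, Nose, Cheek Flattening).
before_docking = [[0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]]
after_docking  = [[2, 1, 1, 2],
                  [1, 2, 2, 1],
                  [2, 2, 1, 1],
                  [1, 1, 2, 2],
                  [2, 1, 1, 2]]

print(lgs_score(before_docking))  # 0.2
print(lgs_score(after_docking))   # 1.5
```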
Kessler, Henrik; Doyen-Waldecker, Cornelia; Hofer, Christian; Hoffmann, Holger; Traue, Harald C; Abler, Birgit
This study investigated brain areas involved in the perception of dynamic facial expressions of emotion. A group of 30 healthy subjects was measured with fMRI when passively viewing prototypical facial expressions of fear, disgust, sadness and happiness. Using morphing techniques, all faces were displayed as still images and also dynamically as a film clip with the expressions evolving from neutral to emotional. Irrespective of a specific emotion, dynamic stimuli selectively activated bilateral superior temporal sulcus, visual area V5, fusiform gyrus, thalamus and other frontal and parietal areas. Interaction effects of emotion and mode of presentation (static/dynamic) were only found for the expression of happiness, where static faces evoked greater activity in the medial prefrontal cortex. Our results confirm previous findings on neural correlates of the perception of dynamic facial expressions and are in line with studies showing the importance of the superior temporal sulcus and V5 in the perception of biological motion. Differential activation in the fusiform gyrus for dynamic stimuli stands in contrast to classical models of face perception but is coherent with new findings arguing for a more general role of the fusiform gyrus in the processing of socially relevant stimuli.
Fang, Xia; Sauter, Disa A; Van Kleef, Gerben A
Although perceivers often agree about the primary emotion that is conveyed by a particular expression, observers may concurrently perceive several additional emotions from a given facial expression. In the present research, we compared the perception of two types of nonintended emotions in Chinese and Dutch observers viewing facial expressions: emotions which were morphologically similar to the intended emotion and emotions which were morphologically dissimilar to the intended emotion. Findings were consistent across two studies and showed that (a) morphologically similar emotions were endorsed to a greater extent than dissimilar emotions and (b) Chinese observers endorsed nonintended emotions more than did Dutch observers. Furthermore, the difference between Chinese and Dutch observers was more pronounced for the endorsement of morphologically similar emotions than of dissimilar emotions. We also obtained consistent evidence that Dutch observers endorsed nonintended emotions that were congruent with the preceding expressions to a greater degree. These findings suggest that culture and morphological similarity both influence the extent to which perceivers see several emotions in a facial expression.
Itier, Roxane J; Neath-Tavares, Karly N
Task demands shape how we process environmental stimuli, but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during gender discrimination, explicit emotion discrimination and oddball detection tasks, the most studied tasks in the field. Using an eye tracker, fixation on the nose of the face was enforced using a gaze-contingent presentation. Task demands modulated amplitudes from 200 to 350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting at the N170, from 150 to 350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for any of the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant to the task at hand, the neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing. Copyright © 2017 Elsevier B.V. All rights reserved.
Sedda, Anna; Petito, Sara; Guarino, Maria; Stracciari, Andrea
Most studies to date show an impairment in the recognition of facial displays of disgust in Parkinson disease. A general impairment in disgust processing in patients with Parkinson disease might adversely affect their social interactions, given the relevance of this emotion for human relations. However, despite the importance of faces, disgust is also expressed through other formats of visual stimuli, such as sentences and visual images. The aim of our study was to explore disgust processing in a sample of patients affected by Parkinson disease, by means of various tests tackling not only facial recognition but also the other formats of visual stimuli through which disgust can be recognized. Our results confirm that patients are impaired in recognizing facial displays of disgust. Further analyses show that patients are also impaired and slower for other facial expressions, with the sole exception of happiness. Notably, however, patients with Parkinson disease processed visual images and sentences as controls did. Our findings show a dissociation between different formats of visual stimuli of disgust, suggesting that Parkinson disease is not characterized by a general compromising of disgust processing, as often suggested. The involvement of the basal ganglia-frontal cortex system might spare some cognitive components of emotional processing, related to memory and culture, at least for disgust. Copyright © 2017 Elsevier B.V. All rights reserved.
Valente, Dannyelle; Theurel, Anne; Gentaz, Edouard
Facial expressions of emotion are nonverbal behaviors that allow us to interact efficiently in social life and respond to events affecting our welfare. This article reviews 21 studies, published between 1932 and 2015, examining the production of facial expressions of emotion by blind people. It particularly discusses the impact of visual experience on the development of this behavior from birth to adulthood. After a discussion of three methodological considerations, the review of studies reveals that blind subjects demonstrate differing capacities for producing spontaneous expressions and voluntarily posed expressions. Seventeen studies provided evidence that blind and sighted spontaneously produce the same pattern of facial expressions, even if some variations can be found, reflecting facial and body movements specific to blindness or differences in intensity and control of emotions in some specific contexts. This suggests that lack of visual experience seems to not have a major impact when this behavior is generated spontaneously in real emotional contexts. In contrast, eight studies examining voluntary expressions indicate that blind individuals have difficulty posing emotional expressions. The opportunity for prior visual observation seems to affect performance in this case. Finally, we discuss three new directions for research to provide additional and strong evidence for the debate regarding the innate or the culture-constant learning character of the production of emotional facial expressions by blind individuals: the link between perception and production of facial expressions, the impact of display rules in the absence of vision, and the role of other channels in expression of emotions in the context of blindness.
Ewald, Anais; Becker, Susanne; Heinrich, Angela; Banaschewski, Tobias; Poustka, Luise; Bokde, Arun; Büchel, Christian; Bromberg, Uli; Cattrell, Anna; Conrod, Patricia; Desrivières, Sylvane; Frouin, Vincent; Papadopoulos-Orfanos, Dimitri; Gallinat, Jürgen; Garavan, Hugh; Heinz, Andreas; Walter, Henrik; Ittermann, Bernd; Gowland, Penny; Paus, Tomáš; Martinot, Jean-Luc; Paillère Martinot, Marie-Laure; Smolka, Michael N; Vetter, Nora; Whelan, Rob; Schumann, Gunter; Flor, Herta; Nees, Frauke
The processing of emotional faces is an important prerequisite for adequate social interactions in daily life, and might thus specifically be altered in adolescence, a period marked by significant changes in social emotional processing. Previous research has shown that the cannabinoid receptor CB1R is associated with longer gaze duration and increased brain responses in the striatum to happy faces in adults; yet, for adolescents, it is not clear whether an association between CB1R and face processing exists. In the present study we investigated genetic effects of the two CB1R polymorphisms, rs1049353 and rs806377, on the processing of emotional faces in healthy adolescents. They participated in functional magnetic resonance imaging during a Faces Task, watching blocks of video clips with angry and neutral facial expressions, and completed a Morphed Faces Task in the laboratory where they looked at different facial expressions that switched from anger to fear or sadness or from happiness to fear or sadness, and labelled them according to these four emotional expressions. A-allele carriers versus GG-carriers in rs1049353 displayed earlier recognition of facial expressions changing from anger to sadness or fear, but not of expressions changing from happiness to sadness or fear, and higher brain responses to angry, but not neutral, faces in the amygdala and insula. For rs806377 no significant effects emerged. This suggests that rs1049353 is involved in the processing of negative facial expressions with relation to anger in adolescence. These findings add to our understanding of social emotion-related mechanisms in this life period. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.
Golouboff, Nathalie; Fiori, Nicole; Delalande, Olivier; Fohlen, Martine; Dellatolas, Georges; Jambaque, Isabelle
The amygdala has been implicated in the recognition of facial emotions, especially fearful expressions, in adults with early-onset right temporal lobe epilepsy (TLE). The present study investigates the recognition of facial emotions in children and adolescents, 8-16 years old, with epilepsy. Twenty-nine subjects had TLE (13 right, 16 left) and…
Trinkler, Iris; Cleret de Langavant, Laurent; Bachoud-Lévi, Anne-Catherine
Patients with Huntington's disease (HD), a neurodegenerative disorder that causes major motor impairments, also show cognitive and emotional deficits. While their deficit in recognising emotions has been explored in depth, little is known about their ability to express emotions and understand their feelings. If these faculties were impaired, patients might not only mis-read emotion expressions in others, but their own emotions might also be mis-interpreted by others, or they might have difficulties understanding and describing their feelings. We compared the performance of recognition and expression of facial emotions in 13 HD patients with mild motor impairments but without significant bucco-facial abnormalities, and 13 controls matched for age and education. Emotion recognition was investigated in a forced-choice recognition test (FCR), and emotion expression by filming participants while they mimed the six basic emotional facial expressions (anger, disgust, fear, surprise, sadness and joy) to the experimenter. The films were then segmented into 60 stimuli per participant and four external raters performed an FCR on this material. Further, we tested understanding of feelings in self (alexithymia) and others (empathy) using questionnaires. Both recognition and expression were impaired across different emotions in HD compared to controls, and recognition and expression scores were correlated. By contrast, alexithymia and empathy scores were very similar in HD and controls. This suggests that emotion deficits in HD might be tied to the expression itself. Because similar emotion recognition-expression deficits are also found in Parkinson's Disease and vascular lesions of the striatum, our results further confirm the importance of the striatum for emotion recognition and expression, while access to the meaning of feelings relies on a different brain network and is spared in HD. Copyright © 2011 Elsevier Ltd. All rights reserved.
Matsumoto, David; Willingham, Bob; Olide, Andres
There is consensus that when emotions are aroused, the displays of those emotions are either universal or culture-specific. We investigated the idea that an individual's emotional displays in a given context can be both universal and culturally variable, as they change over time. We examined the emotional displays of Olympic athletes across time, classified their expressive styles, and tested the association between those styles and a number of characteristics associated with the countries the athletes represented. Athletes from relatively urban, individualistic cultures expressed their emotions more, whereas athletes from less urban, collectivistic cultures masked their emotions more. These culturally influenced expressions occurred within a few seconds after initial, immediate, and universal emotional displays. Thus, universal and culture-specific emotional displays can unfold across time in an individual in a single context.
Bergström-Isacsson, Märith; Lagerkvist, Bengt; Holck, Ulla
The aim was to investigate whether the Facial Action Coding System (FACS) could be used to identify facial expressions and to differentiate between those that expressed emotions and those that were elicited by abnormal brainstem activation in Rett syndrome (RTT). The sample comprised 29 participants with RTT and 11 children with a normal developmental pattern, exposed to six different musical stimuli during non-invasive registration of autonomic brainstem functions. The results indicate that FACS makes it possible both to identify facial expressions and to differentiate between those that stem from emotions and those caused by abnormal brainstem activation.
Only a few studies investigated whether animal phobics exhibit attentional biases in contexts where no phobic stimuli are present. Among these, recent studies provided evidence for a bias toward facial expressions of fear and disgust in animal phobics. Such findings may be due to the fact that these expressions could signal the presence of a phobic object in the surroundings. To test this hypothesis and further investigate attentional biases for emotional faces in animal phobics, we conducted an experiment using a gaze-cuing paradigm in which participants’ attention was driven by the task-irrelevant gaze of a centrally presented face. We employed dynamic negative facial expressions of disgust, fear and anger and found an enhanced gaze-cuing effect in snake phobics as compared to controls, irrespective of facial expression. These results provide evidence of a general hypervigilance in animal phobics in the absence of phobic stimuli, and indicate that research on specific phobias should not be limited to symptom provocation paradigms.
Drapeau, Joanie; Gosselin, Nathalie; Peretz, Isabelle; McKerral, Michelle
To assess emotion recognition from dynamic facial, vocal and musical expressions in sub-groups of adults with traumatic brain injuries (TBI) of different severities and identify possible common underlying mechanisms across domains. Forty-one adults participated in this study: 10 with moderate-severe TBI, 9 with complicated mild TBI, 11 with uncomplicated mild TBI, and 11 healthy controls, who were administered experimental (emotional recognition, valence-arousal) and control tasks (emotional and structural discrimination) for each domain. Recognition of fearful faces was significantly impaired in the moderate-severe and complicated mild TBI sub-groups, as compared to those with uncomplicated mild TBI and controls. Effect sizes were medium-large. Participants with lower Glasgow Coma Scale (GCS) scores performed more poorly when recognizing fearful dynamic facial expressions. Emotion recognition from auditory domains was preserved following TBI, irrespective of severity. All groups performed equally on control tasks, indicating no perceptual disorders. Although emotional recognition from vocal and musical expressions was preserved, no correlation was found across auditory domains. This preliminary study may contribute to improving our understanding of emotion recognition following TBI. Future studies of larger samples could usefully include measures of functional impacts of recognition deficits for fearful facial expressions. These could help refine interventions for emotional recognition following a brain injury.
Meng, Hongying; Bianchi-Berthouze, Nadia
Naturalistic affective expressions change at a rate much slower than the typical rate at which video or audio is recorded. This increases the probability that consecutive recorded instants of expressions represent the same affective content. In this paper, we exploit such a relationship to improve the recognition performance of continuous naturalistic affective expressions. Using datasets of naturalistic affective expressions (AVEC 2011 audio and video dataset, PAINFUL video dataset) continuously labeled over time and over different dimensions, we analyze the transitions between levels of those dimensions (e.g., transitions in pain intensity level). We use an information theory approach to show that the transitions occur very slowly and hence suggest modeling them as first-order Markov models. The dimension levels are considered to be the hidden states in the Hidden Markov Model (HMM) framework. Their discrete transition and emission matrices are trained by using the labels provided with the training set. The recognition problem is converted into a best path-finding problem to obtain the best hidden states sequence in HMMs. This is a key difference from previous use of HMMs as classifiers. Modeling of the transitions between dimension levels is integrated in a multistage approach, where the first level performs a mapping between the affective expression features and a soft decision value (e.g., an affective dimension level), and further classification stages are modeled as HMMs that refine that mapping by taking into account the temporal…
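The best path-finding step this abstract describes is a standard Viterbi search over a first-order HMM whose hidden states are discretized affect-dimension levels. The sketch below illustrates that decoding under stated assumptions: the transition and emission matrices and the observation sequence are hypothetical placeholders (not values trained on AVEC 2011 or PAINFUL), with strong self-transition probabilities mirroring the paper's finding that level transitions occur slowly.

```python
import numpy as np

def viterbi(obs, trans, emit, init):
    """Most likely hidden-state path for a discrete observation sequence.

    obs   : list of observation indices (e.g., quantized soft-decision
            values from a first-stage classifier)
    trans : (S, S) transition matrix, trans[i, j] = P(state j | state i)
    emit  : (S, O) emission matrix, emit[i, k] = P(obs k | state i)
    init  : (S,) initial state distribution
    """
    S = trans.shape[0]
    T = len(obs)
    # Work in log space to avoid numerical underflow on long sequences.
    log_trans = np.log(trans + 1e-12)
    log_emit = np.log(emit + 1e-12)
    delta = np.log(init + 1e-12) + log_emit[:, obs[0]]
    back = np.zeros((T, S), dtype=int)  # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans      # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)          # best predecessor per state
        delta = scores.max(axis=0) + log_emit[:, obs[t]]
    # Backtrack from the best final state.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical 3-level intensity model: heavy self-transitions encode
# the assumption that affect levels change slowly between frames.
trans = np.array([[0.90, 0.08, 0.02],
                  [0.05, 0.90, 0.05],
                  [0.02, 0.08, 0.90]])
emit = np.array([[0.7, 0.2, 0.1],
                 [0.2, 0.6, 0.2],
                 [0.1, 0.2, 0.7]])
init = np.array([0.80, 0.15, 0.05])
obs = [0, 0, 1, 0, 1, 1, 2, 2, 1, 2]
print(viterbi(obs, trans, emit, init))
```

Because the self-transition probabilities dominate, the decoded path smooths out isolated noisy observations, which is the intended refinement of the first-stage frame-by-frame decisions.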