WorldWideScience

Sample records for facial expression processing

  1. Processing faces and facial expressions.

    Science.gov (United States)

    Posamentier, Mette T; Abdi, Hervé

    2003-09-01

    This paper reviews the processing of facial identity and expressions. The question of whether these two tasks are served by independent systems has been addressed from different approaches over the past 25 years. More recently, neuroimaging techniques have provided researchers with new tools to investigate how facial information is processed in the brain. First, findings from "traditional" approaches to identity and expression processing are summarized. The review then covers findings from neuroimaging studies on face perception, recognition, and encoding. Processing of the basic facial expressions is detailed in light of behavioral and neuroimaging data. Whereas data from experimental and neuropsychological studies support the existence of two systems, the neuroimaging literature yields a less clear picture because it shows considerable overlap in activation patterns across the different face-processing tasks. Further, activation patterns in response to facial expressions support the notion that distinct neural substrates are involved in processing different facial expressions.

  2. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    Science.gov (United States)

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  3. The relationships between processing facial identity, emotional expression, facial speech, and gaze direction during development.

    Science.gov (United States)

    Spangler, Sibylle M; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding facial speech, emotional expression, and gaze direction or, alternatively, according to facial speech, emotional expression, and gaze direction while disregarding facial identity. Reaction times showed that children and adults were able to direct their attention selectively to facial identity despite variations of other kinds of face information, but when sorting according to facial speech and emotional expression, they were unable to ignore facial identity. In contrast, gaze direction could be processed independently of facial identity in all age groups. Apart from shorter reaction times and fewer classification errors, no substantial change in processing facial information was found to be correlated with age. We conclude that adult-like face processing routes are employed from 5 years of age onward.

  4. Face Processing in Children with Autism Spectrum Disorder: Independent or Interactive Processing of Facial Identity and Facial Expression?

    Science.gov (United States)

    Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-01-01

    The current study investigated whether deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ- and age-matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity, or by facial identity…

  5. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    Science.gov (United States)

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently.

  6. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. … The present study elucidated the close relationships between fluid intelligence and pre-attentive change detection on social-emotional information.

  7. Dissociating Face Identity and Facial Expression Processing Via Visual Adaptation

    Directory of Open Access Journals (Sweden)

    Hong Xu

    2012-10-01

    Face identity and facial expression are processed in two distinct neural pathways. However, most of the existing face adaptation literature studies them separately, despite the fact that they are two aspects of the same face. The current study conducted a systematic comparison between these two aspects by face adaptation, investigating how top- and bottom-half face parts contribute to the processing of face identity and facial expression. A real face (sad, "Adam") and its two size-equivalent face parts (top- and bottom-half) were used as the adaptor in separate conditions. For face identity adaptation, the test stimuli were generated by morphing Adam's sad face with another person's sad face ("Sam"). For facial expression adaptation, the test stimuli were created by morphing Adam's sad face with his neutral face and morphing the neutral face with his happy face. In each trial, after exposure to the adaptor, observers indicated the perceived face identity or facial expression of the following test face via a key press. They were also tested in a baseline condition without adaptation. Results show that the top- and bottom-half faces each generated a significant face identity aftereffect. However, the aftereffect from top-half face adaptation was much larger than that from the bottom-half face. In contrast, only the bottom-half face generated a significant facial expression aftereffect. This dissociation of top- and bottom-half face adaptation suggests that face parts play different roles in face identity and facial expression processing. It thus provides further evidence for the distributed systems of face perception.

  8. Featural processing in recognition of emotional facial expressions.

    Science.gov (United States)

    Beaudry, Olivia; Roy-Charland, Annie; Perron, Melanie; Cormier, Isabelle; Tapp, Roxane

    2014-04-01

    The present study aimed to clarify the role played by the eye/brow and mouth areas in the recognition of the six basic emotions. In Experiment 1, accuracy was examined while participants viewed partial and full facial expressions; in Experiment 2, participants viewed full facial expressions while their eye movements were recorded. Recognition rates were consistent with previous research: happiness was highest and fear was lowest. The mouth and eye/brow areas were not equally important for the recognition of all emotions. More precisely, while the mouth was revealed to be important in the recognition of happiness and the eye/brow area in that of sadness, results were not as consistent for the other emotions. In Experiment 2, consistent with previous studies, the eyes/brows were fixated for longer periods than the mouth for all emotions. Again, variations occurred as a function of the emotion, the mouth playing an important role in happiness and the eyes/brows in sadness. The general pattern of results for the other four emotions was inconsistent between the experiments as well as across different measures. The complexity of the results suggests that the recognition of emotional facial expressions cannot be reduced to simple featural or holistic processing for all emotions.

  9. Facial Expression Analysis

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial components…

  10. Facial Expression Recognition

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial components…
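
    Taken together, the two records above describe FER as a three-stage pipeline: detect the face, extract features, classify the expression. A minimal Python sketch of that pipeline follows, using OpenCV's stock Haar cascade for the detection stage; the image path, the raw-pixel features, and the omitted classifier are placeholder assumptions, not the chapter's method.

        # Sketch of the three-step FER pipeline described above (assumptions:
        # input path, 48x48 raw-pixel features, classifier left abstract).
        import cv2

        img = cv2.imread("face.jpg")              # hypothetical input image
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        # Step 1: locate faces in the scene (face detection).
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        for (x, y, w, h) in faces:
            roi = gray[y:y + h, x:x + w]
            # Step 2: extract facial features from the detected region; here a
            # plain resized pixel vector stands in for shape/appearance features.
            features = cv2.resize(roi, (48, 48)).flatten()
            # Step 3: classify the feature vector into an expression category
            # with any trained classifier (SVM, neural network, ...), omitted here.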

  11. Revisiting the Relationship between the Processing of Gaze Direction and the Processing of Facial Expression

    Science.gov (United States)

    Ganel, Tzvi

    2011-01-01

    There is mixed evidence on the nature of the relationship between the perception of gaze direction and the perception of facial expressions. Major support for shared processing of gaze and expression comes from behavioral studies that showed that observers cannot process expression or gaze and ignore irrelevant variations in the other dimension.…

  12. Processing of individual items during ensemble coding of facial expressions

    Directory of Open Access Journals (Sweden)

    Huiyun Li

    2016-09-01

    There is growing evidence that human observers are able to extract the mean emotion or other types of information from a set of faces. The most intriguing aspect of this phenomenon is that observers often fail to identify or form a representation for individual faces in a face set. However, most of these results were based on judgments made under limited processing resources. We examined a wider range of exposure times and observed how the relationship between the extraction of a mean and the representation of individual facial expressions changed. The results showed that with an exposure time of 50 milliseconds for the faces, observers were more sensitive to the mean representation than to individual representations, replicating the typical findings in the literature. With longer exposure times, however, observers were able to extract both individual and mean representations more accurately. Furthermore, diffusion model analysis revealed that the mean representation is more prone to suffer from noise accumulated over redundant processing time, leading to a more conservative decision bias, whereas individual representations seem more resistant to this noise. The results suggest that the encoding of emotional information from multiple faces may take two forms: single-face processing and crowd-face processing.
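
    The diffusion model analysis mentioned above treats each judgement as noisy evidence accumulating toward a decision boundary, where a wider boundary produces slower, more conservative decisions. The toy simulation below illustrates that idea only; the study fit such a model to response data rather than simulating it, and every parameter here is an arbitrary assumption.

        # Minimal drift-diffusion simulation: evidence accumulates with drift
        # plus Gaussian noise until it crosses +boundary or -boundary.
        import numpy as np

        def ddm_trial(drift, boundary, noise=1.0, dt=0.001, rng=None):
            """Return (choice, reaction_time) for one simulated trial."""
            rng = rng if rng is not None else np.random.default_rng()
            x, t = 0.0, 0.0
            while abs(x) < boundary:
                x += drift * dt + noise * np.sqrt(dt) * rng.normal()
                t += dt
            return x > 0, t

        rng = np.random.default_rng(5)
        # A wider boundary models a more conservative decision bias:
        # slower responses that are less driven by momentary noise.
        print(ddm_trial(drift=1.0, boundary=1.0, rng=rng))
        print(ddm_trial(drift=1.0, boundary=2.0, rng=rng))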

  13. Discriminative shared Gaussian processes for multiview and view-invariant facial expression recognition.

    Science.gov (United States)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2015-01-01

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers learned separately for each view or a single classifier learned for all views. However, these approaches ignore the fact that different views of a facial expression are just different manifestations of the same facial expression. By accounting for this redundancy, we can design more effective classifiers for the target task. To this end, we propose a discriminative shared Gaussian process latent variable model (DS-GPLVM) for multiview and view-invariant classification of facial expressions from multiple views. In this model, we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Finally, classification of an observed facial expression is carried out either in the view-invariant manner (using only a single view of the expression) or in the multiview manner (using multiple views of the expression). The proposed model can also be used to perform fusion of different facial features in a principled manner. We validate the proposed DS-GPLVM on both posed and spontaneously displayed facial expressions from three publicly available datasets (MultiPIE, labeled face parts in the wild, and static facial expressions in the wild). We show that this model outperforms the state-of-the-art methods for multiview and view-invariant facial expression classification, and several state-of-the-art methods for multiview learning and feature fusion.
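
    The DS-GPLVM itself is too involved for a short example, but the core idea, projecting multiple views of the same expression into one shared latent space and classifying there, can be illustrated with a much simpler linear stand-in. The sketch below is an assumption-laden toy, not the authors' model: CCA plays the role of the shared latent space, an SVM classifies in it, and the two "views" are synthetic.

        # Toy shared-latent-space illustration (NOT the DS-GPLVM): CCA maps two
        # views of the same expressions into a common subspace, where a single
        # classifier trained on one view can score the other.
        import numpy as np
        from sklearn.cross_decomposition import CCA
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        n, d = 200, 50
        latent = rng.normal(size=(n, 5))                  # shared manifold
        view_a = latent @ rng.normal(size=(5, d)) + 0.1 * rng.normal(size=(n, d))
        view_b = latent @ rng.normal(size=(5, d)) + 0.1 * rng.normal(size=(n, d))
        labels = (latent[:, 0] > 0).astype(int)           # binary expression label

        cca = CCA(n_components=5).fit(view_a, view_b)
        za, zb = cca.transform(view_a, view_b)            # views -> shared space

        clf = SVC().fit(za, labels)                       # train on view A latents
        print("accuracy on view B:", clf.score(zb, labels))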

  14. Reduced capacity in automatic processing of facial expression in restrictive anorexia nervosa and obesity

    NARCIS (Netherlands)

    Cserjesi, Renata; Vermeulen, Nicolas; Lenard, Laszlo; Luminet, Olivier

    2011-01-01

    There is growing evidence that disordered eating is associated with facial expression recognition and emotion processing problems. In this study, we investigated the question of whether anorexia and obesity occur on a continuum of attention bias towards negative facial expressions in comparison with…

  15. Processing facial expressions of emotion: upright vs. inverted images

    Directory of Open Access Journals (Sweden)

    David eBimler

    2013-02-01

    We studied discrimination of briefly presented upright vs. inverted emotional facial expressions (FEs), hypothesising that inversion would impair emotion decoding by disrupting holistic FE processing. Stimuli were photographs of seven emotion prototypes, of a male and female poser (Ekman and Friesen, 1976), and eight intermediate morphs in each set. Subjects made speeded Same/Different judgements of emotional content for all upright (U) or inverted (I) pairs of FEs, presented for 500 ms, 100 times each pair. Signal detection theory revealed the sensitivity measure d' to be slightly but significantly higher for the upright FEs. In further analysis using multidimensional scaling (MDS), percentages of Same judgements were taken as an index of pairwise perceptual similarity, separately for U and I presentation modes. The outcome was a 4D 'emotion expression space', with FEs represented as points and the dimensions identified as Happy–Sad, Surprise/Fear, Disgust and Anger. The solutions for U and I FEs were compared by means of cophenetic and canonical correlation, Procrustes analysis and weighted-Euclidean analysis of individual differences. Differences in discrimination produced by inverting FE stimuli were found to be small and manifested as minor changes in the MDS structure or weights of the dimensions. Solutions differed substantially more between the two posers, however. Notably, for stimuli containing elements of happiness (whether U or I), the MDS structure revealed some signs of categorical perception, indicating that mouth curvature, the dominant feature conveying happiness, is visually salient and receives early processing. The findings suggest that for briefly presented FEs, Same/Different decisions are dominated by low-level visual analysis of abstract patterns of lightness and edge filters, but also reflect emerging featural analysis. These analyses, insensitive to face orientation, enable initial positive/negative valence…
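
    The sensitivity measure d' above is standard signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal computation is sketched below, assuming that a "hit" is a Different response to a truly different pair and a "false alarm" is a Different response to an identical pair; the counts are invented for illustration.

        # d' from Same/Different counts: d' = z(hit rate) - z(false-alarm rate).
        from scipy.stats import norm

        hits, misses = 78, 22        # responses to truly different pairs
        fas, crs = 31, 69            # responses to identical pairs

        hit_rate = hits / (hits + misses)
        fa_rate = fas / (fas + crs)
        d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
        print(f"d' = {d_prime:.2f}")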

  16. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions

    OpenAIRE

    2016-01-01

    Most previous studies on facial expression recognition have focused on the moderate emotions; to date, few studies have been conducted to investigate the explicit and implicit processes of peak emotions. In the current study, we used transiently peak intense expression images of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed t...

  17. Can we distinguish emotions from faces? Investigation of implicit and explicit processes of peak facial expressions

    Directory of Open Access Journals (Sweden)

    Yanmei Wang

    2016-08-01

    Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have been conducted to investigate the explicit and implicit processes of peak emotions. In the current study, we used transiently peak intense expression images of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds, and eye movements were recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, eye movement records showed that the participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes…

  18. A detailed investigation of facial expression processing in congenital prosopagnosia as compared to acquired prosopagnosia.

    Science.gov (United States)

    Humphreys, Kate; Avidan, Galia; Behrmann, Marlene

    2007-01-01

    Whether the ability to recognize facial expression can be preserved in the absence of the recognition of facial identity remains controversial. The current study reports the results of a detailed investigation of facial expression recognition in three congenital prosopagnosic (CP) participants, in comparison with two patients with acquired prosopagnosia (AP) and a large group of 30 neurologically normal participants, including individually age- and gender-matched controls. Participants completed a fine-grained expression recognition paradigm requiring a six-alternative forced-choice response to continua of morphs of six different basic facial expressions (e.g. happiness and surprise). Accuracy, sensitivity and reaction times were measured. The performance of all three CP individuals was indistinguishable from that of controls, even for the most subtle expressions. In contrast, both individuals with AP displayed pronounced difficulties with the majority of expressions. The results from the CP participants attest to the dissociability of the processing of facial identity and of facial expression. Whether this remarkably good expression recognition is achieved through normal, or compensatory, mechanisms remains to be determined. Either way, this normal level of performance does not extend to include facial identity.

  19. Long-term academic stress enhances early processing of facial expressions.

    Science.gov (United States)

    Zhang, Liang; Qin, Shaozheng; Yao, Zhuxi; Zhang, Kan; Wu, Jianhui

    2016-11-01

    Exposure to long-term stress can lead to a variety of emotional and behavioral problems. Although widely investigated, the neural basis of how long-term stress impacts emotional processing in humans remains largely elusive. Using event-related brain potentials (ERPs), we investigated the effects of long-term stress on the neural dynamics of emotional facial expression processing. Thirty-nine male college students undergoing preparation for a major examination and twenty-one matched controls performed a gender discrimination task for faces displaying angry, happy, and neutral expressions. The results of the Perceived Stress Scale showed that participants in the stress group perceived higher levels of long-term stress relative to the control group. ERP analyses revealed differential effects of long-term stress on two early stages of facial expression processing: 1) long-term stress generally augmented posterior P1 amplitudes to facial stimuli irrespective of expression valence, suggesting that stress can increase sensitization to visual inputs in general, and 2) long-term stress selectively augmented fronto-central P2 amplitudes for angry but not for neutral or positive facial expressions, suggesting that stress may lead to increased attentional prioritization of processing negative emotional stimuli. Together, our findings suggest that long-term stress has profound impacts on the early stages of facial expression processing, with an increase at the very early stage of general information input and a subsequent attentional bias toward processing emotionally negative stimuli.

  20. [Prosopagnosia and facial expression recognition].

    Science.gov (United States)

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  1. Unconscious Processing of Facial Expressions in Individuals with Internet Gaming Disorder

    Directory of Open Access Journals (Sweden)

    Xiaozhe Peng

    2017-06-01

    Internet Gaming Disorder (IGD) is characterized by impairments in social communication and the avoidance of social contact. Facial expression processing is the basis of social communication. However, few studies have investigated how individuals with IGD process facial expressions, and whether they have deficits in emotional facial processing remains unclear. The aim of the present study was to explore these two issues by investigating the time course of emotional facial processing in individuals with IGD. A backward masking task was used to investigate the differences between individuals with IGD and normal controls (NC) in the processing of subliminally presented facial expressions (sad, happy, and neutral) with event-related potentials (ERPs). The behavioral results showed that individuals with IGD were slower than NC in responding to both sad and neutral expressions in the sad–neutral context. The ERP results showed that individuals with IGD exhibited decreased amplitudes in the ERP component N170 (an index of early face processing) in response to neutral expressions compared to happy expressions in the happy–neutral expressions context, which might be due to their expectancies for positive emotional content. The NC, on the other hand, exhibited comparable N170 amplitudes in response to both happy and neutral expressions in the happy–neutral expressions context, as well as to sad and neutral expressions in the sad–neutral expressions context. Both individuals with IGD and NC showed comparable ERP amplitudes during the processing of sad and neutral expressions. The present study revealed that individuals with IGD have different unconscious neutral facial processing patterns compared with normal individuals and suggested that individuals with IGD may expect more positive emotion in the happy–neutral expressions context.

  2. Retinotopy of facial expression adaptation.

    Science.gov (United States)

    Matsumiya, Kazumichi

    2014-01-01

    The face aftereffect (FAE; the illusion of faces after adaptation to a face) has been reported to occur without retinal overlap between adaptor and test, but recent studies revealed that the FAE is not constant across all test locations, which suggests that the FAE is also retinotopic. However, it remains unclear whether the characteristic of the retinotopy of the FAE for one facial aspect is the same as that of the FAE for another facial aspect. In the research reported here, an examination of the retinotopy of the FAE for facial expression indicated that the facial expression aftereffect occurs without retinal overlap between adaptor and test, and depends on the retinal distance between them. Furthermore, the results indicate that, although dependence of the FAE on adaptation-test distance is similar between facial expression and facial identity, the FAE for facial identity is larger than that for facial expression when a test face is presented in the opposite hemifield. On the basis of these results, I discuss adaptation mechanisms underlying facial expression processing and facial identity processing for the retinotopy of the FAE.

  3. Putting the face in context: Body expressions impact facial emotion processing in human infants

    Directory of Open Access Journals (Sweden)

    Purva Rajhans

    2016-06-01

    Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception.

  4. Cradling Side Preference Is Associated with Lateralized Processing of Baby Facial Expressions in Females

    Science.gov (United States)

    Huggenberger, Harriet J.; Suter, Susanne E.; Reijnen, Ester; Schachinger, Hartmut

    2009-01-01

    Women's cradling side preference has been related to contralateral hemispheric specialization of processing emotional signals; but not of processing baby's facial expression. Therefore, 46 nulliparous female volunteers were characterized as left or non-left holders (HG) during a doll holding task. During a signal detection task they were then…

  5. Are event-related potentials to dynamic facial expressions of emotion related to individual differences in the accuracy of processing facial expressions and identity?

    Science.gov (United States)

    Recio, Guillermo; Wilhelm, Oliver; Sommer, Werner; Hildebrandt, Andrea

    2017-04-01

    Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

  6. Processing of Facial Expressions of Emotions by Adults with Down Syndrome and Moderate Intellectual Disability

    Science.gov (United States)

    Carvajal, Fernando; Fernandez-Alcaraz, Camino; Rueda, Maria; Sarrion, Louise

    2012-01-01

    The processing of facial expressions of emotions by 23 adults with Down syndrome and moderate intellectual disability was compared with that of adults with intellectual disability of other etiologies (24 matched in cognitive level and 26 with mild intellectual disability). Each participant performed 4 tasks of the Florida Affect Battery and an…

  7. Holistic facial expression classification

    Science.gov (United States)

    Ghent, John; McDonald, J.

    2005-06-01

    This paper details a procedure for classifying facial expressions, a growing and relatively new problem within computer vision. One of the fundamental problems in previous approaches to classifying facial expressions is the lack of a consistent method of measuring expression. This paper addresses that problem through the computation of the Facial Expression Shape Model (FESM). This statistical model of facial expression is based on an anatomical analysis of facial expression called the Facial Action Coding System (FACS). We use the term Action Unit (AU) to describe a movement of one or more muscles of the face, and all expressions can be described using the AUs defined by FACS. The shape model is calculated by marking the face with 122 landmark points. We use Principal Component Analysis (PCA) to analyse how the landmark points move with respect to each other and to lower the dimensionality of the problem. Using the FESM in conjunction with Support Vector Machines (SVMs), we classify facial expressions. SVMs are a powerful machine learning technique based on optimisation theory. This project is largely concerned with the statistical models, machine learning techniques and psychological tools used in the classification of facial expression. This holistic approach to expression classification provides a means for a level of interaction with a computer that is a significant step forward in human-computer interaction.
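
    A rough sketch of the pipeline this abstract describes, 122 (x, y) landmark points reduced with PCA and classified with an SVM, is given below. The random landmark arrays, the six-way labels, the component count and the kernel are placeholder assumptions, not the paper's data or settings.

        # Landmark -> PCA -> SVM sketch of FESM-style expression classification.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)
        n_faces = 300
        landmarks = rng.normal(size=(n_faces, 122 * 2))  # flattened 122 (x, y) points
        expressions = rng.integers(0, 6, size=n_faces)   # six expression classes

        model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
        model.fit(landmarks[:250], expressions[:250])
        print("held-out accuracy:", model.score(landmarks[250:], expressions[250:]))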

  8. The Thatcher illusion reveals orientation dependence in brain regions involved in processing facial expressions.

    Science.gov (United States)

    Psalta, Lilia; Young, Andrew W; Thompson, Peter; Andrews, Timothy J

    2014-01-01

    Although the processing of facial identity is known to be sensitive to the orientation of the face, it is less clear whether orientation sensitivity extends to the processing of facial expressions. To address this issue, we used functional MRI (fMRI) to measure the neural response to the Thatcher illusion. This illusion involves a local inversion of the eyes and mouth in a smiling face: when the face is upright, the inverted features make it appear grotesque, but when the face is inverted, the inversion is no longer apparent. Using an fMRI-adaptation paradigm, we found a release from adaptation in the superior temporal sulcus, a region directly linked to the processing of facial expressions, when the images were upright and they changed from a normal to a Thatcherized configuration. However, this release from adaptation was not evident when the faces were inverted. These results show that regions involved in processing facial expressions display a pronounced orientation sensitivity.

  9. Facial expression and sarcasm.

    Science.gov (United States)

    Rockwell, P

    2001-08-01

    This study examined facial expression in the presentation of sarcasm. 60 responses (sarcastic responses = 30, nonsarcastic responses = 30) from 40 different speakers were coded by two trained coders. Expressions in three facial areas--eyebrow, eyes, and mouth--were evaluated. Only movement in the mouth area significantly differentiated ratings of sarcasm from nonsarcasm.

  10. Identification and intensity of disgust: Distinguishing visual, linguistic and facial expressions processing in Parkinson disease.

    Science.gov (United States)

    Sedda, Anna; Petito, Sara; Guarino, Maria; Stracciari, Andrea

    2017-07-14

    Most studies to date show an impairment in recognizing facial displays of disgust in Parkinson disease. A general impairment in disgust processing in patients with Parkinson disease might adversely affect their social interactions, given the relevance of this emotion for human relations. However, despite the importance of faces, disgust is also expressed through other formats of visual stimuli, such as sentences and visual images. The aim of our study was to explore disgust processing in a sample of patients affected by Parkinson disease, by means of various tests tackling not only facial recognition but also the other formats of visual stimuli through which disgust can be recognized. Our results confirm that patients are impaired in recognizing facial displays of disgust. Further analyses show that patients are also impaired and slower for other facial expressions, with the only exception of happiness. Notably, however, patients with Parkinson disease processed visual images and sentences as controls did. Our findings show a dissociation between different formats of visual stimuli of disgust, suggesting that Parkinson disease is not characterized by a general compromising of disgust processing, as often suggested. The involvement of the basal ganglia-frontal cortex system might spare some cognitive components of emotional processing, related to memory and culture, at least for disgust.

  11. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study.

    Directory of Open Access Journals (Sweden)

    Tongran Liu

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50-130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320-450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre-attentive change detection on social-emotional information.

  12. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study.

    Science.gov (United States)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan; Shi, Jiannong

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50-130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320-450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre-attentive change detection on social-emotional information.
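
    The vMMN analysis in these two records boils down to averaging deviant and standard epochs separately, subtracting the two averages, and measuring the difference wave in a time window such as 50-130 ms. A minimal sketch of that computation on placeholder epoched EEG follows; the array shapes, sampling rate, and single-electrode framing are assumptions.

        # vMMN as a deviant-minus-standard difference wave (placeholder data).
        import numpy as np

        rng = np.random.default_rng(2)
        epochs = rng.normal(size=(100, 500))       # 100 trials x 500 ms at 1 kHz
        is_deviant = rng.random(100) < 0.2         # p = 0.2, as in the paradigm

        erp_dev = epochs[is_deviant].mean(axis=0)  # deviant ERP
        erp_std = epochs[~is_deviant].mean(axis=0) # standard ERP
        diff_wave = erp_dev - erp_std              # difference wave

        win = slice(50, 130)                       # early vMMN window, in ms
        print("early vMMN amplitude:", diff_wave[win].mean())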

  13. Perceptual, Categorical, and Affective Processing of Ambiguous Smiling Facial Expressions

    Science.gov (United States)

    Calvo, Manuel G.; Fernandez-Martin, Andres; Nummenmaa, Lauri

    2012-01-01

    Why is a face with a smile but non-happy eyes likely to be interpreted as happy? We used blended expressions in which a smiling mouth was incongruent with the eyes (e.g., angry eyes), as well as genuine expressions with congruent eyes and mouth (e.g., both happy or angry). Tasks involved detection of a smiling mouth (perceptual), categorization of…

  14. Facial Asymmetry and Emotional Expression

    CERN Document Server

    Pickin, Andrew

    2011-01-01

    This report is about facial asymmetry, its connection to emotional expression, and methods of measuring facial asymmetry in videos of faces. The research was motivated by two factors: firstly, there was a real opportunity to develop a novel measure of asymmetry that required minimal human involvement and that improved on earlier measures in the literature; and secondly, the study of the relationship between facial asymmetry and emotional expression is both interesting in its own right and important because it can inform neuropsychological theory and answer open questions concerning emotional processing in the brain. The two aims of the research were: first, to develop an automatic frame-by-frame measure of facial asymmetry in videos of faces that improved on previous measures; and second, to use the measure to analyse the relationship between facial asymmetry and emotional expression, and to connect our findings with previous research on the relationship.
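
    The report's measure is not reproduced here, but a toy frame-by-frame index conveys the flavour of such automatic measures: reflect each aligned face about its vertical midline and take the mean absolute left-right intensity difference. The alignment assumption and the index itself are illustrative choices, not the report's method.

        # Toy per-frame asymmetry index: mean |face - mirror(face)|.
        import numpy as np

        def asymmetry_index(face):
            """face: 2-D float grayscale array, aligned so the midline is central."""
            mirrored = face[:, ::-1]               # left-right reflection
            return float(np.abs(face - mirrored).mean())

        # Applied to every frame of a face video, this yields a frame-by-frame
        # asymmetry trace of the kind the report relates to emotional expression.
        frame = np.random.default_rng(3).random((128, 128))
        print(asymmetry_index(frame))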

  15. Emotional Processing, Recognition, Empathy and Evoked Facial Expression in Eating Disorders: An Experimental Study to Map Deficits in Social Cognition

    National Research Council Canada - National Science Library

    Cardi, Valentina; Corfield, Freya; Leppanen, Jenni; Rhind, Charlotte; Deriziotis, Stephanie; Hadjimichalis, Alexandra; Hibbs, Rebecca; Micali, Nadia; Treasure, Janet

    2015-01-01

    … The aim of this study is to examine distinct processes of social cognition in this patient group, including attentional processing and recognition, empathic reaction and evoked facial expression…

  16. PCA facial expression recognition

    Science.gov (United States)

    El-Hori, Inas H.; El-Momen, Zahraa K.; Ganoun, Ali

    2013-12-01

    This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. A comparative study of two Facial Expression Recognition (FER) techniques, Principal Component Analysis (PCA) and PCA with Gabor filters (GF), is carried out. The objective of this research is to show that PCA with Gabor filters is superior to the first technique in terms of recognition rate. To test and evaluate their performance, experiments were performed with both techniques on a real database. The five expressions to be recognized are the universally accepted principal emotions happy, sad, disgust, and angry, along with neutral. Recognition rates were obtained for all of these facial expressions.
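
    A compact sketch of the two feature pipelines being compared, PCA on raw pixels versus PCA on Gabor-filtered responses, follows. The face images are random placeholders, and the filter-bank size, kernel parameters, and component count are assumptions rather than the paper's settings.

        # Plain PCA features vs. Gabor-then-PCA features for FER.
        import cv2
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(4)
        faces = rng.random((100, 64, 64)).astype(np.float32)  # stand-in images

        # Small Gabor bank: four orientations.
        kernels = [cv2.getGaborKernel((21, 21), sigma=4.0, theta=t,
                                      lambd=10.0, gamma=0.5)
                   for t in np.linspace(0, np.pi, 4, endpoint=False)]

        def gabor_features(img):
            # Concatenate the responses of all filters into one feature vector.
            return np.concatenate([cv2.filter2D(img, -1, k).ravel() for k in kernels])

        raw_pca = PCA(n_components=20).fit_transform(faces.reshape(100, -1))
        gab = np.stack([gabor_features(f) for f in faces])
        gab_pca = PCA(n_components=20).fit_transform(gab)
        # The same classifier trained on raw_pca vs. gab_pca would then give the
        # recognition rates being compared.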

  17. Shared Gaussian Process Latent Variable Model for Multi-view Facial Expression Recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2013-01-01

    Facial-expression data often appear in multiple views either due to head-movements or the camera position. Existing methods for multi-view facial expression recognition perform classification of the target expressions either by using classifiers learned separately for each view or by using a single…

  18. Discriminative shared Gaussian processes for multi-view and view-invariant facial expression recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers…

  19. Discriminative shared Gaussian processes for multi-view and view-invariant facial expression recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2015-01-01

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers…

  20. Dissimilar processing of emotional facial expressions in human and monkey temporal cortex.

    Science.gov (United States)

    Zhu, Qi; Nelissen, Koen; Van den Stock, Jan; De Winter, François-Laurent; Pauwels, Karl; de Gelder, Beatrice; Vanduffel, Wim; Vandenbulcke, Mathieu

    2013-02-01

    Emotional facial expressions play an important role in social communication across primates. Despite major progress in our understanding of categorical information processing such as for objects and faces, little is known about how the primate brain evolved to process emotional cues. In this study, we used functional magnetic resonance imaging (fMRI) to compare the processing of emotional facial expressions between monkeys and humans. We used a 2×2×2 factorial design with species (human and monkey), expression (fear and chewing) and configuration (intact versus scrambled) as factors. At the whole-brain level, neural responses to conspecific emotional expressions were anatomically confined to the superior temporal sulcus (STS) in humans. Within the human STS, we found functional subdivisions, with a face-selective right posterior STS area that also responded to emotional expressions of other species and a more anterior area in the right middle STS that responded specifically to human emotions. Hence, we argue that the latter region does not show a mere emotion-dependent modulation of activity but is primarily driven by human emotional facial expressions. Conversely, in monkeys, emotional responses appeared in earlier visual cortex and outside face-selective regions in inferior temporal cortex that also responded to multiple visual categories. Within monkey IT, we also found areas that were more responsive to conspecific than to non-conspecific emotional expressions, but these responses were not as specific as in human middle STS. Overall, our results indicate that human STS may have developed unique properties to deal with social cues such as emotional expressions.

  1. Does Gaze Direction Modulate Facial Expression Processing in Children with Autism Spectrum Disorder?

    Science.gov (United States)

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…

  2. Neuroticism delays detection of facial expressions

    OpenAIRE

    Sawada, Reiko; Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Kubota, Yasutaka; Yoshimura, Sayaka; Toichi, Motomi

    2016-01-01

    The rapid detection of emotional signals from facial expressions is fundamental for human social interaction. The personality factor of neuroticism modulates the processing of various types of emotional facial expressions; however, its effect on the detection of emotional facial expressions remains unclear. In this study, participants with high- and low-neuroticism scores performed a visual search task to detect normal expressions of anger and happiness, and their anti-expressions within a cr...

  3. Assessing Pain by Facial Expression: Facial Expression as Nexus

    OpenAIRE

    Prkachin, Kenneth M.

    2009-01-01

    The experience of pain is often represented by changes in facial expression. Evidence of pain that is available from facial expression has been the subject of considerable scientific investigation. The present paper reviews the history of pain assessment via facial expression in the context of a model of pain expression as a nexus connecting internal experience with social influence. Evidence about the structure of facial expressions of pain across the lifespan is reviewed. Applications of fa...

  4. Neuroticism Delays Detection of Facial Expressions.

    Science.gov (United States)

    Sawada, Reiko; Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Kubota, Yasutaka; Yoshimura, Sayaka; Toichi, Motomi

    2016-01-01

    The rapid detection of emotional signals from facial expressions is fundamental for human social interaction. The personality factor of neuroticism modulates the processing of various types of emotional facial expressions; however, its effect on the detection of emotional facial expressions remains unclear. In this study, participants with high- and low-neuroticism scores performed a visual search task to detect normal expressions of anger and happiness, and their anti-expressions within a crowd of neutral expressions. Anti-expressions contained an amount of visual changes equivalent to those found in normal expressions compared to neutral expressions, but they were usually recognized as neutral expressions. Subjective emotional ratings in response to each facial expression stimulus were also obtained. Participants with high-neuroticism showed an overall delay in the detection of target facial expressions compared to participants with low-neuroticism. Additionally, the high-neuroticism group showed higher levels of arousal to facial expressions compared to the low-neuroticism group. These data suggest that neuroticism modulates the detection of emotional facial expressions in healthy participants; high levels of neuroticism delay overall detection of facial expressions and enhance emotional arousal in response to facial expressions.

  5. Decline of executive processes affects identification of emotional facial expression in aging.

    Science.gov (United States)

    García-Rodríguez, Beatriz; Fusari, Anna; Fernández-Guinea, Sara; Frank, Ana; Molina, José Antonio; Ellgring, Heiner

    2011-02-01

    The current study examined the hypothesis that old people have a selective deficit in the identification of emotional facial expressions (EFEs) when the task conditions require the mechanisms of the central executive. We used a dual-task (DT) paradigm to assess the role of visuo-spatial interference with working memory when processing emotional faces under two conditions: DT at encoding and DT at retrieval. Previous studies have revealed a loss of the ability to identify specific EFEs in old age. This has been consistently associated with a decline in the ability to coordinate the performance of two tasks concurrently. Working memory is usually tested using DT paradigms. Regarding aging, there is evidence that with DT performance during encoding, the costs are substantial. In contrast, the introduction of a secondary task after the primary task (i.e., at retrieval) had less detrimental effects on primary-task performance in either younger or older adults. Our results demonstrate that aging is associated with higher DT costs when EFEs are identified concurrently with a visuo-spatial task. In contrast, there was no significant age-related decline when the two tasks were presented sequentially. This suggests a deficit of the central executive rather than visuo-spatial memory deficits. The current data provide further support for the hypothesis that emotional processing is "top-down" controlled, and suggest that the deficits in emotional processing of old people depend, above all, on specific cognitive impairment.

  6. The influence of indirect and direct emotional processing on memory for facial expressions.

    Science.gov (United States)

    Patel, Ronak; Girard, Todd A; Green, Robin E A

    2012-01-01

    We used the remember-know procedure (Tulving, 1985) to test the behavioural expression of memory following indirect and direct forms of emotional processing at encoding. Participants (N=32) viewed a series of facial expressions (happy, fearful, angry, and neutral) while performing tasks involving either indirect (gender discrimination) or direct (emotion discrimination) emotion processing. After a delay, participants completed a surprise recognition memory test. Our results revealed that indirect encoding of emotion produced enhanced memory for fearful faces whereas direct encoding of emotion produced enhanced memory for angry faces. In contrast, happy faces were better remembered than neutral faces after both indirect and direct encoding tasks. These findings suggest that fearful and angry faces benefit from a recollective advantage when they are encoded in a way that is consistent with the predictive nature of their threat. We propose that the broad memory advantage for happy faces may reflect a form of cognitive flexibility that is specific to positive emotions.

  7. Perception of facial expression and facial identity in subjects with social developmental disorders.

    Science.gov (United States)

    Hefter, Rebecca L; Manoach, Dara S; Barton, Jason J S

    2005-11-22

    It has been hypothesized that the social dysfunction in social developmental disorders (SDDs), such as autism, Asperger disorder, and the socioemotional processing disorder, impairs the acquisition of normal face-processing skills. The authors investigated whether this purported perceptual deficit was generalized to both facial expression and facial identity or whether these different types of facial perception were dissociated in SDDs. They studied 26 adults with a variety of SDD diagnoses, assessing their ability to discriminate famous from anonymous faces, their perception of emotional expression from facial and nonfacial cues, and the relationship between these abilities. They also compared the performance of two defined subgroups of subjects with SDDs on expression analysis: one with normal and one with impaired recognition of facial identity. While perception of facial expression was related to the perception of nonfacial expression, the perception of facial identity was not related to either facial or nonfacial expression. Likewise, subjects with SDDs with impaired facial identity processing perceived facial expression as well as those with normal facial identity processing. The processing of facial identity and that of facial expression are dissociable in social developmental disorders. Deficits in perceiving facial expression may be related to emotional processing more than face processing. Dissociations between the perception of facial identity and facial emotion are consistent with current cognitive models of face processing. The results argue against hypotheses that the social dysfunction in social developmental disorder causes a generalized failure to acquire face-processing skills.

  8. Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Sato Wataru

    2012-08-01

    Full Text Available Abstract Background Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Results Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex–MTG–IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. Conclusions These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD.

  9. Fixation to features and neural processing of facial expressions in a gender discrimination task.

    Science.gov (United States)

    Neath, Karly N; Itier, Roxane J

    2015-10-01

    Early face encoding, as reflected by the N170 ERP component, is sensitive to fixation to the eyes. We investigated whether this sensitivity varies with facial expressions of emotion and whether it can also be seen on other ERP components such as the P1 and EPN. Using eye-tracking to manipulate fixation on facial features, we found the N170 to be the only eye-sensitive component, and this was true for fearful, happy and neutral faces. A different effect of fixation to features was seen for the earlier P1, which likely reflected a general sensitivity to face position. An early effect of emotion (∼120 ms) for happy faces was seen at occipital sites and was sustained until ∼350 ms post-stimulus. For fearful faces, an early effect was seen around 80 ms, followed by a later effect appearing at ∼150 ms and lasting until ∼300 ms at lateral posterior sites. Results suggest that in this emotion-irrelevant gender discrimination task, the processing of fearful and happy expressions occurred early and largely independently of the eye-sensitivity indexed by the N170. The processing of the two emotions involved different underlying brain networks active at different times.

  10. Does gaze direction modulate facial expression processing in children with autism spectrum disorder?

    Science.gov (United States)

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent motivational tendency (i.e., an avoidant facial expression with averted eye gaze) than those with an incongruent motivational tendency. Children with ASD (9-14 years old; n = 14) were not affected by the gaze direction of facial stimuli. This finding was replicated in Experiment 2, which presented only the eye region of the face to typically developing children (n = 10) and children with ASD (n = 10). These results demonstrated that children with ASD do not encode and/or integrate multiple communicative signals based on their affective or motivational tendency.

  11. Brain regions involved in processing facial identity and expression are differentially selective for surface and edge information.

    Science.gov (United States)

    Harris, Richard J; Young, Andrew W; Andrews, Timothy J

    2014-08-15

    Although different brain regions are widely considered to be involved in the recognition of facial identity and expression, it remains unclear how these regions process different properties of the visual image. Here, we ask how surface-based reflectance information and edge-based shape cues contribute to the perception and neural representation of facial identity and expression. Contrast-reversal was used to generate images in which normal contrast relationships across the surface of the image were disrupted, but edge information was preserved. In a behavioural experiment, contrast-reversal significantly attenuated judgements of facial identity, but had only a marginal effect on judgements of expression. An fMR-adaptation paradigm was then used to ask how brain regions involved in the processing of identity and expression responded to blocks comprising all normal, all contrast-reversed, or a mixture of normal and contrast-reversed faces. Adaptation in the posterior superior temporal sulcus--a region directly linked with processing facial expression--was relatively unaffected by mixing normal with contrast-reversed faces. In contrast, the response of the fusiform face area--a region linked with processing facial identity--was significantly affected by contrast-reversal. These results offer a new perspective on the reasons underlying the neural segregation of facial identity and expression, in which brain regions involved in processing invariant aspects of faces, such as identity, are highly sensitive to surface-based cues, whereas regions involved in processing changes in faces, such as expression, rely relatively more on edge-based cues.

  12. Biological Computation Indexes of Brain Oscillations in Unattended Facial Expression Processing Based on Event-Related Synchronization/Desynchronization

    Directory of Open Access Journals (Sweden)

    Bo Yu

    2016-01-01

    Full Text Available Estimation of human emotions from Electroencephalogram (EEG) signals plays a vital role in affective Brain Computer Interface (BCI). The present study investigated the different event-related synchronization (ERS) and event-related desynchronization (ERD) patterns of typical brain oscillations in processing facial expressions under a nonattentional condition. The results show that the lower-frequency bands are mainly used to update facial expressions and to distinguish deviant stimuli from standard ones, whereas the higher-frequency bands are relevant to automatically processing different facial expressions. Accordingly, we relate each brain oscillation to the processing of unattended facial expressions through the ERD and ERS measures. This research first reveals the contribution of each frequency band to the comprehension of facial expressions in the preattentive stage. It also provides evidence that participants have emotional experience under a nonattentional condition. Therefore, the user's emotional state under a nonattentional condition can be recognized in real time from the ERD/ERS computation indexes of different frequency bands of brain oscillations, which can be used in affective BCI to provide the user with more natural and friendly ways of interaction.
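
    For readers unfamiliar with the ERD/ERS computation indexes mentioned above, the classic band-power definition is ERD/ERS% = (A - R) / R * 100, where A is the band power in a post-stimulus test window and R the band power in a pre-stimulus reference window. The sketch below is a generic illustration under assumed data shapes, sampling rate and windows, not the study's actual pipeline:

      import numpy as np
      from scipy.signal import butter, filtfilt

      def erd_ers_percent(eeg, fs, band, ref_win, test_win):
          # eeg: (n_trials, n_samples) single-channel epochs; fs: sampling rate in Hz
          # band: (low, high) in Hz; ref_win/test_win: (start, stop) sample indices
          b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
          power = filtfilt(b, a, eeg, axis=-1) ** 2      # instantaneous band power
          avg = power.mean(axis=0)                       # average across trials
          r = avg[ref_win[0]:ref_win[1]].mean()          # reference-interval power R
          t = avg[test_win[0]:test_win[1]].mean()        # test-interval power A
          return (t - r) / r * 100.0                     # negative = ERD, positive = ERS

      # Example on synthetic data: 40 trials of 1 s at 250 Hz, alpha band (8-13 Hz).
      trials = np.random.randn(40, 250)
      print(erd_ers_percent(trials, fs=250, band=(8, 13),
                            ref_win=(0, 100), test_win=(125, 250)))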

  13. Shades of Emotion: What the Addition of Sunglasses or Masks to Faces Reveals about the Development of Facial Expression Processing

    Science.gov (United States)

    Roberson, Debi; Kikutani, Mariko; Doge, Paula; Whitaker, Lydia; Majid, Asifa

    2012-01-01

    Three studies investigated developmental changes in facial expression processing, between 3 years-of-age and adulthood. For adults and older children, the addition of sunglasses to upright faces caused an equivalent decrement in performance to face inversion. However, younger children showed "better" classification of expressions of faces wearing…

  14. The Effect of Gaze Direction on the Processing of Facial Expressions in Children with Autism Spectrum Disorder: An ERP Study

    Science.gov (United States)

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2010-01-01

    This study investigated the neural basis of the effect of gaze direction on facial expression processing in children with and without ASD, using event-related potential (ERP). Children with ASD (10-17-year olds) and typically developing (TD) children (9-16-year olds) were asked to determine the emotional expressions (anger or fearful) of a facial…

  15. Conscious and unconscious processing of facial expressions: evidence from two split-brain patients.

    Science.gov (United States)

    Prete, Giulia; D'Ascenzo, Stefania; Laeng, Bruno; Fabri, Mara; Foschi, Nicoletta; Tommasi, Luca

    2015-03-01

    We investigated how the brain's hemispheres process explicit and implicit facial expressions in two 'split-brain' patients (one with a complete and one with a partial anterior resection). Photographs of faces expressing positive, negative or neutral emotions were shown either centrally or bilaterally. The task consisted in judging the friendliness of each person in the photographs. Half of the photographic stimuli were 'hybrid faces', that is, amalgamations of filtered images containing emotional information only in the low range of spatial frequencies, blended with a neutral expression of the same individual in the remaining spatial frequencies. The other half of the images contained unfiltered faces. With the hybrid faces, the patients and a matched control group were more influenced in their social judgements by the emotional expression of the face shown in the left visual field (LVF). When the expressions were shown explicitly, that is without filtering, the control group and the partially callosotomized patient based their judgement on the face shown in the LVF, whereas the complete split-brain patient based his ratings mainly on the face presented in the right visual field. We conclude that the processing of implicit emotions does not require the integrity of callosal fibres and can take place within subcortical routes lateralized in the right hemisphere.

  16. Brain lateralization of holistic versus analytic processing of emotional facial expressions.

    Science.gov (United States)

    Calvo, Manuel G; Beltrán, David

    2014-05-15

    This study investigated the neurocognitive mechanisms underlying the role of the eye and the mouth regions in the recognition of facial happiness, anger, and surprise. To this end, face stimuli were shown in three formats (whole face, upper half visible, and lower half visible) and behavioral categorization, computational modeling, and ERP (event-related potentials) measures were combined. N170 (150-180 ms post-stimulus; right hemisphere) and EPN (early posterior negativity; 200-300 ms; mainly right hemisphere) were modulated by expression of whole faces, but not by separate halves. This suggests that expression encoding (N170) and emotional assessment (EPN) require holistic processing, mainly in the right hemisphere. In contrast, the mouth region of happy faces enhanced left temporo-occipital activity (150-180 ms), and also the LPC (late positive complex; centro-parietal) activity (350-450 ms), earlier than the angry eyes (450-600 ms) or other face regions. Relatedly, computational modeling revealed that the mouth region of happy faces was also visually salient by 150 ms following stimulus onset. This suggests that analytical or part-based processing of the salient smile occurs early (150-180 ms) and lateralized (left), and is subsequently used as a shortcut to identify the expression of happiness (350-450 ms). This would account for the happy face advantage in behavioral recognition tasks when the smile is visible.

  17. Happy facial expression processing with different social interaction cues: an fMRI study of individuals with schizotypal personality traits.

    Science.gov (United States)

    Huang, Jia; Wang, Yi; Jin, Zhen; Di, Xin; Yang, Tao; Gur, Ruben C; Gur, Raquel E; Shum, David H K; Cheung, Eric F C; Chan, Raymond C K

    2013-07-01

    In daily life facial expressions change rapidly, and the direction of change provides important clues about social interaction. The aim of this study was to elucidate dynamic happy facial expression processing under different social interaction cues in individuals with (n=14) and without (n=14) schizotypal personality disorder (SPD) traits. Using functional magnetic resonance imaging (fMRI), dynamic happy facial expression processing was examined by presenting video clips depicting happiness appearing and disappearing under happiness-inducing ('praise') or happiness-reducing ('blame') interaction cues. The happiness-appearing condition consistently elicited more brain activation than the happiness-disappearing condition in the posterior cingulate bilaterally in all participants. Further analyses showed that the SPD group was less deactivated than the non-SPD group in the right anterior cingulate cortex in the happiness appearing-disappearing contrast. The SPD group deactivated more than the non-SPD group in the left posterior cingulate and right superior temporal gyrus in the praise-blame contrast. Moreover, the incongruence of cues and facial expression activated the frontal-thalamus-caudate-parietal network, which is involved in emotion recognition and conflict resolution. These results shed light on the neural basis of social interaction deficits in individuals with schizotypal personality traits.

  18. Robust facial expression recognition via compressive sensing.

    Science.gov (United States)

    Zhang, Shiqing; Zhao, Xiaoming; Lei, Bicheng

    2012-01-01

    Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision and pattern recognition. In this paper, a new method based on CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method are investigated on clean and occluded facial expression images. Three typical facial features, i.e., raw pixels, Gabor wavelet representations and local binary patterns (LBP), are extracted to evaluate the performance of the SRC method. Experimental results on the popular Cohn-Kanade facial expression database demonstrate that, compared with the nearest neighbor (NN), linear support vector machines (SVM) and the nearest subspace (NS), the SRC method obtains better performance and stronger robustness to corruption and occlusion in facial expression recognition tasks.
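
    As a rough illustration of the sparse-representation-classifier idea described above, the sketch below reconstructs a test sample from a dictionary of training samples and assigns the class with the smallest reconstruction residual. The solver (orthogonal matching pursuit), the sparsity level and the synthetic data are illustrative assumptions; the paper's exact l1-minimization setup may differ:

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      def src_predict(train_X, train_y, test_x, n_nonzero=10):
          # Dictionary columns are l2-normalised training samples (features x samples).
          D = train_X / np.linalg.norm(train_X, axis=0, keepdims=True)
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
          omp.fit(D, test_x)                            # sparse code: test_x ~ D @ coef
          coef = omp.coef_
          residuals = {}
          for c in np.unique(train_y):
              part = np.where(train_y == c, coef, 0.0)  # keep only class-c coefficients
              residuals[c] = np.linalg.norm(test_x - D @ part)
          return min(residuals, key=residuals.get)      # smallest class residual wins

      # Toy demo: two classes with different means in a 50-d feature space.
      rng = np.random.default_rng(1)
      X = np.hstack([rng.normal(0, 1, (50, 30)), rng.normal(3, 1, (50, 30))])
      y = np.array([0] * 30 + [1] * 30)
      print(src_predict(X, y, rng.normal(3, 1, 50)))    # expected: 1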

  19. Interaction between facial expression and color

    OpenAIRE

    Kae Nakajima; Tetsuto Minami; Shigeki Nakauchi

    2017-01-01

    Facial color varies depending on emotional state, and emotions are often described in relation to facial color. In this study, we investigated whether the recognition of facial expressions was affected by facial color and vice versa. In the facial expression task, expression morph continua were employed: fear-anger and sadness-happiness. The morphed faces were presented in three different facial colors (bluish, neutral, and reddish color). Participants identified a facial expression between t...

  1. Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions

    Science.gov (United States)

    Sato, Wataru; Yoshikawa, Sakiko

    2007-01-01

    Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…

  2. Alexithymia and the processing of emotional facial expressions (EFEs): systematic review, unanswered questions and further perspectives.

    Science.gov (United States)

    Grynberg, Delphine; Chang, Betty; Corneille, Olivier; Maurage, Pierre; Vermeulen, Nicolas; Berthoz, Sylvie; Luminet, Olivier

    2012-01-01

    Alexithymia is characterized by difficulties in identifying, differentiating and describing feelings. A high prevalence of alexithymia has often been observed in clinical disorders characterized by low social functioning. This review aims to assess the association between alexithymia and the ability to decode emotional facial expressions (EFEs) within clinical and healthy populations. More precisely, this review has four main objectives: (1) to assess if alexithymia is a better predictor of the ability to decode EFEs than the diagnosis of clinical disorder; (2) to assess the influence of comorbid factors (depression and anxiety disorder) on the ability to decode EFEs; (3) to investigate if deficits in decoding EFEs are specific to some levels of processing or task types; (4) to investigate if the deficits are specific to particular EFEs. Twenty-four studies (behavioural and neuroimaging) were identified through a computerized literature search of Psycinfo, PubMed, and Web of Science databases from 1990 to 2010. Data on methodology, clinical characteristics, and possible confounds were analyzed. The review revealed that: (1) alexithymia is associated with deficits in labelling EFEs among clinical disorders, (2) the levels of depression and anxiety partially account for the decoding deficits, (3) alexithymia is associated with reduced perceptual abilities, and is likely to be associated with impaired semantic representations of emotional concepts, and (4) alexithymia is associated with neither specific EFEs nor a specific valence. These studies are discussed with respect to processes involved in the recognition of EFEs. Future directions for research on emotion perception are also discussed.

  3. Alexithymia and the processing of emotional facial expressions (EFEs): systematic review, unanswered questions and further perspectives.

    Directory of Open Access Journals (Sweden)

    Delphine Grynberg

    Full Text Available Alexithymia is characterized by difficulties in identifying, differentiating and describing feelings. A high prevalence of alexithymia has often been observed in clinical disorders characterized by low social functioning. This review aims to assess the association between alexithymia and the ability to decode emotional facial expressions (EFEs) within clinical and healthy populations. More precisely, this review has four main objectives: (1) to assess if alexithymia is a better predictor of the ability to decode EFEs than the diagnosis of clinical disorder; (2) to assess the influence of comorbid factors (depression and anxiety disorder) on the ability to decode EFEs; (3) to investigate if deficits in decoding EFEs are specific to some levels of processing or task types; (4) to investigate if the deficits are specific to particular EFEs. Twenty-four studies (behavioural and neuroimaging) were identified through a computerized literature search of Psycinfo, PubMed, and Web of Science databases from 1990 to 2010. Data on methodology, clinical characteristics, and possible confounds were analyzed. The review revealed that: (1) alexithymia is associated with deficits in labelling EFEs among clinical disorders, (2) the levels of depression and anxiety partially account for the decoding deficits, (3) alexithymia is associated with reduced perceptual abilities, and is likely to be associated with impaired semantic representations of emotional concepts, and (4) alexithymia is associated with neither specific EFEs nor a specific valence. These studies are discussed with respect to processes involved in the recognition of EFEs. Future directions for research on emotion perception are also discussed.

  4. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition.

    Science.gov (United States)

    de Gelder, Beatrice; Huis In 't Veld, Elisabeth M J; Van den Stock, Jan

    2015-01-01

    There are many ways to assess face perception skills. In this study, we describe a novel task battery, the FEAST (Facial Expressive Action Stimulus Test), developed to test recognition of the identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data from a healthy sample of controls in two age groups for future users of the FEAST.

  5. Subliminal and Supraliminal Processing of Facial Expression of Emotions: Brain Oscillation in the Left/Right Frontal Area

    Directory of Open Access Journals (Sweden)

    Michela Balconi

    2012-03-01

    Full Text Available The unconscious effects of an emotional stimulus have been highlighted by a vast amount of research; however, it remains an open question whether it is possible to assign a specific function to cortical brain oscillations in the unconscious perception of facial expressions of emotions. Alpha band variation was monitored within the right and left cortical sides while subjects consciously (supraliminal stimulation) or unconsciously (subliminal stimulation) processed facial patterns. Twenty subjects looked at six facial expressions of emotions (anger, fear, surprise, disgust, happiness, sadness, and neutral) under two different conditions: supraliminal (200 ms) vs. subliminal (30 ms) stimulation (140 target-mask pairs for each condition). The results showed that conscious/unconscious processing and the significance of the stimulus can modulate the alpha power. Moreover, an increased right frontal activity was found for negative emotions vs. an increased left response for positive emotions. The significance of facial expressions was adduced to elucidate the different cortical responses to emotion types.

  6. iFace: Facial Expression Training System

    OpenAIRE

    Ito, Kyoko; Kurose, Hiroyuki; Takami, Ai; Nishida, Shogo

    2008-01-01

    In this study, a facial expression training system and a target facial expression selection interface for that system were proposed and developed. Twelve female dentists used the facial expression training system, and evaluations and opinions about the system were obtained from these participants. In the future, we will attempt to improve both the target facial expression selection interface and the comparison of a current and a target f...

  7. Disconnection mechanism and regional cortical atrophy contribute to impaired processing of facial expressions and theory of mind in multiple sclerosis

    DEFF Research Database (Denmark)

    Mike, Andrea; Strammer, Erzsebet; Aradi, Mihaly

    2013-01-01

    … healthy controls. We assessed overall brain cortical thickness in patients with multiple sclerosis and the scanned healthy controls, and measured the total and regional T1 and T2 white matter lesion volumes in patients with multiple sclerosis. Performances in tests of recognition of mental states … and emotions from facial expressions and eye gazes correlated with both total T1-lesion load and regional T1-lesion load of association fiber tracts interconnecting cortical regions related to visual and emotion processing (genu and splenium of corpus callosum, right inferior longitudinal fasciculus, right … inferior fronto-occipital fasciculus, uncinate fasciculus). Both of these tests showed correlations with specific cortical areas involved in emotion recognition from facial expressions (right and left fusiform face area, frontal eye field), processing of emotions (right entorhinal cortex) and socially …

  8. Spontaneous Emotional Facial Expression Detection

    Directory of Open Access Journals (Sweden)

    Zhihong Zeng

    2006-08-01

    Full Text Available Change in a speaker’s emotion is a fundamental component in human communication. Automatic recognition of spontaneous emotion would significantly impact human-computer interaction and emotion-related studies in education, psychology and psychiatry. In this paper, we explore methods for detecting emotional facial expressions occurring in a realistic human conversation setting—the Adult Attachment Interview (AAI). Because non-emotional facial expressions have no distinct description and are expensive to model, we treat emotional facial expression detection as a one-class classification problem, which is to describe target objects (i.e., emotional facial expressions) and distinguish them from outliers (i.e., non-emotional ones). Our preliminary experiments on AAI data suggest that one-class classification methods can reach a good balance between cost (labeling and computing) and recognition performance by avoiding non-emotional expression labeling and modeling.
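
    A minimal sketch of the one-class formulation described above, using scikit-learn's OneClassSVM as a stand-in for the paper's one-class classifiers: the model is fitted on emotional-expression feature vectors only, and anything falling outside the learned boundary is treated as non-emotional. The features and data here are synthetic placeholders:

      import numpy as np
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(0)
      emotional = rng.normal(0.0, 1.0, size=(200, 32))          # stand-in feature vectors
      new_frames = np.vstack([rng.normal(0.0, 1.0, (10, 32)),   # emotional-like frames
                              rng.normal(4.0, 1.0, (10, 32))])  # likely outliers

      # nu bounds the fraction of training points treated as boundary violations.
      detector = OneClassSVM(kernel="rbf", nu=0.1, gamma="scale").fit(emotional)
      pred = detector.predict(new_frames)                       # +1 inlier, -1 outlier
      print("flagged as non-emotional:", int(np.sum(pred == -1)), "of", len(pred))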

  9. The MPI facial expression database--a validated database of emotional and conversational facial expressions.

    Directory of Open Access Journals (Sweden)

    Kathrin Kaulard

    Full Text Available The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural

  10. Holistic gaze strategy to categorize facial expression of varying intensities.

    Directory of Open Access Journals (Sweden)

    Kun Guo

    Full Text Available Using faces representing exaggerated emotional expressions, recent behavioural and eye-tracking studies have suggested a dominant role of individual facial features in transmitting diagnostic cues for decoding facial expressions. Considering that in everyday life we frequently view low-intensity expressive faces in which local facial cues are more ambiguous, we probably need to combine expressive cues from more than one facial feature to reliably decode naturalistic facial affects. In this study we applied a morphing technique to systematically vary the intensities of six basic facial expressions of emotion, and employed a self-paced expression categorization task to measure participants' categorization performance and associated gaze patterns. The analysis of pooled data from all expressions showed that increasing expression intensity improved categorization accuracy, shortened reaction time and reduced the number of fixations directed at faces. The proportion of fixations and viewing time directed at internal facial features (eyes, nose and mouth regions), however, was not affected by varying levels of intensity. Further comparison between individual facial expressions revealed that although proportional gaze allocation at individual facial features was quantitatively modulated by the viewed expressions, the overall gaze distribution in face viewing was qualitatively similar across different facial expressions and different intensities. It seems that we adopt a holistic viewing strategy to extract expressive cues from all internal facial features in the processing of naturalistic facial expressions.

  11. Cortical control of facial expression.

    Science.gov (United States)

    Müri, René M

    2016-06-01

    The present Review deals with the motor control of facial expressions in humans. Facial expressions are a central part of human communication. Emotional facial expressions have a crucial role in human nonverbal behavior, allowing a rapid transfer of information between individuals. Facial expressions can be either voluntarily or emotionally controlled. Recent studies in nonhuman primates and humans have revealed that the motor control of facial expressions has a distributed neural representation. At least five cortical regions on the medial and lateral aspects of each hemisphere are involved: the primary motor cortex, the ventral lateral premotor cortex, the supplementary motor area on the medial wall, and the rostral and caudal cingulate cortex. The results of studies in humans and nonhuman primates suggest that the innervation of the face is bilaterally controlled for the upper part and mainly contralaterally controlled for the lower part. Furthermore, the primary motor cortex, the ventral lateral premotor cortex, and the supplementary motor area are essential for the voluntary control of facial expressions. In contrast, the cingulate cortical areas are important for emotional expression, because they receive input from different structures of the limbic system.

  12. Facial Expression at Retrieval Affects Recognition of Facial Identity

    Directory of Open Access Journals (Sweden)

    Wenfeng Chen

    2015-06-01

    Full Text Available It is well known that memory can be modulated by emotional stimuli at the time of encoding and consolidation. For example, happy faces create better identity recognition than faces with certain other expressions. However, the influence of facial expression at the time of retrieval remains unknown in the literature. To separate the potential influence of expression at retrieval from its effects at earlier stages, we had participants learn neutral faces but manipulated facial expression at the time of memory retrieval in a standard old/new recognition task. The results showed a clear effect of facial expression, where happy test faces were identified more successfully than angry test faces. This effect is unlikely to be due to greater image similarity between the neutral learning face and the happy test face, because image analysis showed that the happy test faces are in fact less similar to the neutral learning faces than the angry test faces. In the second experiment, we investigated whether this emotional effect is influenced by the expression at the time of learning. We employed angry or happy faces as learning stimuli, and angry, happy, and neutral faces as test stimuli. The results showed that the emotional effect at retrieval is robust across different encoding conditions with happy or angry expressions. These findings indicate that emotional expressions affect the retrieval process in identity recognition, and that identity recognition does not rely on an emotional association between learning and test faces.

  13. Compound facial expressions of emotion.

    Science.gov (United States)

    Du, Shichuan; Tao, Yong; Martinez, Aleix M

    2014-04-15

    Understanding the different categories of facial expressions of emotion that we regularly use is essential to gain insights into human cognition and affect, as well as for the design of computational models and perceptual interfaces. Past research on facial expressions of emotion has focused on the study of six basic categories--happiness, surprise, anger, sadness, fear, and disgust. However, many more facial expressions of emotion exist and are used regularly by humans. This paper describes an important group of expressions, which we call compound emotion categories. Compound emotions are those that can be constructed by combining basic component categories to create new ones. For instance, happily surprised and angrily surprised are two distinct compound emotion categories. The present work defines 21 distinct emotion categories. Sample images of their facial expressions were collected from 230 human subjects. A Facial Action Coding System analysis shows that the production of these 21 categories is different but consistent with the subordinate categories they represent (e.g., a happily surprised expression combines muscle movements observed in happiness and surprise). We show that these differences are sufficient to distinguish between the 21 defined categories. We then use a computational model of face perception to demonstrate that most of these categories are also visually discriminable from one another.

  14. Darwin, deception, and facial expression.

    Science.gov (United States)

    Ekman, Paul

    2003-12-01

    Darwin did not focus on deception. Only a few sentences in his book mentioned the issue. One of them raised the very interesting question of whether it is difficult to voluntarily inhibit the emotional expressions that are most difficult to voluntarily fabricate. Another suggestion was that it would be possible to unmask a fabricated expression by the absence of the difficult-to-voluntarily-generate facial actions. Still another was that during emotion body movements could be more easily suppressed than facial expression. Research relevant to each of Darwin's suggestions is reviewed, as is other research on deception that Darwin did not foresee.

  15. Simultaneous facial feature tracking and facial expression recognition.

    Science.gov (United States)

    Li, Yongqiang; Wang, Shangfei; Zhao, Yongping; Ji, Qiang

    2013-07-01

    The tracking and recognition of facial activities from images or videos have attracted great attention in the computer vision field. Facial activities are characterized at three levels. First, at the bottom level, facial feature points around each facial component, i.e., eyebrow, mouth, etc., capture the detailed face shape information. Second, at the middle level, facial action units, defined in the facial action coding system, represent the contraction of a specific set of facial muscles, i.e., lid tightener, eyebrow raiser, etc. Finally, at the top level, six prototypical facial expressions represent the global facial muscle movement and are commonly used to describe the human emotion states. In contrast to the mainstream approaches, which usually only focus on one or two levels of facial activities and track (or recognize) them separately, this paper introduces a unified probabilistic framework based on the dynamic Bayesian network to simultaneously and coherently represent the facial evolvement at different levels, their interactions and their observations. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, all three levels of facial activities are simultaneously recognized through a probabilistic inference. Extensive experiments are performed to illustrate the feasibility and effectiveness of the proposed model on all three levels of facial activities.

  16. Judgment of facial expressions and depression persistence

    NARCIS (Netherlands)

    Hale, WW

    1998-01-01

    Research has demonstrated that cognitive and interpersonal processes play significant roles in the development and persistence of depression. The judgment of emotions displayed in facial expressions by depressed patients allows for a better understanding of these processes. In this study, 48 maj

  17. Altering sensorimotor feedback disrupts visual discrimination of facial expressions.

    Science.gov (United States)

    Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula

    2016-08-01

    Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual (and not just conceptual) processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

  18. Facial Expression Synthesis Based on Imitation

    OpenAIRE

    Yihjia Tsai; Hwei Jen Lin; Fu Wen Yang

    2012-01-01

    It is an interesting and challenging problem to synthesise vivid facial expression images. In this paper, we propose a facial expression synthesis system which imitates a reference facial expression image according to the difference between shape feature vectors of the neutral image and expression image. To improve the result, two stages of postprocessing are involved. We focus on the facial expressions of happiness, sadness, and surprise. Experimental results show vivid and flexible results.

  19. Exploiting facial expressions for affective video summarisation

    NARCIS (Netherlands)

    Joho, H.; Jose, J.M.; Valenti, R.; Sebe, N.; Marchand-Maillet, S.; Kompatsiaris, I.

    2009-01-01

    This paper presents an approach to affective video summarisation based on the facial expressions (FX) of viewers. A facial expression recognition system was deployed to capture a viewer's face and his/her expressions. The user's facial expressions were analysed to infer personalised affective scenes

  20. Facial Expression Recognition Using SVM Classifier

    OpenAIRE

    2015-01-01

    Facial feature tracking and facial action recognition from image sequences have attracted great attention in the computer vision field. Computational facial expression analysis is a challenging research topic in computer vision. It is required by many applications such as human-computer interaction, computer graphics animation and automatic facial expression recognition. In recent years, plenty of computer vision techniques have been developed to track or recognize the facial activities in three levels...
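
    Since the abstract gives no implementation details, the sketch below is only a generic illustration of the kind of multi-class SVM pipeline this record describes, trained here on flattened stand-in images; real expression-labelled images and stronger features would replace the synthetic data:

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split

      # Synthetic stand-ins: 300 "images" of 48x48 pixels, 6 expression classes.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 48 * 48))
      y = rng.integers(0, 6, size=300)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
      clf.fit(X_tr, y_tr)
      print("held-out accuracy:", clf.score(X_te, y_te))  # ~chance on random data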

  1. Mapping and Manipulating Facial Expression

    Science.gov (United States)

    Theobald, Barry-John; Matthews, Iain; Mangini, Michael; Spies, Jeffrey R.; Brick, Timothy R.; Cohn, Jeffrey F.; Boker, Steven M.

    2009-01-01

    Nonverbal visual cues accompany speech to supplement the meaning of spoken words, signify emotional state, indicate position in discourse, and provide back-channel feedback. This visual information includes head movements, facial expressions and body gestures. In this article we describe techniques for manipulating both verbal and nonverbal facial…

  2. Automatic Facial Expression Analysis A Survey

    Directory of Open Access Journals (Sweden)

    C.P. Sumathi

    2013-01-01

    Full Text Available Automatic facial expression recognition has been one of the most active research topics since the 1990s. There have been recent advances in detecting faces and in recognizing and classifying facial expressions. Multiple methods have been devised for facial feature extraction, which help in identifying faces and facial expressions. This paper surveys some of the published work from 2003 till date. Various methods are analysed to identify facial expressions. The paper also discusses facial parameterization using Facial Action Coding System (FACS) action units, and the methods which recognize the action-unit parameters from extracted facial expression data. Various kinds of facial expressions are present in the human face which can be identified based on their geometric features, appearance features and hybrid features. The two basic concepts of extracting features are based on facial deformation and facial motion. This article also identifies techniques based on the characteristics of expressions and classifies the suitable methods that can be implemented.

  3. Misrecognition of facial expressions in delinquents

    Directory of Open Access Journals (Sweden)

    Matsuura Naomi

    2009-09-01

    Full Text Available Abstract Background Previous reports have suggested impairment in facial expression recognition in delinquents, but controversy remains with respect to how such recognition is impaired. To address this issue, we investigated facial expression recognition in delinquents in detail. Methods We tested 24 male adolescent/young adult delinquents incarcerated in correctional facilities. We compared their performances with those of 24 age- and gender-matched control participants. Using standard photographs of facial expressions illustrating six basic emotions, participants matched each emotional facial expression with an appropriate verbal label. Results Delinquents were less accurate in the recognition of facial expressions that conveyed disgust than were control participants. The delinquents misrecognized the facial expressions of disgust as anger more frequently than did controls. Conclusion These results suggest that one of the underpinnings of delinquency might be impaired recognition of emotional facial expressions, with a specific bias toward interpreting disgusted expressions as hostile angry expressions.

  4. Realistic facial animation generation based on facial expression mapping

    Science.gov (United States)

    Yu, Hui; Garrod, Oliver; Jack, Rachael; Schyns, Philippe

    2014-01-01

    Facial expressions reflect the internal emotional states of a character or arise in response to social communication. Though much effort has been devoted to generating realistic facial expressions, this remains a challenging topic due to humans' sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation which reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames, based on FACS, to the conformed target face. Then dynamic parameters derived using a psychophysical method are integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.

  5. Facial age affects emotional expression decoding

    Directory of Open Access Journals (Sweden)

    Mara Fölster

    2014-02-01

    Full Text Available Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and folds may render facial expressions of older adults harder to decode. In this paper, we review theoretical frameworks and empirical findings on age effects on decoding emotional expressions, with an emphasis on age-of-face effects. We conclude that the age of the face plays an important role for facial expression decoding. Lower expressivity, age-related changes in the face, less elaborated emotion schemas for older faces, negative attitudes toward older adults, and different visual scan patterns and neural processing of older than younger faces may lower decoding accuracy for older faces. Furthermore, age-related stereotypes and age-related changes in the face may bias the attribution of specific emotions such as sadness to older faces.

  6. Facial age affects emotional expression decoding.

    Science.gov (United States)

    Fölster, Mara; Hess, Ursula; Werheid, Katja

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and folds may render facial expressions of older adults harder to decode. In this paper, we review theoretical frameworks and empirical findings on age effects on decoding emotional expressions, with an emphasis on age-of-face effects. We conclude that the age of the face plays an important role for facial expression decoding. Lower expressivity, age-related changes in the face, less elaborated emotion schemas for older faces, negative attitudes toward older adults, and different visual scan patterns and neural processing of older than younger faces may lower decoding accuracy for older faces. Furthermore, age-related stereotypes and age-related changes in the face may bias the attribution of specific emotions such as sadness to older faces.

  7. Neurocognitive processing of emotion facial expressions in individuals with self-reported depressive symptoms: the role of personality and anxiety.

    Science.gov (United States)

    Mardaga, S; Iakimova, G

    2014-11-01

    Neurocognition may constitute one of the numerous factors that mediate the reciprocal influences between personality and depression. The present study explored the influence of personality and anxiety traits on the neurocognitive processing of emotional faces, and specifically focused on personal characteristics related to negative (harm avoidance - HA) and positive affectivity (self-directedness - SD) and to anxiety. Twenty participants with self-reported depressive symptoms and 18 control participants were selected based on their BDI-II scores. Personality (TCI-R), anxiety and attention were measured, and event-related potentials (ERPs) were recorded during an implicit emotional face perception task (fear, sadness, happiness, neutrality). The participants who self-reported depressive symptoms had higher HA, lower SD and higher anxiety compared to controls. Controls showed enhanced P300 and LPP amplitudes for fear. Individuals with self-reported depression showed reduced ERP amplitudes for happiness. HA did not account for the difference between the groups, but high HA and high anxiety were positively correlated with enhanced P300 amplitude for fear in participants with depressive symptoms. In contrast, SD accounted for the difference between the groups but was not correlated with the amplitudes of the ERP components recorded for facial expressions. Other personality dimensions (reward dependence, cooperativeness) influenced the ERPs recorded for facial emotions. Personality dimensions influence the neurocognitive processing of emotional faces in individuals with self-reported depressive symptoms, which may constitute a cognitive vulnerability to depression.

  8. Differences in the Processing of Positive- and Negative-Valence Facial Expression Pictures

    Institute of Scientific and Technical Information of China (English)

    隋雪; 纪雅婷; 陈欣; 任桂琴

    2015-01-01

    The experiment adopted an independent-presentation paradigm to show positive- and negative-valence facial expression pictures, in order to explore differences in the recognition of positive and negative facial expressions. The presentation time and the position of the cue were controlled, and an eye-tracker was used to record eye-movement indexes during the identification process. Results were as follows: 1) In recognition speed and accuracy, the processing of positive facial expression pictures exceeded that of negative ones, showing a positive-expression advantage. 2) Presentation time did not change the difference between the processing of positive- and negative-valence expression pictures. 3) There was a cue-position effect: cues around the mouth facilitated facial expression recognition. 4) Facial expression recognition followed an "eye-to-mouth-to-eye" pattern. The results suggest that the processing mechanisms of different facial expressions differ, and that the effect of processing depth is smaller than that of cue position.

  9. Neural processing of fearful and happy facial expressions during emotion-relevant and emotion-irrelevant tasks: A fixation-to-feature approach.

    Science.gov (United States)

    Neath-Tavares, Karly N; Itier, Roxane J

    2016-09-01

    Research suggests an important role of the eyes and mouth for discriminating facial expressions of emotion. A gaze-contingent procedure was used to test the impact of fixation to facial features on the neural response to fearful, happy and neutral facial expressions in an emotion discrimination (Exp. 1) and an oddball detection (Exp. 2) task. The N170 was the only eye-sensitive ERP component, and this sensitivity did not vary across facial expressions. In both tasks, compared to neutral faces, responses to happy expressions were seen as early as 100-120 ms occipitally, while responses to fearful expressions started around 150 ms, on or after the N170, at both occipital and lateral-posterior sites. Analyses of scalp topographies revealed different distributions of these two emotion effects across most of the epoch. Emotion processing interacted with fixation location at different times between tasks. Results suggest a role of both the eyes and mouth in the neural processing of fearful expressions and of the mouth in the processing of happy expressions, before 350 ms.

  10. Man-machine collaboration using facial expressions

    Science.gov (United States)

    Dai, Ying; Katahera, S.; Cai, D.

    2002-09-01

    For realizing flexible man-machine collaboration, understanding of facial expressions and gestures is essential. We proposed a hierarchical recognition approach for the understanding of human emotions. According to this method, facial AFs (action features) are firstly extracted and recognized by using histograms of optical flow. Then, based on the facial AFs, facial expressions are classified into two classes, one of which represents the positive emotions and the other the negative ones. Accordingly, the facial expressions belonging to the positive class, or those belonging to the negative class, are further classified into more complex emotions revealed by the corresponding facial expressions. Finally, the system architecture for coordinating the recognition of facial action features and facial expressions for man-machine collaboration is proposed.
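
    The histogram-of-optical-flow step in this hierarchical approach can be sketched as follows, using OpenCV's Farnebäck dense flow on two synthetic frames; the exact flow algorithm and histogram binning used by the authors are not specified, so these are illustrative choices:

      import numpy as np
      import cv2

      prev = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
      curr = np.roll(prev, 3, axis=0)          # fake vertical motion between frames

      # Dense optical flow (Farneback); positional args: pyr_scale, levels,
      # winsize, iterations, poly_n, poly_sigma, flags.
      flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
      mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

      # 8-bin orientation histogram weighted by magnitude: a simple action feature.
      hist, _ = np.histogram(ang, bins=8, range=(0, 2 * np.pi), weights=mag)
      print(hist / (hist.sum() + 1e-9))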

  11. Enhanced subliminal emotional responses to dynamic facial expressions

    Directory of Open Access Journals (Sweden)

    Wataru Sato

    2014-09-01

    Full Text Available Emotional processing without conscious awareness plays an important role in human social interaction. Several behavioral studies reported that subliminal presentation of photographs of emotional facial expressions induces unconscious emotional processing. However, it was difficult to elicit strong and robust effects using this method. We hypothesized that dynamic presentations of facial expressions would enhance subliminal emotional effects and tested this hypothesis with two experiments. Fearful or happy facial expressions were presented dynamically or statically in either the left or the right visual field for 20 ms (Experiment 1) or 30 ms (Experiment 2). Nonsense target ideographs were then presented, and participants reported their preference for them. The results consistently showed that dynamic presentations of emotional facial expressions induced more evident emotional biases toward subsequent targets than did static ones. These results indicate that dynamic presentations of emotional facial expressions induce more evident unconscious emotional processing.

  12. Recognizing Action Units for Facial Expression Analysis.

    Science.gov (United States)

    Tian, Ying-Li; Kanade, Takeo; Cohn, Jeffrey F

    2001-02-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams.
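
    To make the AU-to-expression relationship concrete, here is a toy mapping from detected action units to prototypic expressions. The AU combinations follow commonly cited EMFACS-style prototypes and are illustrative assumptions; they are not the exact rules or outputs of the AFA system described above:

      # Commonly cited AU combinations for prototypic expressions (illustrative).
      PROTOTYPES = {
          "happiness": {6, 12},
          "surprise": {1, 2, 5, 26},
          "sadness": {1, 4, 15},
          "anger": {4, 5, 7, 23},
          "disgust": {9, 15, 16},
      }

      def expression_from_aus(detected):
          # Score each prototype by the fraction of its AUs found in the detection.
          scores = {emo: len(aus & detected) / len(aus)
                    for emo, aus in PROTOTYPES.items()}
          best = max(scores, key=scores.get)
          return best if scores[best] >= 0.5 else "neutral/other"

      print(expression_from_aus({6, 12, 25}))   # -> happiness
      print(expression_from_aus({4, 5, 7}))     # -> anger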

  13. Facial expression recognition using thermal image.

    Science.gov (United States)

    Jiang, Guotai; Song, Xuemin; Zheng, Fuhui; Wang, Peipei; Omer, Ashgan

    2005-01-01

    In this paper, facial expression recognition is studied using mathematical morphology, by extracting and analyzing the overall geometric characteristics and selected geometric characteristics of the regions of interest in Infrared Thermal Imaging (IRTI). The results show that the geometric characteristics of the regions of interest differ markedly across expressions, and that facial temperature changes almost simultaneously with the expression. These studies demonstrate the feasibility of facial expression recognition based on IRTI. The method can be used to monitor facial expressions in real time, with applications in auxiliary diagnosis and medical monitoring of disease.

  14. Social Use of Facial Expressions in Hylobatids.

    Directory of Open Access Journals (Sweden)

    Linda Scheider

    Full Text Available Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS. We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social contexts the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than non-social contexts where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely 'responded to' by the partner's facial expressions when facing another individual than non-facing. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics.

  15. Positive and negative facial emotional expressions: the effect on infants' and children's facial identity recognition

    OpenAIRE

    Brenna,

    2013-01-01

    The aim of the present study was to investigate the origin and development of the interdependence between identity recognition and facial emotional expression processing, suggested by recent models of face processing (Calder & Young, 2005) and supported by findings in adults (e.g. Baudouin, Gilibert, Sansone, & Tiberghien, 2000; Schweinberger & Soukup, 1998). In particular, the effect of facial emotional expressions on infants’ and children’s ability to recognize the identity of a face was explored...

  16. Automatic recognition of emotions from facial expressions

    Science.gov (United States)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificial intelligence (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in the entertainment industry, and to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing accuracy. In addition, we have enhanced SVM to handle multiple dimensions while retaining its high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
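
    A rough sketch of the kind of pipeline the record describes (resize, apply a set of filters, classify with an SVM), assuming OpenCV and scikit-learn; the Gabor parameters and image size are illustrative assumptions, not the authors' faster method itself:

      import cv2
      import numpy as np
      from sklearn.svm import SVC

      def extract_features(img, size=(48, 48)):
          gray = cv2.resize(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), size)
          feats = []
          for theta in np.arange(0, np.pi, np.pi / 4):   # 4 Gabor orientations
              kern = cv2.getGaborKernel((9, 9), 2.0, theta, 8.0, 0.5)
              feats.append(cv2.filter2D(gray, cv2.CV_32F, kern).ravel())
          return np.concatenate(feats)

      # With JAFFE images and their seven expression labels:
      # X = np.vstack([extract_features(img) for img in images])
      # clf = SVC(kernel='rbf').fit(X, y)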

  17. Neural Mechanism of Facial Expression Perception in Intellectually Gifted Adolescents

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan

    2015-01-01

    The current study investigated the relationship between general intelligence and the three stages of facial expression processing. Two groups of adolescents with different levels of general intelligence were required to identify three types of facial expressions (happy, sad, and neutral faces...... average IQ adolescents during the facial expression identification task. The electrophysiological responses showed that no significant IQ-related differences were found for P1 responses during the early visual processing stage. During the middle processing stage, high IQ adolescents had faster structural...... attentional modulation, with larger late positive potential (LPP) amplitudes compared to adolescents with average IQ. The current study revealed that adolescents with different intellectual levels used different neural dynamic processes during these three stages in the processing of facial expressions....

  18. Rapid influence of emotional scenes on encoding of facial expressions: An ERP study

    National Research Council Canada - National Science Library

    Righart, R. G. R; de Gelder, B

    2008-01-01

    ... to discriminate between facial expressions. The aim of the present study was to investigate how the early stages of face processing are affected by emotional scenes when explicit categorizations of fearful and happy facial expressions are made...

  19. Mutual information-based facial expression recognition

    Science.gov (United States)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative-region representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using the Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) on the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face while reducing the feature vector dimension.
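
    A minimal sketch of the two ingredients named in the record, LBP computed on a gradient image plus mutual-information-based region selection, assuming OpenCV, scikit-image and scikit-learn; the gradient operator, block grid and LBP parameters are assumptions:

      import cv2
      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.feature_selection import mutual_info_classif

      def lbp_on_gradient(gray, regions=4, points=8, radius=1):
          gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)       # gradient image
          gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
          grad = cv2.magnitude(gx, gy)
          lbp = local_binary_pattern(grad, points, radius, method='uniform')
          h, w = lbp.shape
          feats = []
          for i in range(regions):                     # per-region histograms
              for j in range(regions):
                  block = lbp[i*h//regions:(i+1)*h//regions,
                              j*w//regions:(j+1)*w//regions]
                  hist, _ = np.histogram(block, bins=points + 2,
                                         range=(0, points + 2), density=True)
                  feats.append(hist)
          return np.concatenate(feats)

      # Rank region features by mutual information with the expression label:
      # mi = mutual_info_classif(X, y); top = np.argsort(mi)[::-1][:k]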

  20. Concurrent development of facial identity and expression discrimination

    OpenAIRE

    Dalrymple, Kirsten A.; Visconti di Oleggio Castello, Matteo; Elison, Jed T.; Gobbini, M. Ida

    2017-01-01

    Facial identity and facial expression processing both appear to follow a protracted developmental trajectory, yet these trajectories have been studied independently and have not been directly compared. Here we investigated whether these processes develop at the same or different rates using matched identity and expression discrimination tasks. The Identity task begins with a target face that is a morph between two identities (Identity A/Identity B). After a brief delay, the target face is rep...

  1. Reduced facial expressiveness in Parkinson's disease: A pure motor disorder?

    Science.gov (United States)

    Ricciardi, Lucia; Bologna, Matteo; Morgante, Francesca; Ricciardi, Diego; Morabito, Bruno; Volpe, Daniele; Martino, Davide; Tessitore, Alessandro; Pomponi, Massimiliano; Bentivoglio, Anna Rita; Bernabei, Roberto; Fasano, Alfonso

    2015-11-15

    Impaired emotional facial expressiveness is an important feature of Parkinson's disease (PD). Although there is evidence of a possible relationship between reduced facial expressiveness and altered emotion recognition or imagery in PD, it is unknown whether other aspects of emotional processing, such as subjective emotional experience (alexithymia), might influence hypomimia in this condition. In this study we aimed to investigate the possible relationship between reduced facial expressiveness and altered emotion processing (including facial recognition and alexithymia) in patients with PD. Forty PD patients and seventeen healthy controls were evaluated. Facial expressiveness was rated on video recordings, according to UPDRS-III item 19 and using an ad hoc scale assessing static and dynamic facial expression and posed emotions. Six blind raters evaluated the patients' videos. Facial emotion recognition was tested using the Ekman test; alexithymia was assessed using the Toronto Alexithymia Scale (TAS-20). PD patients had significantly reduced static and dynamic facial expressiveness and a deficit in posing happiness and surprise. They performed significantly worse than healthy controls in recognizing surprise (p=0.03). The Ekman total score correlated positively with global expressiveness (R^2=0.39, p=0.01) and with the expressiveness of disgust (R^2=0.32, p=0.01). The occurrence of alexithymia did not differ between PD patients and healthy controls; however, a significant negative correlation was found between the expressiveness of disgust and a TAS subscore (R^2=-.447, p=0.007). Reduced facial expressiveness in PD may be in part related to difficulties with emotion recognition in the context of an unimpaired subjective emotional experience. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Facial expressions recognition with an emotion expressive robotic head

    Science.gov (United States)

    Doroftei, I.; Adascalitei, F.; Lefeber, D.; Vanderborght, B.; Doroftei, I. A.

    2016-08-01

    The purpose of this study is to present the preliminary steps in facial expressions recognition with a new version of an expressive social robotic head. So, in a first phase, our main goal was to reach a minimum level of emotional expressiveness in order to obtain nonverbal communication between the robot and human by building six basic facial expressions. To evaluate the facial expressions, the robot was used in some preliminary user studies, among children and adults.

  3. Emotional facial expression processing in depression: data from behavioral and event-related potential studies.

    Science.gov (United States)

    Delle-Vigne, D; Wang, W; Kornreich, C; Verbanck, P; Campanella, S

    2014-04-01

    Behavioral literature investigating emotional processes in depressive populations (i.e., unipolar and bipolar depression) states that, compared to healthy controls, depressive subjects exhibit disrupted emotional processing, indexed by lower performance and/or delayed response latencies. The development of brain imaging techniques, such as functional magnetic resonance imaging (fMRI), provided the possibility to visualize the brain regions engaged in emotional processes and how they fail to interact in psychiatric diseases. However, fMRI suffers from poor temporal resolution and cognitive function involves various steps and cognitive stages (serially or in parallel) to give rise to a normal performance. Thus, the origin of a behavioral deficit may result from the alteration of a cognitive stage differently situated along the information-processing stream, outlining the importance of access to this dynamic "temporal" information. In this paper, we will illustrate, through depression, the role that should be attributed to cognitive event-related potentials (ERPs). Indeed, owing to their optimal temporal resolution, ERPs can monitor the neural processes engaged in disrupted cognitive function and provide crucial information for its treatment, training of the impaired cognitive functions and guidelines for clinicians in the choice and monitoring of appropriate medication for the patient. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  4. Expression and communication of doubt/uncertainty through facial expression

    Directory of Open Access Journals (Sweden)

    Pio E. Ricci Bitti

    2014-04-01

    Full Text Available There is a wide debate on the mental state of doubt/uncertainty; one wonders whether it is a predominantly cognitive or emotional state of mind and whether typical facial expressions communicate doubt/uncertainty. To this purpose, through a role-playing procedure, a large sample of expressions was collected and afterwards evaluated through a combination of encoding and decoding procedures, including FACS (Facial Action Coding System) analysis. The results have partially confirmed our hypothesis, identifying two typical facial expressions of doubt/uncertainty, which share the same facial actions in the inferior part of the face and show differential facial actions in the upper face.

  5. Scenes Differing in Spatial Frequencies Affect Facial Expression Processing: Evidence from ERP

    Institute of Scientific and Technical Information of China (English)

    杨亚平; 徐强; 张林; 邓培状; 梁宁建

    2015-01-01

    Facial expressions are fundamental emotional stimuli as they convey important information in social interaction. Most empirical research on facial expression processing has focused on isolated faces. But in everyday life, faces are embedded in surrounding context. For example, fearful faces are typically accompanied by tense bodies, and happy faces appear at birthday parties more often than in sickrooms. The scenes in which faces are embedded provide typical visual context. Recently some studies attempted to investigate the influence of emotional scenes on facial expression processing. Although a few previous studies in this field demonstrated that scene effects on facial expression processing exist, these studies did not further explore the specific processing mechanism of the scene effects. Because of its excellent temporal resolution, the present study used event-related potentials (ERPs) to investigate the effects of scenes containing different spatial frequencies on facial expression processing. Our hypothesis was that the different spatial frequencies of scenes affect facial expression processing in different ways. Eighteen right-handed college students (11 females; age range 17~24 years; mean age 20.67±1.91 years) were paid to participate in the experiment. Thirty-two face pictures (16 females and 16 males) with fearful and neutral expressions and thirty-two scene pictures (16 negative scenes and 16 neutral scenes) were presented. Spatial frequency content in the original scene stimuli (broad-band, BSF) was filtered using a high-pass cut-off that was > 16 cpi for the higher spatial frequency (HSF) scene stimuli, and a low-pass cut-off of 0.1). Our ERP results showed that for scenes with broad-band spatial frequency, fearful faces appearing in neutral scenes elicited larger N170 amplitudes than the same faces appearing in negative scenes, in both right and left hemispheres. But the effects were not found for scenes with high and low spatial frequencies. In

  6. Facial Expression Spatial Charts for Describing Dynamic Diversity of Facial Expressions

    Directory of Open Access Journals (Sweden)

    H. Madokoro

    2012-08-01

    Full Text Available This paper presents a new framework to describe individual facial expression spaces, particularly addressing the dynamic diversity of facial expressions that appear as an exclamation or emotion, to create a unique space for each person. We name this framework Facial Expression Spatial Charts (FESCs). The FESCs are created using Self-Organizing Maps (SOMs) and Fuzzy Adaptive Resonance Theory (ART) of unsupervised neural networks. For facial images with emphasized sparse representations using Gabor wavelet filters, SOMs extract topological information in facial expression images and classify them as categories in the fixed space that are decided by the number of units on the mapping layer. Subsequently, Fuzzy ART integrates categories classified by SOMs using adaptive learning functions under fixed granularity that is controlled by the vigilance parameter. The categories integrated by Fuzzy ART are matched to Expression Levels (ELs) for quantifying facial expression intensity based on the arrangement of facial expressions on Russell’s circumplex model. We designate the category that contains the neutral facial expression as the basis category. FESCs can thus visualize and represent the dynamic diversity of facial expressions consisting of ELs extracted from facial expressions. In the experiment, we created an original facial expression dataset consisting of three facial expressions—happiness, anger, and sadness—obtained from 10 subjects during 7–20 weeks at one-week intervals. Results show that the method can adequately display the dynamic diversity of facial expressions between subjects, in addition to temporal changes in each subject. Moreover, we used stress measurement sheets to obtain temporal changes of stress for analyzing psychological effects of the stress that subjects feel. We estimated stress levels of four grades using Support Vector Machines (SVMs). The mean estimation rates for all 10 subjects and for 5 subjects over more than
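
    The first stage described above, SOM-based category extraction, can be approximated with the third-party MiniSom package; the map size and the placeholder input below are assumptions, and the Fuzzy ART integration stage is not reproduced:

      import numpy as np
      from minisom import MiniSom

      # Rows would be Gabor-filtered face images flattened to vectors.
      X = np.random.rand(100, 1024)                    # placeholder data
      som = MiniSom(6, 6, X.shape[1], sigma=1.0, learning_rate=0.5,
                    random_seed=0)
      som.train_random(X, num_iteration=1000)

      # Each unit on the 6x6 map acts as a facial-expression category;
      # winner() returns the unit a given face image maps to.
      print(som.winner(X[0]))                          # e.g. (2, 4)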

  7. Discrimination of gender using facial image with expression change

    Science.gov (United States)

    Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji

    2005-12-01

    By carrying out marketing research, the managers of large department stores or small convenience stores obtain information such as the ratio of male and female visitors and their age groups, and use it to improve their management plans. However, this work is carried out manually and is a big burden for small stores. In this paper, the authors propose a method of gender discrimination based on extracting differences in facial expression change from color facial images. Many methods exist in the field of image processing for automatic recognition of individuals from moving or still facial images. However, it is very difficult to discriminate gender under the influence of hairstyle, clothes, etc. Therefore, we propose a method that is not affected by individual characteristics, such as the size and position of facial parts, by paying attention to changes of expression. The method requires two facial images, one with an expression and one expressionless. First, the facial surface region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the facial image using hue and saturation information in the HSV color system and emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part generated by the expression change. In the last step, the feature values are compared between the input data and the database, and the gender is discriminated. Experiments were carried out on laughing and smiling expressions, and good results were obtained for gender discrimination.

  8. Facial expression of emotions in borderline personality disorder and depression.

    Science.gov (United States)

    Renneberg, Babette; Heyn, Katrin; Gebhard, Rita; Bachmann, Silke

    2005-09-01

    Borderline personality disorder (BPD) is characterized by marked problems in interpersonal relationships and emotion regulation. The assumption of emotional hyper-reactivity in BPD is tested regarding the facial expression of emotions, an aspect highly relevant for communication processes and a central feature of emotion regulation. Facial expressions of emotions are examined in a group of 30 female inpatients with BPD, 27 women with major depression and 30 non-patient female controls. Participants were videotaped while watching two short movie sequences, inducing either positive or negative emotions. Frequency of emotional facial expressions and intensity of happiness expressions were examined, using the Emotional Facial Action Coding System (EMFACS-7, Friesen & Ekman, EMFACS-7: Emotional Facial Action Coding System, Version 7. Unpublished manual, 1984). Group differences were analyzed for the negative and the positive mood-induction procedure separately. Results indicate that BPD patients reacted similarly to depressed patients, with reduced facial expressiveness in response to both films. The highest emotional facial activity to both films and the most intense happiness expressions were displayed by the non-clinical control group. The current findings contradict the assumption of a general hyper-reactivity to emotional stimuli in patients with BPD.

  9. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    OpenAIRE

    Roberta Daini; Chiara Maddalena Comparetti; Paola Ricciardelli

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently fro...

  10. Wavelet based approach for facial expression recognition

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin

    2015-03-01

    Full Text Available Facial expression recognition is one of the most active fields of research. Many facial expression recognition methods have been developed and implemented. Neural networks (NNs) are well suited to such pattern recognition tasks, being capable of learning and generalization, non-linear mapping, and parallel computation. Backpropagation neural networks (BPNNs) are the most widely used approach. In this study, BPNNs were used as classifiers to categorize facial expression images into seven classes of expressions: anger, disgust, fear, happiness, sadness, neutral and surprise. For feature extraction, three discrete wavelet transforms were used to decompose the images, namely the Haar wavelet, the Daubechies (4) wavelet and the Coiflet (1) wavelet. To analyze the proposed method, a facial expression recognition system was built. The proposed method was tested on static images from the JAFFE database.
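
    A minimal sketch of the record's design, 2-D wavelet-decomposition features feeding a backpropagation-trained network, assuming PyWavelets and scikit-learn; the decomposition level and network size are assumptions:

      import numpy as np
      import pywt
      from sklearn.neural_network import MLPClassifier

      def wavelet_features(gray, wavelet='db4', level=2):
          """Approximation sub-band of a 2-D DWT; 'haar', 'db4' and 'coif1'
          match the three wavelets compared in the record."""
          coeffs = pywt.wavedec2(gray, wavelet, level=level)
          return coeffs[0].ravel()          # low-frequency coefficients

      # X = np.vstack([wavelet_features(img) for img in images]); y = labels
      # bpnn = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000).fit(X, y)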

  11. Recognition of Face and Emotional Facial Expressions in Autism

    Directory of Open Access Journals (Sweden)

    Muhammed Tayyib Kadak

    2013-03-01

    Full Text Available Autism is a genetically transmitted neurodevelopmental disorder characterized by severe and permanent deficits in many areas of interpersonal relations, such as communication, social interaction and emotional responsiveness. Patients with autism have deficits in face recognition, eye contact and recognition of emotional expressions. Both face recognition and the recognition of facial emotion rely on face processing. Structural and functional impairment of the fusiform gyrus, amygdala, superior temporal sulcus and other brain regions leads to deficits in the recognition of faces and facial emotion. Studies therefore suggest that face processing deficits result in problems in the areas of social interaction and emotion in autism. Studies revealed that children with autism have problems in the recognition of facial expressions and use the mouth region more than the eye region. It was also shown that autistic patients interpret ambiguous expressions as negative emotions. Deficits at various stages of face processing, such as gaze detection, face identity recognition and recognition of emotional expression, have so far been identified in autism. Social interaction impairments in autism spectrum disorders originate from face processing deficits during infancy, childhood and adolescence. Recognition of faces and of emotional facial expressions may be affected either automatically, by orienting towards faces after birth, or by "learning" processes in developmental periods, such as identity and emotion processing. This article aims to review the neurobiological basis of face processing and the recognition of emotional facial expressions during normal development and in autism.

  12. Facial age affects emotional expression decoding

    OpenAIRE

    2014-01-01

    Facial expressions convey important information on emotional states of our interaction partners. However, in interactions between younger and older adults, there is evidence for a reduced ability to accurately decode emotional facial expressions. Previous studies have often followed up this phenomenon by examining the effect of the observers' age. However, decoding emotional faces is also likely to be influenced by stimulus features, and age-related changes in the face such as wrinkles and fo...

  14. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
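
    The face-localization step the paper builds on, Haar classifiers via OpenCV on a live video stream, can be sketched as follows; the downstream feedback loop and OFS inference are not reproduced:

      import cv2

      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
      cap = cv2.VideoCapture(0)                 # default webcam

      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
              cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
          cv2.imshow('faces', frame)
          if cv2.waitKey(1) & 0xFF == ord('q'):
              break
      cap.release()
      cv2.destroyAllWindows()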

  15. The identification of unfolding facial expressions.

    Science.gov (United States)

    Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo

    2012-01-01

    We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames s⁻¹) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating in time individual diagnostic facial actions, and does not require perceiving the full apex configuration.

  16. Rapid influence of emotional scenes on encoding of facial expressions: an ERP study

    OpenAIRE

    Righart, R. G. R.; de Gelder, B.

    2008-01-01

    In daily life, we perceive a person's facial reaction as part of the natural environment surrounding it. Because most studies have investigated how facial expressions are recognized by using isolated faces, it is unclear what role the context plays. Although it has been observed that the N170 for facial expressions is modulated by the emotional context, it was not clear whether individuals use context information on this stage of processing to discriminate between facial expressions. The aim ...

  17. Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia.

    Science.gov (United States)

    Palermo, Romina; Willis, Megan L; Rivolta, Davide; McKone, Elinor; Wilson, C Ellie; Calder, Andrew J

    2011-04-01

    We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and 'social'). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability.

  18. Beta event-related desynchronization as an index of individual differences in processing human facial expression: further investigations of autistic traits in typically developing adults.

    Directory of Open Access Journals (Sweden)

    Nicholas Robert Cooper

    2013-04-01

    Full Text Available The human mirror neuron system (hMNS) has been associated with various forms of social cognition and affective processing including vicarious experience. It has also been proposed that a faulty hMNS may underlie some of the deficits seen in the autism spectrum disorders. In the present study we set out to investigate whether emotional facial expressions could modulate a putative EEG index of hMNS activation (mu suppression) and, if so, whether this differs according to the individual level of autistic traits (high versus low AQ score). Participants were presented with 3-second films of actors opening and closing their hands (a classic hMNS mu-suppression protocol) while simultaneously wearing happy, angry or neutral expressions. Mu suppression was measured in the alpha and low beta bands. The low AQ group displayed greater low beta event-related desynchronization (ERD) to both angry and neutral expressions. The high AQ group displayed greater low beta ERD to angry than to happy expressions. There was also significantly more low beta ERD to happy faces for the low than for the high AQ group. In conclusion, an interesting interaction between AQ group and emotional expression revealed that hMNS activation can be modulated by emotional facial expressions and that this is differentiated according to individual differences in the level of autistic traits. The EEG index of hMNS activation (mu suppression) seems to be a sensitive measure of the variability in facial processing in typically developing individuals with high and low self-reported traits of autism.

  19. Factors contributing to individual differences in facial expression categorisation.

    Science.gov (United States)

    Green, Corinne; Guo, Kun

    2016-12-29

    Individuals vary in perceptual accuracy when categorising facial expressions, yet it is unclear how these individual differences in the non-clinical population are related to cognitive processing stages of facial information acquisition and interpretation. We tested 104 healthy adults in a facial expression categorisation task, and correlated their categorisation accuracy with face-viewing gaze allocation and personal traits assessed with the Autism Quotient, an anxiety inventory and the Self-Monitoring Scale. Gaze allocation had a limited but emotion-specific impact on categorising expressions. Specifically, longer gaze at the eyes and nose regions was coupled with more accurate categorisation of disgust and sad expressions, respectively. Regarding trait measurements, a higher autistic score was coupled with better recognition of sad but worse recognition of anger expressions, and contributed to a categorisation bias towards sad expressions; whereas a higher anxiety level was associated with greater categorisation accuracy across all expressions and with an increased tendency to gaze at the nose region. It seems that both anxiety and autistic-like traits are associated with individual variation in expression categorisation, but this association is not necessarily mediated by variation in gaze allocation at expression-specific local facial regions. The results suggest that both facial information acquisition and interpretation capabilities contribute to individual differences in expression categorisation within non-clinical populations.

  20. Facial expression recognition in perceptual color space.

    Science.gov (United States)

    Lajevardi, Seyed Mehdi; Wu, Hong Ren

    2012-08-01

    This paper introduces a tensor perceptual color framework (TPCF) for facial expression recognition (FER), which is based on information contained in color facial images. The TPCF enables multi-linear image analysis in different color spaces and demonstrates that color components provide additional information for robust FER. Using this framework, the components (in either RGB, YCbCr, CIELab or CIELuv space) of color images are unfolded to two-dimensional (2-D) tensors based on multi-linear algebra and tensor concepts, from which the features are extracted by Log-Gabor filters. The mutual information quotient (MIQ) method is employed for feature selection. These features are classified using a multi-class linear discriminant analysis (LDA) classifier. The effectiveness of color information on FER using low-resolution and facial expression images with illumination variations is assessed for performance evaluation. Experimental results demonstrate that color information has significant potential to improve emotion recognition performance due to the complementary characteristics of image textures. Furthermore, the perceptual color spaces (CIELab and CIELuv) are better overall for facial expression recognition than other color spaces by providing more efficient and robust performance for facial expression recognition using facial images with illumination variation.
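
    A hedged sketch of the perceptual-color-space step, converting facial images to CIELab before LDA classification, assuming OpenCV and scikit-learn; the Log-Gabor filtering and MIQ feature selection stages of the TPCF are omitted:

      import cv2
      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def lab_features(bgr_img, size=(32, 32)):
          lab = cv2.cvtColor(cv2.resize(bgr_img, size), cv2.COLOR_BGR2Lab)
          return lab.astype(np.float32).ravel()  # L, a, b channels as features

      # X = np.vstack([lab_features(img) for img in images]); y = labels
      # lda = LinearDiscriminantAnalysis().fit(X, y); preds = lda.predict(X_test)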

  1. Slowing down facial movements and vocal sounds enhances facial expression recognition and facial-vocal imitation in children with autism

    OpenAIRE

    Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-Rom, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a st...

  2. Mothers' pupillary responses to infant facial expressions.

    Science.gov (United States)

    Yrttiaho, Santeri; Niehaus, Dana; Thomas, Eileen; Leppänen, Jukka M

    2017-02-06

    Human parental care relies heavily on the ability to monitor and respond to a child's affective states. The current study examined pupil diameter as a potential physiological index of mothers' affective response to infant facial expressions. Pupillary time-series were measured from 86 mothers of young infants in response to an array of photographic infant faces falling into four emotive categories based on valence (positive vs. negative) and arousal (mild vs. strong). Pupil dilation was highly sensitive to the valence of facial expressions, being larger for negative vs. positive facial expressions. A separate control experiment with luminance-matched non-face stimuli indicated that the valence effect was specific to facial expressions and cannot be explained by luminance confounds. Pupil response was not sensitive to the arousal level of facial expressions. The results show the feasibility of using pupil diameter as a marker of mothers' affective responses to ecologically valid infant stimuli and point to a particularly prompt maternal response to infant distress cues.
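
    A standard pupillometry analysis consistent with the record, baseline-correcting each trial and comparing mean dilation across valence categories, might be sketched as follows; the sampling windows are assumptions, not the study's parameters:

      import numpy as np

      def baseline_corrected(trial, baseline_samples=50):
          """Subtract the pre-stimulus mean from one pupil-diameter trace."""
          return trial - trial[:baseline_samples].mean()

      def mean_dilation(trials, baseline_samples=50):
          """Average post-stimulus dilation across trials (rows = trials)."""
          corrected = np.vstack([baseline_corrected(t, baseline_samples)
                                 for t in trials])
          return corrected[:, baseline_samples:].mean()

      # With per-category arrays of pupil traces:
      # effect = mean_dilation(negative_trials) - mean_dilation(positive_trials)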

  3. Disconnection mechanism and regional cortical atrophy contribute to impaired processing of facial expressions and theory of mind in multiple sclerosis: a structural MRI study.

    Directory of Open Access Journals (Sweden)

    Andrea Mike

    Full Text Available Successful socialization requires the ability to understand others' mental states. This ability, called mentalization (Theory of Mind), may become deficient and contribute to everyday life difficulties in multiple sclerosis. We aimed to explore the impact of brain pathology on mentalization performance in multiple sclerosis. Mentalization performance of 49 patients with multiple sclerosis was compared to 24 age- and gender-matched healthy controls. T1- and T2-weighted three-dimensional brain MRI images were acquired at 3 Tesla from patients with multiple sclerosis and 18 gender- and age-matched healthy controls. We assessed overall brain cortical thickness in patients with multiple sclerosis and the scanned healthy controls, and measured the total and regional T1 and T2 white matter lesion volumes in patients with multiple sclerosis. Performance in tests of recognition of mental states and emotions from facial expressions and eye gazes correlated with both total T1-lesion load and regional T1-lesion load of association fiber tracts interconnecting cortical regions related to visual and emotion processing (genu and splenium of corpus callosum, right inferior longitudinal fasciculus, right inferior fronto-occipital fasciculus, uncinate fasciculus). Both of these tests showed correlations with specific cortical areas involved in emotion recognition from facial expressions (right and left fusiform face area, frontal eye field), the processing of emotions (right entorhinal cortex) and socially relevant information (left temporal pole). Thus, both a disconnection mechanism due to white matter lesions and cortical thinning of specific brain areas may result in cognitive deficits in multiple sclerosis, affecting emotion and mental state processing from facial expressions and contributing to the everyday and social life difficulties of these patients.

  4. Simultaneous recording of EEG and fNIRS during visuo-spatial and facial expression processing in a dual task paradigm.

    Science.gov (United States)

    Herrmann, Martin J; Neueder, Dorothea; Troeller, Anna K; Schulz, Stefan M

    2016-11-01

    Emotional processing is probably the most crucial tool for orienting oneself in everyday social life and has long been considered highly automatic. Dual task (DT) research shows that information competing for working memory resources impairs the identification of emotional facial expressions. Effects of cognitive load in DT paradigms have been confirmed in numerous neuroimaging studies. However, interference occurring during a DT comprised of decoding emotional facial expressions and a visuo-spatial working memory task has yet to be visualized. To investigate the DT interference effect on brain areas associated not only with working memory, but also emotional and visuo-spatial processing, we recorded brain activation within the prefrontal cortex and parietal-occipital sensory areas using functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG) simultaneously. Our study consisted of N = 36 participants (27 female) performing the following tasks: a) Corsi blocks, b) identification of emotional facial expressions, or c) a DT comprising tasks a) and b). We predicted higher activation of the prefrontal cortex during the DT and correspondingly reduced P100 and P300 amplitudes. As expected, fNIRS measurements revealed significantly higher neuronal activation within the prefrontal cortex in the DT condition. When comparing the DT to the single tasks, the P100 amplitude was reduced, but the P300 amplitude did not show the expected reduction. Our findings underline that at least some aspects of emotional processing are not entirely automatic, but depend on prefrontal control and are therefore affected by cognitive load, in particular visuo-spatial working memory resources.

  5. A modulatory role for facial expressions in prosopagnosia

    NARCIS (Netherlands)

    de Gelder, B.; Frissen, I.H.E.; Barton, J.; Hadjikhani, N.K.

    2003-01-01

    Brain-damaged patients experience difficulties in recognizing a face (prosopagnosics), but they can still recognize its expression. The dissociation between these two face-related skills has served as a keystone of models of face processing. We now report that the presence of a facial expression can

  6. Biased Facial Expression Interpretation in Shy Children

    Science.gov (United States)

    Kokin, Jessica; Younger, Alastair; Gosselin, Pierre; Vaillancourt, Tracy

    2016-01-01

    The relationship between shyness and the interpretations of the facial expressions of others was examined in a sample of 123 children aged 12 to 14 years. Participants viewed faces displaying happiness, fear, anger, disgust, sadness, surprise, as well as a neutral expression, presented on a computer screen. The children identified each expression…

  7. A comparison of facial expression properties in five hylobatid species

    NARCIS (Netherlands)

    Scheider, Linda; Liebal, Katja; Oña, Leonardo; Burrows, Anne; Waller, Bridget

    2014-01-01

    Little is known about facial communication of lesser apes (family Hylobatidae) and how their facial expressions (and use of) relate to social organization. We investigated facial expressions (defined as combinations of facial movements) in social interactions of mated pairs in five different hylobat

  8. Do Dynamic Compared to Static Facial Expressions of Happiness and Anger Reveal Enhanced Facial Mimicry?

    Directory of Open Access Journals (Sweden)

    Krystyna Rymarczyk

    Full Text Available Facial mimicry is the spontaneous response to others' facial expressions, mirroring or matching the expression of the interaction partner. Recent evidence suggests that mimicry may not be only an automatic reaction but could be dependent on many factors, including social context, the type of task in which the participant is engaged, or stimulus properties (dynamic vs static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and the judgment of emotional intensity. Electromyographic (EMG) recordings were taken from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. Ratings of the emotional intensity of the facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major muscle activity in response to dynamic happiness stimuli than static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing.

  9. Facial Expression Recognition Based on WAPA and OEPA Fastica

    Directory of Open Access Journals (Sweden)

    Humayra Binte Ali

    2014-06-01

    Full Text Available The face is one of the most important biometric traits because of its uniqueness and robustness. For this reason, researchers from many diverse fields, such as security, psychology, image processing, and computer vision, have begun to research face detection as well as facial expression recognition. Subspace learning methods work very well for recognizing facial features. Among subspace learning techniques, PCA, ICA and NMF are the most prominent. In this work, our main focus is on Independent Component Analysis (ICA). Among several architectures of ICA, we used the FastICA and LS-ICA algorithms. We applied FastICA to whole faces and to different facial parts to analyze the influence of different parts on basic facial expressions. Our extended algorithms, WAPA-FastICA and OEPA-FastICA, are discussed in the proposed algorithm section. Locally Salient ICA is implemented on the whole face using 8x8 windows to find the facial features most prominent for facial expression. The experiments show that our proposed OEPA-FastICA and WAPA-FastICA outperform the existing prevalent Whole-FastICA and LS-ICA methods.
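
    A minimal sketch of the whole-face versus part-based comparison using scikit-learn's FastICA; the image size, component counts and mouth-region crop are assumptions, and the WAPA/OEPA extensions are not reproduced:

      import numpy as np
      from sklearn.decomposition import FastICA

      faces = np.random.rand(200, 64 * 64)       # placeholder flattened faces
      ica_whole = FastICA(n_components=20, random_state=0)
      S_whole = ica_whole.fit_transform(faces)   # whole-face components

      # Crop a fixed lower-face window from each image as one "facial part".
      mouth = faces.reshape(-1, 64, 64)[:, 40:64, 16:48].reshape(200, -1)
      ica_part = FastICA(n_components=10, random_state=0)
      S_part = ica_part.fit_transform(mouth)     # part-based components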

  10. Facial and vocal expressions of emotion.

    Science.gov (United States)

    Russell, James A; Bachorowski, Jo-Anne; Fernandez-Dols, Jose-Miguel

    2003-01-01

    A flurry of theoretical and empirical work concerning the production of and response to facial and vocal expressions has occurred in the past decade. That emotional expressions express emotions is a tautology but may not be a fact. Debates have centered on universality, the nature of emotion, and the link between emotions and expressions. Modern evolutionary theory is informing more models, emphasizing that expressions are directed at a receiver, that the interests of sender and receiver can conflict, that there are many determinants of sending an expression in addition to emotion, that expressions influence the receiver in a variety of ways, and that the receiver's response is more than simply decoding a message.

  11. Three-year-olds' rapid facial electromyographic responses to emotional facial expressions and body postures.

    Science.gov (United States)

    Geangu, Elena; Quadrelli, Ermanno; Conte, Stefania; Croci, Emanuela; Turati, Chiara

    2016-04-01

    Rapid facial reactions (RFRs) to observed emotional expressions are proposed to be involved in a wide array of socioemotional skills, from empathy to social communication. Two of the most persuasive theoretical accounts propose RFRs to rely either on motor resonance mechanisms or on more complex mechanisms involving affective processes. Previous studies demonstrated that presentation of facial and bodily expressions can generate rapid changes in adult and school-age children's muscle activity. However, to date there is little to no evidence to suggest the existence of emotional RFRs from infancy to preschool age. To investigate whether RFRs are driven by motor mimicry or could also be a result of emotional appraisal processes, we recorded facial electromyographic (EMG) activation from the zygomaticus major and frontalis medialis muscles to presentation of static facial and bodily expressions of emotions (i.e., happiness, anger, fear, and neutral) in 3-year-old children. Results showed no specific EMG activation in response to bodily emotion expressions. However, observing others' happy faces led to increased activation of the zygomaticus major and decreased activation of the frontalis medialis, whereas observing others' angry faces elicited the opposite pattern of activation. This study suggests that RFRs are the result of complex mechanisms in which both affective processes and motor resonance may play an important role. Copyright © 2015 Elsevier Inc. All rights reserved.
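
    Facial-EMG studies of this kind conventionally band-pass filter, rectify and smooth the raw signal before comparing conditions. A sketch under those conventional assumptions, using SciPy (the filter settings are not taken from this study):

      import numpy as np
      from scipy.signal import butter, filtfilt

      def emg_envelope(signal, fs=1000.0):
          """Band-pass, full-wave rectify, then low-pass an EMG trace."""
          b, a = butter(4, [20 / (fs / 2), 450 / (fs / 2)], btype='band')
          rectified = np.abs(filtfilt(b, a, signal))
          b_lp, a_lp = butter(4, 10 / (fs / 2))     # 10 Hz envelope
          return filtfilt(b_lp, a_lp, rectified)

      # Zygomaticus activation to happy versus angry faces:
      # happy = emg_envelope(zyg_happy).mean(); angry = emg_envelope(zyg_angry).mean()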

  12. Emotional facial expression detection in the peripheral visual field.

    Directory of Open Access Journals (Sweden)

    Dimitri J Bayle

    Full Text Available BACKGROUND: In everyday life, signals of danger, such as aversive facial expressions, usually appear in the peripheral visual field. Although facial expression processing in central vision has been extensively studied, this processing in peripheral vision has been poorly studied. METHODOLOGY/PRINCIPAL FINDINGS: Using behavioral measures, we explored the human ability to detect fear and disgust vs. neutral expressions and compared it to the ability to discriminate between genders at eccentricities up to 40°. Responses were faster for the detection of emotion compared to gender. Emotion was detected from fearful faces up to 40° of eccentricity. CONCLUSIONS: Our results demonstrate the human ability to detect facial expressions presented in the far periphery up to 40° of eccentricity. The increasing advantage of emotion compared to gender processing with increasing eccentricity might reflect a major implication of the magnocellular visual pathway in facial expression processing. This advantage may suggest that emotion detection, relative to gender identification, is less impacted by visual acuity and within-face crowding in the periphery. These results are consistent with specific and automatic processing of danger-related information, which may drive attention to those messages and allow for a fast behavioral reaction.

  13. Extraction of Subject-Specific Facial Expression Categories and Generation of Facial Expression Feature Space using Self-Mapping

    Directory of Open Access Journals (Sweden)

    Masaki Ishii

    2008-06-01

    Full Text Available This paper proposes a generation method for a subject-specific Facial Expression Map (FEMap) using Self-Organizing Maps (SOM) of unsupervised learning and Counter Propagation Networks (CPN) of supervised learning together. The proposed method consists of two steps. In the first step, the topological change of a face pattern in the expressional process of facial expression is learned hierarchically using a SOM with a narrow mapping space, and the number of subject-specific facial expression categories and the representative images of each category are extracted. Psychological significance based on the neutral and six basic emotions (anger, sadness, disgust, happiness, surprise, and fear) is assigned to each extracted category. In the second step, the categories and the representative images described above are learned using a CPN with a large mapping space, and a category map that expresses the topological characteristics of facial expression is generated. This paper defines this category map as an FEMap. Experimental results for six subjects show that the proposed method can generate a subject-specific FEMap based on the topological characteristics of facial expression appearing on face images.

  14. Comparison of emotion recognition from facial expression and music.

    Science.gov (United States)

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory cues. The ability to recognize emotions is not clearly determined, as their presentation is usually very short (micro expressions), and the recognition itself does not have to be a conscious process. We assumed that recognition from facial expressions would be favored over recognition of emotions communicated through music. In order to compare the success rate in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey which included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music works with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, whereas girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized if presented on human faces than in music, possibly because the understanding of facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for given the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive skills like attention, memory and motivation. Music pieces are processed differently in the brain than facial expressions and, consequently, are probably evaluated differently as relevant emotional cues.

  15. A Robot with Complex Facial Expressions

    Directory of Open Access Journals (Sweden)

    J. Takeno

    2009-08-01

    Full Text Available The authors believe that human consciousness basically originates from language and its association-like flow, and that feelings are generated accompanying the respective languages. We incorporated artificial consciousness into a robot, achieved an association flow of language resembling a flow of consciousness, and developed a robot called Kansei that expresses its feelings according to the associations occurring within it. To be able to fully communicate with humans, robots must be able to display complex expressions, such as a sense of being thrilled. We therefore added to the Kansei robot a device to express complex feelings through its facial expressions. The Kansei robot is an artificial skull made of aluminum, with servomotors built into it. The face is made of relatively soft polyethylene, formed to look like a human face. Facial expressions are generated using 19 servomotors built into the skull, which pull metal wires attached to the facial "skin" to create expressions. The robot is at present capable of making six basic expressions as well as complex expressions, such as happiness and fear combined.

  16. The face of leadership: perceiving leaders from facial expression

    OpenAIRE

    Trichas, Savvas

    2011-01-01

    Facial expressions appear to have a powerful influence on the perception of leadership. The aim of the five studies presented here was to add to our knowledge about the contribution of facial expression to the perception of leadership. In particular, these five studies were used to explore which facial expressions influence perceptions of leadership and how these facial expressions influence leadership perceptions. Participants’ prototypes of leadership were examined by assessing implicit lea...

  18. Stability of Facial Affective Expressions in Schizophrenia

    Directory of Open Access Journals (Sweden)

    H. Fatouros-Bergman

    2012-01-01

    Full Text Available Thirty-two video-recorded interviews were conducted by two interviewers with eight patients diagnosed with schizophrenia. Each patient was interviewed four times: three weekly interviews by the first interviewer and one additional interview by the second interviewer. Sixty-four selected sequences in which the patients were speaking about psychotic experiences were scored for facial affective behaviour with the Emotion Facial Action Coding System (EMFACS). In accordance with previous research, the results show that patients diagnosed with schizophrenia express negative facial affectivity. Facial affective behaviour did not appear to depend on temporality, since a within-subjects ANOVA revealed no substantial changes in the amount of affect displayed across the weekly interview occasions. Whereas previous studies found contempt to be the most frequent affect in patients, in the present material disgust was equally common, but depended on the interviewer. The results suggest that facial affectivity in these patients is dominated primarily by the negative emotion of disgust and, to a lesser extent, contempt, and that this is a fairly stable feature.

  19. ALTERED KINEMATICS OF FACIAL EMOTION EXPRESSION AND EMOTION RECOGNITION DEFICITS ARE UNRELATED IN PARKINSON'S DISEASE

    Directory of Open Access Journals (Sweden)

    Matteo Bologna

    2016-12-01

    Full Text Available Background: Altered emotional processing, including reduced facial expression of emotion and defective emotion recognition, has been reported in patients with Parkinson’s disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques, and it is not known whether altered facial expression and recognition in PD are related. Objective: To investigate possible deficits in facial emotion expression and emotion recognition, and their relationship, if any, in patients with PD. Methods: Eighteen patients with PD and 16 healthy controls were enrolled in the study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analysed using the Facial Action Coding System. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman’s test and multiple regression analysis. Results: The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Patients also had a lower Ekman global score and lower disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.05), and the kinematic abnormalities and emotion recognition deficits were unrelated (all Ps > 0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). Conclusion: The present results provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.
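
    The relationship test described above can be illustrated in a few lines. The sketch below is a minimal, hypothetical Python analogue (the arrays `peak_velocity` and `ekman_score` are invented stand-ins, not the study's data) of running a Spearman correlation between a facial-kinematics measure and the Ekman recognition score:

        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        peak_velocity = rng.normal(10.0, 2.0, 18)  # per-patient kinematic measure (a.u.)
        ekman_score = rng.integers(30, 60, 18)     # per-patient recognition score

        rho, p = spearmanr(peak_velocity, ekman_score)
        print(f"rho={rho:.2f}, p={p:.3f}")  # a non-significant p mirrors the reported null result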

  20. Categorical Perception of Affective and Linguistic Facial Expressions

    Science.gov (United States)

    McCullough, Stephen; Emmorey, Karen

    2009-01-01

    Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX…

  1. Violent Media Consumption and the Recognition of Dynamic Facial Expressions

    Science.gov (United States)

    Kirsh, Steven J.; Mounts, Jeffrey R. W.; Olczak, Paul V.

    2006-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph.…

  3. Teachers' Perception Regarding Facial Expressions as an Effective Teaching Tool

    Science.gov (United States)

    Butt, Muhammad Naeem; Iqbal, Mohammad

    2011-01-01

    The major objective of the study was to explore teachers' perceptions about the importance of facial expression in the teaching-learning process. All the teachers of government secondary schools constituted the population of the study. A sample of 40 teachers, both male and female, in rural and urban areas of district Peshawar, were selected…

  4. Categorical Representation of Facial Expressions in the Infant Brain

    Science.gov (United States)

    Leppanen, Jukka M.; Richmond, Jenny; Vogel-Farley, Vanessa K.; Moulson, Margaret C.; Nelson, Charles A.

    2009-01-01

    Categorical perception, demonstrated as reduced discrimination of within-category relative to between-category differences in stimuli, has been found in a variety of perceptual domains in adults. To examine the development of categorical perception in the domain of facial expression processing, we used behavioral and event-related potential (ERP)…

  5. Reading faces: differential lateral gaze bias in processing canine and human facial expressions in dogs and 4-year-old children.

    Directory of Open Access Journals (Sweden)

    Anaïs Racca

    Full Text Available Sensitivity to the emotions of others provides clear biological advantages. However, in the case of heterospecific relationships, such as that existing between dogs and humans, there are additional challenges, since some elements of the expression of emotions are species-specific. Given that faces provide important visual cues for communicating emotional state in both humans and dogs, and that the processing of emotions is subject to brain lateralisation, we investigated lateral gaze bias in adult dogs when presented with pictures of expressive human and dog faces. Our analysis revealed clear differences in the laterality of eye movements in dogs towards conspecific faces according to the emotional valence of the expressions. Differences were also found towards human faces, but to a lesser extent. For comparative purposes, a similar experiment was run with 4-year-old children, who showed differential processing of facial expressions compared with dogs, suggesting a species-dependent engagement of the right or left hemisphere in processing emotions.

  6. Time perception and dynamics of facial expressions of emotions.

    Directory of Open Access Journals (Sweden)

    Sophie L Fayolle

    Full Text Available Two experiments were run to examine the effects of dynamic displays of facial expressions of emotions on time judgments. The participants were given a temporal bisection task with emotional facial expressions presented in a dynamic or a static display. Two emotional facial expressions and a neutral expression were tested and compared. Each of the emotional expressions had the same affective valence (unpleasant), but one was high-arousing (expressing anger) and the other low-arousing (expressing sadness). Our results showed that time judgments are highly sensitive to movements in facial expressions and the emotions expressed. Indeed, longer perceived durations were found in response to the dynamic faces and the high-arousing emotional expressions compared with the static faces and low-arousing expressions. In addition, the facial movements amplified the effect of emotions on time perception. Dynamic facial expressions are thus interesting tools for examining variations in temporal judgments in different social contexts.

  7. Meta-Analysis of the First Facial Expression Recognition Challenge

    NARCIS (Netherlands)

    Valstar, M.F.; Mehu, M.; Jiang, Bihan; Pantic, Maja; Scherer, K.

    2012-01-01

    Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability

  9. N170 sensitivity to facial expression: A meta-analysis.

    Science.gov (United States)

    Hinojosa, J A; Mercado, F; Carretié, L

    2015-08-01

    The N170 component is the most important electrophysiological index of face processing. Early studies concluded that it was insensitive to facial expression, thus supporting dual theories postulating separate mechanisms for identity and expression encoding. However, recent evidence contradicts this assumption. We conducted a meta-analysis to resolve these inconsistencies and to derive theoretical implications. A systematic review of 128 studies analyzing the N170 in response to neutral and emotional expressions yielded 57 meta-analyzable experiments (involving 1645 healthy adults). First, the N170 was found to be sensitive to facial expressions, supporting proposals arguing for integrated rather than segregated mechanisms in the processing of identity and expression. Second, this sensitivity is heterogeneous, with angry, fearful, and happy faces eliciting the largest N170 amplitudes. Third, we explored some modulatory factors, including the focus of attention (N170 amplitude was also sensitive to unattended expressions) and the reference electrode (a common reference reinforced the effects). In sum, the N170 is a valuable tool for studying the neural processing of facial expressions and for developing current theories.

  10. The Enfacement Illusion Is Not Affected by Negative Facial Expressions.

    Science.gov (United States)

    Beck, Brianna; Cardini, Flavia; Làdavas, Elisabetta; Bertini, Caterina

    2015-01-01

    Enfacement is an illusion wherein synchronous visual and tactile inputs update the mental representation of one's own face to assimilate another person's face. Emotional facial expressions, serving as communicative signals, may influence enfacement by increasing the observer's motivation to understand the mental state of the expresser. Fearful expressions, in particular, might increase enfacement because they are valuable for adaptive behavior and more strongly represented in somatosensory cortex than other emotions. In the present study, a face was seen being touched at the same time as the participant's own face. This face was either neutral, fearful, or angry. Anger was chosen as an emotional control condition for fear because it is similarly negative but induces less somatosensory resonance, and requires additional knowledge (i.e., contextual information and social contingencies) to effectively guide behavior. We hypothesized that seeing a fearful face (but not an angry one) would increase enfacement because of greater somatosensory resonance. Surprisingly, neither fearful nor angry expressions modulated the degree of enfacement relative to neutral expressions. Synchronous interpersonal visuo-tactile stimulation led to assimilation of the other's face, but this assimilation was not modulated by facial expression processing. This finding suggests that dynamic, multisensory processes of self-face identification operate independently of facial expression processing.

  11. Interference between conscious and unconscious facial expression information.

    Directory of Open Access Journals (Sweden)

    Xing Ye

    Full Text Available There is ample evidence to show that many types of visual information, including emotional information, can be processed in the absence of visual awareness. For example, it has been shown that masked subliminal facial expressions can induce priming and adaptation effects. However, stimuli made invisible in different ways may be processed to different extents and have differential effects. In this study, we adopted a flanker-type behavioral method to investigate whether a flanker rendered invisible through Continuous Flash Suppression (CFS) could induce a congruency effect on the discrimination of a visible target. Specifically, during the experiment, participants judged the expression (either happy or fearful) of a visible face in the presence of a nearby invisible face (with a happy or fearful expression). Results show that participants were slower and less accurate in discriminating the expression of the visible face when the expression of the invisible flanker face was incongruent. Thus, facial expression information rendered invisible with CFS and presented at a different spatial location can enhance or interfere with consciously processed facial expression information.

  12. Drug effects on responses to emotional facial expressions: recent findings.

    Science.gov (United States)

    Miller, Melissa A; Bershad, Anya K; de Wit, Harriet

    2015-09-01

    Many psychoactive drugs increase social behavior and enhance social interactions, which may, in turn, increase their attractiveness to users. Although the psychological mechanisms by which drugs affect social behavior are not fully understood, there is some evidence that drugs alter the perception of emotions in others. Drugs can affect the ability to detect, attend to, and respond to emotional facial expressions, which in turn may influence their use in social settings. Either increased reactivity to positive expressions or decreased response to negative expressions may facilitate social interaction. This article reviews evidence that psychoactive drugs alter the processing of emotional facial expressions using subjective, behavioral, and physiological measures. The findings lay the groundwork for better understanding how drugs alter social processing and social behavior more generally.

  13. Comparison of Emotion Recognition from Facial Expression and Music

    OpenAIRE

    Gašpar, Tina; Labor, Marina; Jurić, Iva; Dumančić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined as their presentation is usually very short (micro expressions), whereas the recognition itself does not have to be a conscious process. We assumed that the recognition from facial expressions is selected over the recognition of emotions communicated through music. In order to compare the success rate in recogni...

  15. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method for facial expression recognition via non-negative least-squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbors (KNN), and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
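
    The classification rule sketched in the abstract can be illustrated as follows. This is a minimal sketch of the general NNLS sparse-coding idea, not the authors' implementation: a test feature vector is coded non-negatively over a dictionary whose columns are training features (e.g., LBP vectors), and the class whose columns yield the smallest reconstruction residual wins.

        import numpy as np
        from scipy.optimize import nnls

        def nnls_classify(D, labels, x):
            """D: (d, n) dictionary of training feature columns; labels: (n,)
            class of each column; x: (d,) test feature vector."""
            coef, _ = nnls(D, x)                  # non-negative least-squares code
            best, best_res = None, np.inf
            for c in np.unique(labels):
                mask = labels == c
                residual = np.linalg.norm(x - D[:, mask] @ coef[mask])
                if residual < best_res:           # smallest class-wise residual wins
                    best, best_res = c, residual
            return best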

  16. Teaching Facial Expressions of Emotion

    Science.gov (United States)

    Lierheimer, Kristin; Stichter, Janine

    2011-01-01

    Many students with exceptionalities, such as Aaron, display challenging behaviors that have a substantial impact on their quality of life and greatly disrupt the educational process (Matson, Wilkins, & Macken, 2009; Westling, 2010). Challenging behavior often causes academic and social difficulties for the students displaying the behavior,…

  17. Automatic Recognition of Facial Actions in Spontaneous Expressions

    Directory of Open Access Journals (Sweden)

    Marian Stewart Bartlett

    2006-09-01

    Full Text Available Spontaneous facial expressions differ from posed expressions both in which muscles are moved and in the dynamics of the movement. Advances in the field of automatic facial expression measurement will require development and assessment on spontaneous behavior. Here we present preliminary results on a task of facial action detection in spontaneous facial expressions. We employ a user-independent, fully automatic system for real-time recognition of facial actions from the Facial Action Coding System (FACS). The system automatically detects frontal faces in the video stream and codes each frame with respect to 20 action units. The approach applies machine learning methods, such as support vector machines and AdaBoost, to texture-based image representations. The output margin of the learned classifiers predicts action unit intensity. Frame-by-frame intensity measurements will enable investigations into facial expression dynamics that were previously intractable with human coding.
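
    The margin-as-intensity idea in this abstract is easy to illustrate. The sketch below, assuming placeholder texture features and binary AU labels rather than the authors' actual pipeline, uses the signed distance from an SVM hyperplane as a frame-by-frame intensity proxy:

        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(1)
        X_train = rng.random((200, 48))           # stand-in texture features per frame
        y_train = rng.integers(0, 2, 200)         # stand-in AU present/absent labels

        clf = LinearSVC(dual=False).fit(X_train, y_train)
        frames = rng.random((10, 48))             # features for new video frames
        margins = clf.decision_function(frames)   # signed distance from the hyperplane
        # Per the abstract, a larger margin serves as a proxy for AU intensity.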

  18. Allometry of facial mobility in anthropoid primates: implications for the evolution of facial expression.

    Science.gov (United States)

    Dobson, Seth D

    2009-01-01

    Body size may be an important factor influencing the evolution of facial expression in anthropoid primates due to allometric constraints on the perception of facial movements. Given this hypothesis, I tested the prediction that observed facial mobility is positively correlated with body size in a comparative sample of nonhuman anthropoids. Facial mobility, or the variety of facial movements a species can produce, was estimated using a novel application of the Facial Action Coding System (FACS). I used FACS to estimate facial mobility in 12 nonhuman anthropoid species, based on video recordings of facial activity in zoo animals. Body mass data were taken from the literature. I used phylogenetic generalized least squares (PGLS) to perform a multiple regression analysis with facial mobility as the dependent variable and two independent variables: log body mass and dummy-coded infraorder. Together, body mass and infraorder explain 92% of the variance in facial mobility. However, the partial effect of body mass is much stronger than for infraorder. The results of my study suggest that allometry is an important constraint on the evolution of facial mobility, which may limit the complexity of facial expression in smaller species. More work is needed to clarify the perceptual bases of this allometric pattern.
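
    The regression in this study can be sketched in outline. True PGLS weights the regression by a phylogenetic covariance matrix; the simplified sketch below, using invented values and ordinary least squares, shows only the model structure the abstract describes (facial mobility regressed on log body mass plus a dummy-coded infraorder):

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Invented illustrative values, not the study's data.
        df = pd.DataFrame({
            "mobility":   [12.0, 15.0, 18.0, 22.0, 25.0, 30.0],
            "mass_kg":    [1.2, 3.5, 7.0, 12.0, 35.0, 60.0],
            "catarrhine": [0, 0, 1, 1, 1, 1],   # dummy-coded infraorder
        })
        X = sm.add_constant(pd.DataFrame({
            "log_mass": np.log10(df["mass_kg"]),
            "catarrhine": df["catarrhine"],
        }))
        fit = sm.OLS(df["mobility"], X).fit()
        print(fit.rsquared)  # analogue of the reported 92% variance explained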

  19. A Developmental Examination of Amygdala Response to Facial Expressions

    OpenAIRE

    Guyer, Amanda E.; Monk, Christopher S.; McClure-Tone, Erin B.; Nelson, Eric E.; Roberson-Nay, Roxann; Adler, Abby D.; Fromm, Stephen J.; Leibenluft, Ellen; Daniel S Pine; Ernst, Monique

    2008-01-01

    Several lines of evidence implicate the amygdala in face–emotion processing, particularly for fearful facial expressions. Related findings suggest that face–emotion processing engages the amygdala within an interconnected circuitry that can be studied using a functional-connectivity approach. Past work also underscores important functional changes in the amygdala during development. Taken together, prior research on amygdala function and development reveals a need for more work examining dev...

  20. Automated Facial Action Coding System for dynamic analysis of facial expressions in neuropsychiatric disorders.

    Science.gov (United States)

    Hamm, Jihun; Kohler, Christian G; Gur, Ruben C; Verma, Ragini

    2011-09-15

    Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen's Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on the categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions and is therefore potentially more suitable for analyzing small differences in facial affect. However, FACS rating requires extensive training, and is time-consuming, subjective, and thus prone to bias. To overcome these limitations, we developed an automated FACS based on advanced computer vision techniques. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute frequencies of single and combined Action Units (AUs) in videos, and they can facilitate statistical studies of large populations in disorders known to impact facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from the temporal AU profiles. The applicability of the automated FACS was illustrated in a pilot study applying it to videos from eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between patients and controls, highlighting their potential for automatic and objective quantification of symptom severity.

  1. Recognition of facial expressions of different emotional intensities in patients with frontotemporal lobar degeneration

    NARCIS (Netherlands)

    Kessels, Roy P. C.; Gerritsen, Lotte; Montagne, Barbara; Ackl, Nibal; Diehl, Janine; Danek, Adrian

    2007-01-01

    Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD). Also, FTLD patients show impairments in emotion processing. Specifically, the perception of negative emotional facial expressions is affected. Generally, however, negative emotional expressions are regarded as more d

  2. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    Science.gov (United States)

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  4. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    Science.gov (United States)

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions.

  5. Face in profile view reduces perceived facial expression intensity: an eye-tracking study.

    Science.gov (United States)

    Guo, Kun; Shaw, Heather

    2015-02-01

    Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoints. Here we manipulated orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although quantitatively viewpoint had expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that the viewpoint-invariant facial expression processing is categorical perception, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues.

  6. Facial Edema Evaluation Using Digital Image Processing

    Directory of Open Access Journals (Sweden)

    A. E. Villafuerte-Nuñez

    2013-01-01

    Full Text Available The main objective of facial edema evaluation is to provide the information needed to determine the effectiveness of anti-inflammatory drugs in development. This paper presents a system that measures the four main variables present in facial edemas: trismus, blush (coloration), temperature, and inflammation. Measurements are obtained by using image processing and a combination of devices such as a projector, a PC, a digital camera, a thermographic camera, and a cephalostat. Data analysis and processing are performed using MATLAB. Facial inflammation is measured by comparing three-dimensional reconstructions of inflammatory variations using the fringe projection technique. Trismus is measured by converting pixels to centimeters in a digitally obtained image of an open mouth. Blushing changes are measured by obtaining and comparing the RGB histograms of facial edema images taken at different times. Finally, temperature changes are measured using a thermographic camera. Some tests using controlled measurements of every variable are presented in this paper. The results allow the measurement system to be evaluated before its use in a real test using the pain model approved by the US Food and Drug Administration (FDA), which consists of extracting the third molar to generate facial edema.
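
    The blush measurement lends itself to a compact illustration. The sketch below is a hypothetical numpy analogue of the histogram comparison (the paper works in MATLAB; the images here are random placeholders), comparing per-channel RGB histograms of the same face photographed at two times:

        import numpy as np

        def rgb_hist(img, bins=32):
            """Normalized per-channel histograms of an (H, W, 3) uint8 image."""
            return np.concatenate([
                np.histogram(img[..., c], bins=bins, range=(0, 256), density=True)[0]
                for c in range(3)
            ])

        rng = np.random.default_rng(2)
        before = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)  # placeholder images
        after = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
        blush_change = np.abs(rgb_hist(before) - rgb_hist(after)).sum()  # L1 distance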

  7. Scattered Data Processing Approach Based on Optical Facial Motion Capture

    Directory of Open Access Journals (Sweden)

    Qiang Zhang

    2013-01-01

    Full Text Available In recent years, animation reconstruction of facial expressions has become a popular research field in computer science, and motion capture-based facial expression reconstruction is now emerging in this field. Based on facial motion data obtained using a passive optical motion capture system, we propose a scattered data processing approach that aims to solve the common problems of missing data and noise. To recover missing data, given the nonlinear relationships between the current missing marker and its neighbors, we propose an improved version of a previous method in which we use the motion of three muscles rather than one to recover the missing data. To reduce the noise, we first apply preprocessing to eliminate impulsive noise, before our proposed third-order quasi-uniform B-spline-based fitting method is used to reduce the remaining noise. Our experiments showed that the principles underlying this method are simple and straightforward, and that it delivered acceptable precision during reconstruction.
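
    The denoising step can be illustrated with a generic smoothing spline. This sketch stands in for the paper's third-order quasi-uniform B-spline fit (a single marker coordinate over time, with synthetic noise; the smoothing factor `s` is an assumed knob, not the authors' parameter):

        import numpy as np
        from scipy.interpolate import splrep, splev

        t = np.linspace(0, 1, 200)
        rng = np.random.default_rng(3)
        clean = np.sin(2 * np.pi * t)                # stand-in marker trajectory (one axis)
        noisy = clean + rng.normal(0, 0.05, t.size)  # residual noise after impulse removal

        tck = splrep(t, noisy, k=3, s=0.5)  # cubic B-spline; s trades fidelity for smoothness
        smoothed = splev(t, tck)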

  8. Adults' responsiveness to children's facial expressions.

    Science.gov (United States)

    Aradhye, Chinmay; Vonk, Jennifer; Arida, Danielle

    2015-07-01

    We investigated the effect of young children's (hereafter children's) facial expressions on adult responsiveness. In Study 1, 131 undergraduate students from a midsized university in the midwestern United States rated children's images and videos with smiling, crying, or neutral expressions on cuteness, likelihood to adopt, and participants' experienced distress. Looking times at images and videos along with perception of cuteness, likelihood to adopt, and experienced distress using 10-point Likert scales were measured. Videos of smiling children were rated as cuter and more likely to be adopted and were viewed for longer times compared with videos of crying children, which evoked more distress. In Study 2, we recorded responses from 101 of the same participants in an online survey measuring gender role identity, empathy, and perspective taking. Higher levels of femininity (as measured by Bem's Sex Role Inventory) predicted higher "likely to adopt" ratings for crying images. These findings indicate that adult perception of children and motivation to nurture are affected by both children's facial expressions and adult characteristics and build on existing literature to demonstrate that children may use expressions to manipulate the motivations of even non-kin adults to direct attention toward and perhaps nurture young children.

  9. Intensity dependence in high-level facial expression adaptation aftereffect.

    Science.gov (United States)

    Hong, Sang Wook; Yoon, K Lira

    2017-06-14

    Perception of a facial expression can be altered or biased by a prolonged viewing of other facial expressions, known as the facial expression adaptation aftereffect (FEAA). Recent studies using antiexpressions have demonstrated a monotonic relation between the magnitude of the FEAA and adaptor extremity, suggesting that facial expressions are opponent coded and represented continuously from one expression to its antiexpression. However, it is unclear whether the opponent-coding scheme can account for the FEAA between two facial expressions. In the current study, we demonstrated that the magnitude of the FEAA between two facial expressions increased monotonically as a function of the intensity of adapting facial expressions, consistent with the predictions based on the opponent-coding model. Further, the monotonic increase in the FEAA occurred even when the intensity of an adapting face was too weak for its expression to be recognized. These results together suggest that multiple facial expressions are encoded and represented by balanced activity of neural populations tuned to different facial expressions.

  10. Mirroring Facial Expressions and Emotions in Dyadic Conversations

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2016-01-01

    This paper presents an investigation of mirroring facial expressions and the emotions which they convey in dyadic naturally occurring first encounters. Mirroring facial expressions are a common phenomenon in face-to-face interactions, and they are due to the mirror neuron system which has been...... and overlapping facial expressions are very frequent. In this study, we want to determine whether the overlapping facial expressions are mirrored or are otherwise correlated in the encounters, and to what extent mirroring facial expressions convey the same emotion. The results of our study show that the majority...... of smiles and laughs, and one fifth of the occurrences of raised eyebrows are mirrored in the data. Moreover some facial traits in co-occurring expressions co-occur more often than it would be expected by chance. Finally, amusement, and to a lesser extent friendliness, are often emotions shared by both...

  11. Mu desynchronization during observation and execution of facial expressions in 30-month-old children

    Directory of Open Access Journals (Sweden)

    Holly Rayson

    2016-06-01

    Full Text Available Simulation theories propose that observing another’s facial expression activates sensorimotor representations involved in the execution of that expression, facilitating recognition processes. The mirror neuron system (MNS) is a potential mechanism underlying the simulation of facial expressions, with similar neural processes activated both during observation and performance. Research with monkeys and adult humans supports this proposal, but so far there have been no investigations of facial MNS activity early in human development. The current study used electroencephalography (EEG) to explore mu rhythm desynchronization, an index of MNS activity, in 30-month-old children as they observed videos of dynamic emotional and non-emotional facial expressions, as well as scrambled versions of the same videos. We found significant mu desynchronization in central regions during observation and execution of both emotional and non-emotional facial expressions, which was right-lateralized for emotional and bilateral for non-emotional expressions during observation. These findings support previous research suggesting movement simulation during the observation of facial expressions, and are the first to provide evidence for sensorimotor activation during the observation of facial expressions, consistent with a functioning facial MNS at an early stage of human development.
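
    Mu desynchronization is conventionally quantified as a drop in mu-band spectral power during observation or execution relative to baseline. The sketch below is a minimal analogue on synthetic signals (the 8-13 Hz band is the canonical adult range; developmental studies often use a lower band, and the sampling rate here is assumed):

        import numpy as np
        from scipy.signal import welch

        fs = 250  # assumed sampling rate (Hz)
        rng = np.random.default_rng(4)
        baseline = rng.normal(size=fs * 2)  # stand-in central-electrode epochs
        observe = rng.normal(size=fs * 2)

        def mu_power(x):
            f, pxx = welch(x, fs=fs, nperseg=fs)
            band = (f >= 8) & (f <= 13)     # canonical adult mu band
            return pxx[band].mean()

        # Negative values indicate desynchronization relative to baseline.
        desync = np.log(mu_power(observe) / mu_power(baseline))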

  12. Facial expression influences face identity recognition during the attentional blink.

    Science.gov (United States)

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task-relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another.

  13. Facial Expression Generation from Speaker's Emotional States in Daily Conversation

    Science.gov (United States)

    Mori, Hiroki; Ohshima, Koh

    A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former is represented by vectors with psychologically defined abstract dimensions, and the latter is coded by the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained with the data. The effectiveness of the proposed method is verified by a subjective evaluation test. As a result, the Mean Opinion Score with respect to the suitability of the generated facial expressions was 3.86 for the speaker, close to that of hand-made facial expressions.
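
    The mapping this framework learns, from emotion-dimension vectors to facial action codes, can be sketched with any small regressor. The code below is a hypothetical analogue using scikit-learn (random placeholder ratings and AU intensities, not the speaker data described in the abstract):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(5)
        emotion = rng.uniform(-1, 1, (300, 2))  # stand-in (valence, arousal) ratings
        aus = rng.uniform(0, 1, (300, 10))      # stand-in intensities of 10 action units

        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
        net.fit(emotion, aus)                   # learn the emotion-to-AU mapping
        predicted = net.predict([[0.8, 0.3]])   # AUs for a positive, mildly aroused state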

  14. Impaired Overt Facial Mimicry in Response to Dynamic Facial Expressions in High-Functioning Autism Spectrum Disorders

    Science.gov (United States)

    Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi

    2015-01-01

    Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…

  15. Modulation of perception and brain activity by predictable trajectories of facial expressions.

    Science.gov (United States)

    Furl, N; van Rijsbergen, N J; Kiebel, S J; Friston, K J; Treves, A; Dolan, R J

    2010-03-01

    People track facial expression dynamics with ease to accurately perceive distinct emotions. Although the superior temporal sulcus (STS) appears to possess mechanisms for perceiving changeable facial attributes such as expressions, the nature of the underlying neural computations is not known. Motivated by novel theoretical accounts, we hypothesized that visual and motor areas represent expressions as anticipated motion trajectories. Using magnetoencephalography, we show predictable transitions between fearful and neutral expressions (compared with scrambled and static presentations) heighten activity in visual cortex as quickly as 165 ms poststimulus onset and later (237 ms) engage fusiform gyrus, STS and premotor areas. Consistent with proposed models of biological motion representation, we suggest that visual areas predictively represent coherent facial trajectories. We show that such representations bias emotion perception of subsequent static faces, suggesting that facial movements elicit predictions that bias perception. Our findings reveal critical processes evoked in the perception of dynamic stimuli such as facial expressions, which can endow perception with temporal continuity.

  16. Real Time Facial Expression Recognition Using a Novel Method

    Directory of Open Access Journals (Sweden)

    Saumil Srivastava

    2012-04-01

    Full Text Available This paper discusses a novel method for a facial expression recognition system which performs facial expression analysis in near real time from a live webcam feed. The primary objectives were to obtain results in near real time in a light-invariant, person-independent, and pose-invariant way. The system is composed of two different entities: a trainer and an evaluator. Each frame of the video feed is passed through a series of steps including Haar classifiers, skin detection, feature extraction, feature point tracking, and a learned support vector machine model to classify emotions, achieving a trade-off between accuracy and result rate. A processing time of 100-120 ms per 10 frames was achieved with an accuracy of around 60%. We measure our accuracy in terms of a variety of interaction and classification scenarios. We conclude by discussing the relevance of our work to human-computer interaction and exploring further measures that can be taken.
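
    The first stage of the pipeline, per-frame face detection with Haar classifiers, looks roughly like the OpenCV sketch below (camera index and detector parameters are assumptions, not the paper's settings):

        import cv2

        # Stock frontal-face cascade shipped with OpenCV.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        cap = cv2.VideoCapture(0)  # live webcam feed, as in the described setup
        ok, frame = cap.read()
        if ok:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            # Each (x, y, w, h) box would next pass to skin detection and feature extraction.
        cap.release()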

  17. Shadows alter facial expressions of Noh masks.

    Science.gov (United States)

    Kawai, Nobuyuki; Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo

    2013-01-01

    A Noh mask, worn by expert actors during performance on the Japanese traditional Noh drama, conveys various emotional expressions despite its fixed physical properties. How does the mask change its expressions? Shadows change subtly during the actual Noh drama, which plays a key role in creating elusive artistic enchantment. We here describe evidence from two experiments regarding how attached shadows of the Noh masks influence the observers' recognition of the emotional expressions. In Experiment 1, neutral-faced Noh masks having the attached shadows of the happy/sad masks were recognized as bearing happy/sad expressions, respectively. This was true for all four types of masks each of which represented a character differing in sex and age, even though the original characteristics of the masks also greatly influenced the evaluation of emotions. Experiment 2 further revealed that frontal Noh mask images having shadows of upward/downward tilted masks were evaluated as sad/happy, respectively. This was consistent with outcomes from preceding studies using actually tilted Noh mask images. Results from the two experiments concur that purely manipulating attached shadows of the different types of Noh masks significantly alters the emotion recognition. These findings go in line with the mysterious facial expressions observed in Western paintings, such as the elusive qualities of Mona Lisa's smile. They also agree with the aesthetic principle of Japanese traditional art "yugen (profound grace and subtlety)", which highly appreciates subtle emotional expressions in the darkness.

  18. Shadows alter facial expressions of Noh masks.

    Directory of Open Access Journals (Sweden)

    Nobuyuki Kawai

    Full Text Available BACKGROUND: A Noh mask, worn by expert actors during performance on the Japanese traditional Noh drama, conveys various emotional expressions despite its fixed physical properties. How does the mask change its expressions? Shadows change subtly during the actual Noh drama, which plays a key role in creating elusive artistic enchantment. We here describe evidence from two experiments regarding how attached shadows of the Noh masks influence the observers' recognition of the emotional expressions. METHODOLOGY/PRINCIPAL FINDINGS: In Experiment 1, neutral-faced Noh masks having the attached shadows of the happy/sad masks were recognized as bearing happy/sad expressions, respectively. This was true for all four types of masks each of which represented a character differing in sex and age, even though the original characteristics of the masks also greatly influenced the evaluation of emotions. Experiment 2 further revealed that frontal Noh mask images having shadows of upward/downward tilted masks were evaluated as sad/happy, respectively. This was consistent with outcomes from preceding studies using actually tilted Noh mask images. CONCLUSIONS/SIGNIFICANCE: Results from the two experiments concur that purely manipulating attached shadows of the different types of Noh masks significantly alters the emotion recognition. These findings go in line with the mysterious facial expressions observed in Western paintings, such as the elusive qualities of Mona Lisa's smile. They also agree with the aesthetic principle of Japanese traditional art "yugen" (profound grace and subtlety), which highly appreciates subtle emotional expressions in the darkness.

  19. Altered Kinematics of Facial Emotion Expression and Emotion Recognition Deficits Are Unrelated in Parkinson's Disease.

    Science.gov (United States)

    Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo

    2016-01-01

    Altered emotional processing, including reduced emotion facial expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques. It is not known whether altered facial expression and recognition in PD are related. To investigate possible deficits in facial emotion expression and emotion recognition and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Patients also had a lower Ekman global score and lower disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.05), and the kinematic abnormalities and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). The results of this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.

  20. Facial Expression Recognition in Nonvisual Imagery

    Science.gov (United States)

    Olague, Gustavo; Hammoud, Riad; Trujillo, Leonardo; Hernández, Benjamín; Romero, Eva

    This chapter presents two novel approaches that allow computer vision applications to perform human facial expression recognition (FER). From a problem standpoint, we focus on FER beyond the human visual spectrum, in long-wave infrared imagery, thus allowing us to offer illumination-independent solutions to this important human-computer interaction problem. From a methodological standpoint, we introduce two different feature extraction techniques: a principal component analysis-based approach with automatic feature selection, and one based on texture information selected by an evolutionary algorithm. In the former, facial features are selected based on interest point clusters, and classification is carried out using eigenfeature information; in the latter, an evolutionary learning algorithm searches for optimal regions of interest and texture features based on classification accuracy. Both approaches use a support vector machine committee for classification. Results show effective performance for both techniques, from which we conclude that thermal imagery contains worthwhile information for the FER problem beyond the human visual spectrum.

  1. Faces and bodies: perception and mimicry of emotionally congruent and incongruent facial and bodily expressions

    Directory of Open Access Journals (Sweden)

    Mariska eKret

    2013-02-01

    Full Text Available Traditional emotion theories stress the importance of the face in the expression of emotions, but bodily expressions are becoming increasingly important. Here we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals, and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and from emotionally congruent and incongruent face-body compounds. Participants’ fixations were measured and their pupil size recorded with eye-tracking equipment, and their facial reactions measured with electromyography (EMG). The behavioral results support our prediction that the recognition of a facial expression is improved in the context of a matching posture and, importantly, also vice versa. From their facial expressions, it appeared that observers reacted with signs of negative emotionality (increased corrugator activity) to angry and fearful facial expressions and with positive emotionality (increased zygomaticus activity) to happy facial expressions. As we predicted and found, angry and fearful cues from the face or the body attracted more attention than happy cues. We further observed that responses evoked by angry cues were amplified in individuals with high anxiety scores. In sum, we show that people process bodily expressions of emotion in a similar fashion to facial expressions and that congruency between the emotional signals from the face and body improves recognition of the emotion.

  2. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    Science.gov (United States)

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been documented (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al., 2009, 2011; Breazeal, 2003); however, little research has compared in depth younger and older adults’ ability to label a virtual agent’s facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand whether those age-related differences are influenced by the intensity of the emotion, the dynamic formation of the emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character, differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition of expressions displayed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a

  3. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2-D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the expressions can be performed using a state-of-the-
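
    The core of such a scheme, regressing frontal-view landmark coordinates from non-frontal ones, can be sketched as a multi-output regression (the linear mapper and random point sets below are illustrative stand-ins, not the authors' model):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(6)
        profile_pts = rng.random((500, 40))  # stand-in: 20 (x, y) landmarks, non-frontal view
        frontal_pts = rng.random((500, 40))  # the same landmarks seen frontally

        mapper = LinearRegression().fit(profile_pts, frontal_pts)
        frontalized = mapper.predict(profile_pts[:1])  # then classify in the frontal view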

  5. Modulation of incentivized dishonesty by disgust facial expressions

    Directory of Open Access Journals (Sweden)

    Julian eLim

    2015-07-01

    Full Text Available Disgust modulates moral decisions involving harming others. We recently specified that this effect is bi-directionally modulated by individual sensitivity to disgust. Here, we show that this effect generalizes to the moral domain of honesty and extends to outcomes with real-world impact. We employed a dice-rolling task in which participants were incentivized to dishonestly report outcomes to increase their potential final monetary payoff. Disgust or control facial expressions were presented subliminally on each trial. Our results reveal that the disgust facial expressions altered honest reporting as a bi-directional function moderated by individual sensitivity. Combining these data with those from prior experiments revealed that the effect of disgust presentation on both harm judgments and honesty could be accounted for by the same bidirectional function, with no significant effect of domain. This clearly demonstrates that disgust facial expressions produce the same modulation of moral judgments across different moral foundations (harm and honesty. Our results suggest strong overlap in the cognitive/neural processes of moral judgments across moral foundations, and provide a framework for further studies to specify the integration of emotional information in moral decision making.

  6. Serotonin transporter gene-linked polymorphism affects detection of facial expressions.

    Directory of Open Access Journals (Sweden)

    Ai Koizumi

    Full Text Available Previous studies have demonstrated that the serotonin transporter gene-linked polymorphic region (5-HTTLPR) affects the recognition of facial expressions and attention to them. However, the relationship between 5-HTTLPR and the perceptual detection of others' facial expressions, the process which takes place prior to emotional labeling (i.e., recognition), is not clear. To examine whether the perceptual detection of emotional facial expressions is influenced by the allelic variation (short/long) of 5-HTTLPR, happy and sad facial expressions were presented at weak and mid intensities (25% and 50%). Ninety-eight participants, genotyped for 5-HTTLPR, judged whether emotion was present in images of faces. Participants with short alleles showed higher sensitivity (d′) to happy than to sad expressions, while participants with long allele(s) showed no such positivity advantage. This effect of 5-HTTLPR was found at different facial expression intensities among males and females. The results suggest that at the perceptual stage, a short allele enhances the processing of positive facial expressions rather than that of negative facial expressions.

  7. Extraction of Eyes for Facial Expression Identification of Students

    Directory of Open Access Journals (Sweden)

    G. Sofia

    2010-07-01

    Full Text Available Facial expressions play an essential role in communication during social interaction, delivering rich information about people's emotions. Facial expression analysis has a wide range of applications in areas such as psychology, animation, interactive games, image retrieval and image understanding. Selecting relevant features and ignoring unimportant ones is a key step in a facial expression recognition system. Here, we propose an efficient method for identifying the expressions of students, to recognize their comprehension from facial expressions in static images containing the frontal view of the human face. Our goal is to categorize the facial expressions of students in a given image into two basic emotional expression states: comprehensible and incomprehensible. One of the key action units exposing expression in the face is the eye. In this paper, facial expressions are identified from the expressions of the eyes. Our method consists of three steps: edge detection, eye extraction and emotion recognition. Edge detection is performed with the Prewitt operator. Extraction of the eyes is performed using an iterative search algorithm on the edge image. All the extracted information is combined to form the feature vector. Finally, the features are given as input to a BPN classifier, and thus the facial expressions are identified. The proposed method is tested on the Yale Face database.
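
    A rough sketch of the first two steps follows, with SciPy's Prewitt filter for edge detection and a simple row-projection heuristic standing in for the paper's iterative eye search; the image size and threshold are our assumptions.

```python
import numpy as np
from scipy import ndimage

def eye_feature_vector(face_gray):
    """Prewitt edges -> eye-band extraction -> feature vector (sketch)."""
    gx = ndimage.prewitt(face_gray.astype(float), axis=1)
    gy = ndimage.prewitt(face_gray.astype(float), axis=0)
    magnitude = np.hypot(gx, gy)
    edges = magnitude > 1.5 * magnitude.mean()      # binary edge map

    upper = edges[: face_gray.shape[0] // 2]        # eyes lie in the upper half
    eye_row = int(upper.sum(axis=1).argmax())       # densest edge row ~ eye line
    band = edges[max(0, eye_row - 10): eye_row + 10]
    return band.mean(axis=0)                        # per-column edge density

features = eye_feature_vector(np.random.rand(128, 128))  # stand-in image
print(features.shape)  # (128,) -- this vector would feed the BPN classifier
```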

  8. Meta-Analysis of the First Facial Expression Recognition Challenge.

    Science.gov (United States)

    Valstar, M F; Mehu, M; Bihan Jiang; Pantic, M; Scherer, K

    2012-08-01

    Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability have received some attention; for instance, there exist a number of commonly used facial expression databases. However, lack of a commonly accepted evaluation protocol and, typically, lack of sufficient details needed to reproduce the reported individual results make it difficult to compare systems. This, in turn, hinders the progress of the field. A periodical challenge in facial expression recognition would allow such a comparison on a level playing field. It would provide an insight on how far the field has come and would allow researchers to identify new goals, challenges, and targets. This paper presents a meta-analysis of the first such challenge in automatic recognition of facial expressions, held during the IEEE conference on Face and Gesture Recognition 2011. It details the challenge data, evaluation protocol, and the results attained in two subchallenges: AU detection and classification of facial expression imagery in terms of a number of discrete emotion categories. We also summarize the lessons learned and reflect on the future of the field of facial expression recognition in general and on possible future challenges in particular.

  9. Facial Expression Recognition Using Stationary Wavelet Transform Features

    Directory of Open Access Journals (Sweden)

    Huma Qayyum

    2017-01-01

    Full Text Available Humans use facial expressions to convey personal feelings. Facial expressions need to be automatically recognized to design control and interactive applications. Accurate feature extraction is one of the key steps in an automatic facial expression recognition system. Current frequency-domain facial expression recognition systems have not fully utilized the facial elements and muscle movements for recognition. In this paper, the stationary wavelet transform is used to extract features for facial expression recognition due to its good localization characteristics in both the spectral and spatial domains. More specifically, a combination of the horizontal and vertical subbands of the stationary wavelet transform is used, as these subbands contain muscle movement information for the majority of facial expressions. Feature dimensionality is further reduced by applying the discrete cosine transform to these subbands. The selected features are then passed to a feed-forward neural network trained with the back-propagation algorithm. Average recognition rates of 98.83% and 96.61% are achieved for the JAFFE and CK+ datasets, respectively, and an accuracy of 94.28% is achieved on a locally recorded MS-Kinect dataset. The proposed technique is very promising for facial expression recognition when compared to other state-of-the-art techniques.
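
    A compact sketch of the feature pipeline, assuming PyWavelets' swt2 and SciPy's DCT; the wavelet, decomposition level, and number of retained coefficients are illustrative choices rather than the paper's exact settings.

```python
import numpy as np
import pywt
from scipy.fft import dctn

def swt_features(face_gray, wavelet="haar"):
    """Horizontal + vertical SWT subbands compressed by a 2-D DCT (sketch)."""
    coeffs = pywt.swt2(face_gray.astype(float), wavelet, level=1)
    _, (cH, cV, _) = coeffs[0]          # horizontal and vertical detail subbands
    feats = []
    for band in (cH, cV):
        d = dctn(band, norm="ortho")
        feats.append(d[:8, :8].ravel()) # keep low-frequency DCT coefficients
    return np.concatenate(feats)

x = swt_features(np.random.rand(128, 128))   # image sides must be even for SWT
print(x.shape)  # (128,) -- input to a feed-forward neural network
```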

  10. Dielectric elastomer actuators for facial expression

    Science.gov (United States)

    Wang, Yuzhe; Zhu, Jian

    2016-04-01

    Dielectric elastomer actuators have the advantage of mimicking the salient feature of life: movements in response to stimuli. In this paper we explore application of dielectric elastomer actuators to artificial muscles. These artificial muscles can mimic natural masseter to control jaw movements, which are key components in facial expressions especially during talking and singing activities. This paper investigates optimal design of the dielectric elastomer actuator. It is found that the actuator with embedded plastic fibers can avert electromechanical instability and can greatly improve its actuation. Two actuators are then installed in a robotic skull to drive jaw movements, mimicking the masseters in a human jaw. Experiments show that the maximum vertical displacement of the robotic jaw, driven by artificial muscles, is comparable to that of the natural human jaw during speech activities. Theoretical simulations are conducted to analyze the performance of the actuator, which is quantitatively consistent with the experimental observations.

  11. The Not Face: A grammaticalization of facial expressions of emotion

    Science.gov (United States)

    Benitez-Quiroz, C. Fabian; Wilbur, Ronnie B.; Martinez, Aleix M.

    2016-01-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3–8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. PMID:26872248

  12. Automatic facial feature extraction and expression recognition based on neural network

    CERN Document Server

    Khandait, S P; Khandait, P D

    2012-01-01

    In this paper, an approach to automatic facial feature extraction from a still frontal posed image, and to classification and recognition of the facial expression and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier to assign the expression of the supplied face to one of seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. Morphological image processing operations are used for face segmentation and localization. Permanent facial features such as the eyebrows, eyes, mouth and nose are extracted using the SUSAN edge detection operator, facial geometry, and edge projection analysis. Experiments carried out on the JAFFE facial expression database give good performance: 100% accuracy on the training set and 95.26% accuracy on the test set.
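
    SUSAN is not available in common Python libraries, so the sketch below only illustrates the edge projection analysis step on an assumed binary edge map: horizontal feature bands such as the eyes and mouth appear as peaks in the row-wise projection.

```python
import numpy as np

def locate_feature_bands(edge_map, n_bands=3):
    """Sum a binary edge map along each row; the rows with the most edge
    pixels approximate the vertical positions of facial feature bands."""
    row_projection = edge_map.sum(axis=1)
    return np.argsort(row_projection)[-n_bands:]

edges = np.zeros((100, 100), dtype=int)
edges[30, 20:80] = 1   # toy eye line
edges[70, 35:65] = 1   # toy mouth line
print(locate_feature_bands(edges, n_bands=2))  # rows 70 and 30
```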

  13. Shadows Alter Facial Expressions of Noh Masks

    Science.gov (United States)

    Kawai, Nobuyuki; Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo

    2013-01-01

    Background: A Noh mask, worn by expert actors during performances of the traditional Japanese Noh drama, conveys various emotional expressions despite its fixed physical properties. How does the mask change its expressions? Shadows change subtly during the actual Noh drama, and this plays a key role in creating an elusive artistic enchantment. Here we describe evidence from two experiments regarding how the attached shadows of Noh masks influence observers' recognition of emotional expressions. Methodology/Principal Findings: In Experiment 1, neutral-faced Noh masks bearing the attached shadows of the happy/sad masks were recognized as having happy/sad expressions, respectively. This was true for all four types of masks, each of which represented a character differing in sex and age, even though the original characteristics of the masks also greatly influenced the evaluation of emotions. Experiment 2 further revealed that frontal Noh mask images with the shadows of upward/downward tilted masks were evaluated as sad/happy, respectively. This is consistent with outcomes from preceding studies using actually tilted Noh mask images. Conclusions/Significance: Results from the two experiments concur: purely manipulating the attached shadows of different types of Noh masks significantly alters emotion recognition. These findings are in line with the mysterious facial expressions observed in Western paintings, such as the elusive qualities of Mona Lisa's smile. They also agree with the aesthetic principle of Japanese traditional art, "yugen" (profound grace and subtlety), which highly appreciates subtle emotional expressions in the darkness. PMID:23940748

  14. Colour Perception on Facial Expression towards Emotion

    Directory of Open Access Journals (Sweden)

    Rubita Sudirman

    2012-12-01

    Full Text Available This study investigates human perceptions of pairings between facial expressions of emotion and colours. A group of 27 subjects, mainly young Malaysians, participated in the study. For each of seven faces expressing the basic emotions (neutral, happiness, surprise, anger, disgust, fear and sadness), a single colour was chosen from eight basic colours as the best visual match to the face. The different emotions appear well characterized by a single colour. The analysis draws on both psychology and colour engineering. The seven emotions were matched by the subjects according to their perceptions and feelings. Then, the data of 12 males and 12 females were randomly chosen from the previous data to compare colour perception between genders. The success of the test depends on the subjects being able to propose a single colour for each expression. The results are reported as counts and percentages, as a guide for colour designers and for the field of psychology.

  15. [Association between intelligence development and facial expression recognition ability in children with autism spectrum disorder].

    Science.gov (United States)

    Pan, Ning; Wu, Gui-Hua; Zhang, Ling; Zhao, Ya-Fen; Guan, Han; Xu, Cai-Juan; Jing, Jin; Jin, Yu

    2017-03-01

    To investigate the features of intelligence development and facial expression recognition ability, and the association between them, in children with autism spectrum disorder (ASD). A total of 27 ASD children aged 6-16 years (ASD group, full intelligence quotient >70) and age- and gender-matched normally developed children (control group) were enrolled. The Wechsler Intelligence Scale for Children, Fourth Edition, and Chinese Static Facial Expression Photos were used for intelligence evaluation and the facial expression recognition test. Compared with the control group, the ASD group had significantly lower scores for full intelligence quotient, verbal comprehension index, perceptual reasoning index (PRI), processing speed index (PSI), and working memory index (WMI) (P<0.05). ASD children have delayed intelligence development compared with normally developed children and impaired expression recognition ability. Perceptual reasoning and working memory abilities are positively correlated with expression recognition ability, which suggests that insufficient perceptual reasoning and working memory abilities may be important factors affecting facial expression recognition ability in ASD children.

  16. Facial expression recognition based on improved deep belief networks

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. The method uses LBP to extract features and then uses the improved deep belief networks as the detector and classifier of these LBP features, realizing the combination of LBP and improved DBNs for facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate is improved significantly.
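
    A minimal sketch of the LBP stage using scikit-image; since scikit-learn provides no deep belief network, an MLP classifier stands in here for the paper's improved DBN.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier  # stand-in for the DBN

def lbp_histogram(face_gray, P=8, R=1):
    """Uniform LBP code histogram of a face image (the first stage)."""
    codes = local_binary_pattern(face_gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Toy data: 20 random "faces" with two expression labels.
faces = (np.random.rand(20, 64, 64) * 255).astype(np.uint8)
X = np.array([lbp_histogram(f) for f in faces])
y = np.repeat([0, 1], 10)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000).fit(X, y)
print(clf.predict(X[:3]))
```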

  17. Cognitive tasks during expectation affect the congruency ERP effects to facial expressions.

    Science.gov (United States)

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Expectancy congruency has been shown to modulate event-related potentials (ERPs) to emotional stimuli, such as facial expressions. However, it is unknown whether the congruency ERP effects to facial expressions can be modulated by cognitive manipulations during stimulus expectation. To this end, electroencephalography (EEG) was recorded while participants viewed (neutral and fearful) facial expressions. Each trial started with a cue, predicting a facial expression, followed by an expectancy interval without any cues and subsequently the face. In half of the trials, participants had to solve a cognitive task in which different letters were presented for target letter detection during the expectancy interval. Furthermore, facial expressions were congruent with the cues in 75% of all trials. ERP results revealed that for fearful faces, the cognitive task during expectation altered the congruency effect in N170 amplitude; congruent compared to incongruent fearful faces evoked larger N170 in the non-task condition but the congruency effect was not evident in the task condition. Regardless of facial expression, the congruency effect was generally altered by the cognitive task during expectation in P3 amplitude; the amplitudes were larger for incongruent compared to congruent faces in the non-task condition but the congruency effect was not shown in the task condition. The findings indicate that cognitive tasks during expectation reduce the processing of expectation and subsequently, alter congruency ERP effects to facial expressions.

  19. Processing method of facial expression images under partial occlusion

    Institute of Scientific and Technical Information of China (English)

    张建明; 张晓翠

    2011-01-01


  20. Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Hameed Siddiqi

    2013-12-01

    Full Text Available Over the last decade, human facial expression recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; the need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish them with high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expression recognition (HL-FER) system to tackle these problems. Unlike previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a hierarchical recognition scheme to overcome the problem of high similarity among different expressions. Unlike most previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation across datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. A weighted average recognition accuracy of 98.7% across the three datasets, using three classifiers, indicates the success of employing the HL-FER for human FER.
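
    The hierarchical idea can be sketched as two LDA stages: a coarse expression group is decided first, then the expression is refined within that group. The grouping and toy data below are our assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))                 # toy feature vectors
fine = rng.integers(0, 6, size=300)            # six expression labels
group = fine // 2                              # three coarse groups of two

stage1 = LDA().fit(X, group)                   # coarse classifier
stage2 = {g: LDA().fit(X[group == g], fine[group == g])
          for g in np.unique(group)}           # one refiner per group

def predict(x):
    g = stage1.predict(x.reshape(1, -1))[0]            # coarse decision first
    return stage2[g].predict(x.reshape(1, -1))[0]      # refine within group

print(predict(X[0]))
```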

  1. Understanding facial expressions of pain in patients with depression

    NARCIS (Netherlands)

    Lautenbacher, Stefan; Bär, Karl-Juergen; Eisold, Patricia; Kunz, Miriam

    2016-01-01

    Although depression is associated with more clinical pain complaints, psychophysical data sometimes point to hypoalgesic alterations. Studying the more reflex-like facial expression of pain in patients with depression may offer a new perspective. Facial and psychophysical responses to non-painful an

  2. Exogenous cortisol shifts a motivated bias from fear to anger in spatial working memory for facial expressions.

    NARCIS (Netherlands)

    Putman, P.; Hermans, E.J.; Honk, J. van

    2007-01-01

    Studies assessing processing of facial expressions have established that cortisol levels, emotional traits, and affective disorders predict selective responding to these motivationally relevant stimuli in expression specific manners. For instance, increased attentional processing of fearful faces

  3. Frame-Based Facial Expression Recognition Using Geometrical Features

    Directory of Open Access Journals (Sweden)

    Anwar Saeed

    2014-01-01

    Full Text Available To make human-computer interaction (HCI) as good as human-human interaction, an efficient approach to human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; this knowledge is gained through human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we have found that using eight facial points we can achieve a state-of-the-art recognition rate. However, the previous state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
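
    A sketch of one family of frame-based geometric features under our own assumptions: pairwise distances between a small landmark set, normalised by inter-ocular distance so that no person-specific neutral frame is required.

```python
import numpy as np
from itertools import combinations

def geometric_features(points):
    """Scale-invariant pairwise distances between facial points.
    `points` is (8, 2); points 0 and 1 are assumed to be the eye centres."""
    points = np.asarray(points, dtype=float)
    iod = np.linalg.norm(points[0] - points[1])     # inter-ocular distance
    return np.array([np.linalg.norm(points[i] - points[j]) / iod
                     for i, j in combinations(range(len(points)), 2)])

pts = np.random.rand(8, 2)                          # stand-in landmark set
print(geometric_features(pts).shape)                # (28,) features per frame
```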

  4. Processing of Facial Emotion in Bipolar Depression and Euthymia.

    Science.gov (United States)

    Robinson, Lucy J; Gray, John M; Burt, Mike; Ferrier, I Nicol; Gallagher, Peter

    2015-10-01

    Previous studies of facial emotion processing in bipolar disorder (BD) have reported conflicting findings. In independently conducted studies, we investigate facial emotion labeling in euthymic and depressed BD patients using tasks with static and dynamically morphed images of different emotions displayed at different intensities. Study 1 included 38 euthymic BD patients and 28 controls. Participants completed two tasks: labeling of static images of basic facial emotions (anger, disgust, fear, happy, sad) shown at different expression intensities; the Eyes Test (Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001), which involves recognition of complex emotions using only the eye region of the face. Study 2 included 53 depressed BD patients and 47 controls. Participants completed two tasks: labeling of "dynamic" facial expressions of the same five basic emotions; the Emotional Hexagon test (Young, Perret, Calder, Sprengelmeyer, & Ekman, 2002). There were no significant group differences on any measures of emotion perception/labeling, compared to controls. A significant group by intensity interaction was observed in both emotion labeling tasks (euthymia and depression), although this effect did not survive the addition of measures of executive function/psychomotor speed as covariates. Only 2.6-15.8% of euthymic patients and 7.8-13.7% of depressed patients scored below the 10th percentile of the controls for total emotion recognition accuracy. There was no evidence of specific deficits in facial emotion labeling in euthymic or depressed BD patients. Methodological variations-including mood state, sample size, and the cognitive demands of the tasks-may contribute significantly to the variability in findings between studies.

  5. Recognition of Facial Expressions in Individuals with Elevated Levels of Depressive Symptoms: An Eye-Movement Study

    Directory of Open Access Journals (Sweden)

    Lingdan Wu

    2012-01-01

    Full Text Available Previous studies consistently reported abnormal recognition of facial expressions in depression. However, it is still not clear whether this abnormality is due to an enhanced or impaired ability to recognize facial expressions, and what underlying cognitive systems are involved. The present study aimed to examine how individuals with elevated levels of depressive symptoms differ from controls on facial expression recognition and to assess attention and information processing using eye tracking. Forty participants (18 with elevated depressive symptoms) were instructed to label facial expressions depicting one of seven emotions. Results showed that the high-depression group, in comparison with the low-depression group, recognized facial expressions faster and with comparable accuracy. Furthermore, the high-depression group demonstrated a greater leftwards attention bias, which has been argued to be an indicator of hyperactivation of the right hemisphere during facial expression recognition.

  6. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    Science.gov (United States)

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
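
    Conceptually, the voxel-wise within-network connectivity (WNC) measure can be sketched as each voxel's mean resting-state correlation with every other face-network voxel; this simplified stand-in omits the preprocessing of the published method.

```python
import numpy as np

def within_network_connectivity(timeseries):
    """timeseries: (n_voxels, n_timepoints) array of face-network voxels.
    Returns one WNC value per voxel."""
    r = np.corrcoef(timeseries)          # voxel-by-voxel correlation matrix
    np.fill_diagonal(r, np.nan)          # exclude self-correlations
    return np.nanmean(r, axis=1)

ts = np.random.randn(50, 200)            # toy data: 50 voxels, 200 volumes
print(within_network_connectivity(ts).shape)  # (50,)
```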

  7. A New Method of 3D Facial Expression Animation

    Directory of Open Access Journals (Sweden)

    Shuo Sun

    2014-01-01

    Full Text Available Animating expressive facial animation is a very challenging topic within the graphics community. In this paper, we introduce a novel ERI (expression ratio image) driving framework based on SVR and MPEG-4 for automatic 3D facial expression animation. Using support vector regression (SVR), the framework can learn and forecast the regression relationship between the facial animation parameters (FAPs) and the parameters of the expression ratio image. Firstly, we build a 3D face animation system driven by FAPs. Secondly, using principal component analysis (PCA), we generate the parameter sets of the eigen-ERI space, which can rebuild a reasonable expression ratio image. Then we learn a model with the support vector regression mapping, so that facial animation parameters can be synthesized quickly from the eigen-ERI parameters. Finally, we implement our 3D face animation system driven by the resulting FAPs, and it works effectively.
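
    A condensed sketch of the learning stage, assuming flattened ERI images, PCA for the eigen-ERI parameterisation, and scikit-learn's SVR wrapped for multiple FAP outputs; all dimensions and data are toy stand-ins.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(2)
eri_images = rng.normal(size=(100, 400))   # flattened expression ratio images
faps = rng.normal(size=(100, 10))          # facial animation parameters

# PCA yields the eigen-ERI parameterisation; SVR learns ERI-space -> FAPs.
pca = PCA(n_components=12).fit(eri_images)
model = MultiOutputRegressor(SVR()).fit(pca.transform(eri_images), faps)

# New ERI -> eigen-ERI parameters -> FAPs that drive the 3-D face model.
new_faps = model.predict(pca.transform(eri_images[:1]))
print(new_faps.shape)  # (1, 10)
```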

  8. Automated and objective action coding of facial expressions in patients with acute facial palsy.

    Science.gov (United States)

    Haase, Daniel; Minnigerode, Laura; Volk, Gerd Fabian; Denzler, Joachim; Guntinas-Lichius, Orlando

    2015-05-01

    The aim of the present observational single-center study was to objectively assess facial function in patients with idiopathic facial palsy with a new computer-based system that automatically recognizes action units (AUs) defined by the Facial Action Coding System (FACS). Still photographs using posed facial expressions of 28 healthy subjects and of 299 patients with acute facial palsy were automatically analyzed for bilateral AU expression profiles. All palsies were graded with the House-Brackmann (HB) grading system and with the Stennert Index (SI). Changes of the AU profiles during follow-up were analyzed for 77 patients. The initial HB grading of all patients was 3.3 ± 1.2. SI at rest was 1.86 ± 1.3 and during motion 3.79 ± 4.3. Healthy subjects showed an AU asymmetry score of 21 ± 11 %, with no significant difference from patients (p = 0.128). At the initial examination of patients, the number of activated AUs was significantly lower on the paralyzed side than on the healthy side (p < 0.05). Objective facial grading is worthwhile: automated FACS delivers fast and objective global and regional data on facial motor function for use in clinical routine and clinical trials.
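
    The abstract does not give the formula behind the AU asymmetry score; one plausible reading, a normalised absolute left-right difference expressed as a percentage, is sketched below.

```python
import numpy as np

def au_asymmetry(left, right):
    """Percentage asymmetry between left- and right-hemiface AU activations
    (an assumed formula, for illustration only)."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    return 100.0 * np.abs(left - right).sum() / (left + right).sum()

# Toy AU activation profiles for the two hemifaces.
print(au_asymmetry([0.8, 0.4, 0.6], [0.7, 0.3, 0.5]))  # ~9.1
```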

  9. Electrophysiological correlates of the efficient detection of emotional facial expressions.

    Science.gov (United States)

    Sawada, Reiko; Sato, Wataru; Uono, Shota; Kochiyama, Takanori; Toichi, Motomi

    2014-04-29

    Behavioral studies have shown that emotional facial expressions are detected more rapidly and accurately than are neutral expressions. However, the neural mechanism underlying this efficient detection has remained unclear. To investigate this mechanism, we measured event-related potentials (ERPs) during a visual search task in which participants detected the normal emotional facial expressions of anger and happiness or their control stimuli, termed "anti-expressions," within crowds of neutral expressions. The anti-expressions, which were created using a morphing technique that produced changes equivalent to those in the normal emotional facial expressions compared with the neutral facial expressions, were most frequently recognized as emotionally neutral. Behaviorally, normal expressions were detected faster and more accurately and were rated as more emotionally arousing than were the anti-expressions. Regarding ERPs, the normal expressions elicited larger early posterior negativity (EPN) at 200-400ms compared with anti-expressions. Furthermore, larger EPN was related to faster and more accurate detection and higher emotional arousal. These data suggest that the efficient detection of emotional facial expressions is implemented via enhanced activation of the posterior visual cortices at 200-400ms based on their emotional significance. Copyright © 2014 Elsevier B.V. All rights reserved.

  10. Fear modulates visual awareness similarly for facial and bodily expressions

    Directory of Open Access Journals (Sweden)

    Bernard M.C. Stienen

    2011-11-01

    Full Text Available Background: Social interaction depends on a multitude of signals carrying information about the emotional state of others. Past research has focused on the perception of facial expressions, while perception of whole-body signals has only been studied recently. The relative importance of facial and bodily signals is still poorly understood. In order to better understand the relative contribution of affective signals from the face only or from the rest of the body, we used a binocular rivalry experiment. This method is well suited to contrasting two classes of stimuli, testing our processing sensitivity to either stimulus, and addressing the question of how emotion modulates this sensitivity. We report two behavioral experiments addressing these questions. Method: In the first experiment we directly contrasted fearful, angry and neutral bodies and faces. We always presented bodies in one eye and faces in the other simultaneously for 60 seconds and asked participants to report what they perceived. In the second experiment we focused specifically on the role of fearful expressions of faces and bodies. Results: Taken together, the two experiments show that there is no clear bias towards either the face or the body when the expression of the body and face are neutral or angry. However, the perceptual dominance in favor of either the face or the body is a function of the stimulus class expressing fear.

  11. Automatic change detection to facial expressions in adolescents

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Shi, Jiannong

    2016-01-01

    Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were recruited to complete an emotional oddball task featuring one happy and one fearful condition. Event-related potentials were measured via electroencephalography and electrooculography recording to detect visual mismatch negativity (vMMN) with regard to the automatic detection of changes … automatic processing of fearful faces than happy faces. The present study indicated that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and it sheds light on the neurodevelopment of automatic processes concerning social-emotional information.

  12. Rapid influence of emotional scenes on encoding of facial expressions: an ERP study.

    Science.gov (United States)

    Righart, Ruthger; de Gelder, Beatrice

    2008-09-01

    In daily life, we perceive a person's facial reaction as part of the natural environment surrounding it. Because most studies have investigated how facial expressions are recognized by using isolated faces, it is unclear what role the context plays. Although it has been observed that the N170 for facial expressions is modulated by the emotional context, it was not clear whether individuals use context information on this stage of processing to discriminate between facial expressions. The aim of the present study was to investigate how the early stages of face processing are affected by emotional scenes when explicit categorizations of fearful and happy facial expressions are made. Emotion effects were found for the N170, with larger amplitudes for faces in fearful scenes as compared to faces in happy and neutral scenes. Critically, N170 amplitudes were significantly increased for fearful faces in fearful scenes as compared to fearful faces in happy scenes and expressed in left-occipito-temporal scalp topography differences. Our results show that the information provided by the facial expression is combined with the scene context during the early stages of face processing.

  14. Discriminability effect on Garner interference: evidence from recognition of facial identity and expression

    Directory of Open Access Journals (Sweden)

    Yamin Wang

    2013-12-01

    Full Text Available Using Garner’s speeded classification task existing studies demonstrated an asymmetric interference in the recognition of facial identity and facial expression. It seems that expression is hard to interfere with identity recognition. However, discriminability of identity and expression, a potential confounding variable, had not been carefully examined in existing studies. In current work, we manipulated discriminability of identity and expression by matching facial shape (long or round in identity and matching mouth (opened or closed in facial expression. Garner interference was found either from identity to expression (Experiment 1 or from expression to identity (Experiment 2. Interference was also found in both directions (Experiment 3 or in neither direction (Experiment 4. The results support that Garner interference tends to occur under condition of low discriminability of relevant dimension regardless of facial property. Our findings indicate that Garner interference is not necessarily related to interdependent processing in recognition of facial identity and expression. The findings also suggest that discriminability as a mediating factor should be carefully controlled in future research.

  15. Web-based Visualisation of Head Pose and Facial Expressions Changes

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2016-01-01

    Despite significant recent advances in the field of head pose estimation and facial expression recognition, raising the cognitive level when analysing human activity presents serious challenges to current concepts. Motivated by the need to generate comprehensible visual representations from different sets of data, we introduce a system capable of monitoring human activity through head pose and facial expression changes, utilising an affordable 3D sensing technology (Microsoft Kinect sensor). An approach built on discriminative random regression forests was selected in order to rapidly and accurately estimate head pose changes in unconstrained environments. In order to complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. After that, a lightweight data...
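
    The head-pose stage can be sketched as a regression forest mapping depth patches to (yaw, pitch, roll); scikit-learn's RandomForestRegressor stands in here for the discriminative random regression forests the system uses, and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
patches = rng.normal(size=(500, 32 * 32))      # flattened depth patches (toy)
pose = rng.uniform(-60, 60, size=(500, 3))     # yaw, pitch, roll in degrees

forest = RandomForestRegressor(n_estimators=50).fit(patches, pose)
print(forest.predict(patches[:1]))             # estimated (yaw, pitch, roll)
```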

  17. Facial Expression Recognition of Various Internal States via Manifold Learning

    Institute of Scientific and Technical Information of China (English)

    Young-Suk Shin

    2009-01-01

    Emotions are becoming increasingly important in human-centered interaction architectures, and recognition of facial expressions, which are central to human-computer interactions, seems natural and desirable. However, facial expressions include mixed emotions that are continuous rather than discrete and vary from moment to moment. This paper presents a novel method of recognizing facial expressions of various internal states via manifold learning, in service of human-centered interaction studies. A critical review of widely used emotion models is given; then, facial expression features of various internal states are extracted via locally linear embedding (LLE). The recognition of facial expressions is performed along the pleasure-displeasure and arousal-sleep dimensions of a two-dimensional model of emotion. The recognition results for various internal-state expressions mapped to the embedding space via the LLE algorithm effectively represent the structural nature of the two-dimensional model of emotion. Our research thus establishes that the relationship between facial expressions of various internal states can be elaborated in the two-dimensional model of emotion via the locally linear embedding algorithm.
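
    A minimal sketch of the embedding step, with scikit-learn's LocallyLinearEmbedding mapping expression features into the two dimensions of the pleasure-arousal plane; the feature vectors and parameters are illustrative.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(4)
features = rng.normal(size=(150, 50))          # toy expression feature vectors

# Two output dimensions correspond to pleasure-displeasure and arousal-sleep.
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
embedded = lle.fit_transform(features)
print(embedded.shape)                          # (150, 2)
```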

  18. French-speaking Children's Freely Produced Labels for Facial Expressions

    Directory of Open Access Journals (Sweden)

    Reem Maassarani

    2014-06-01

    Full Text Available In this study, we investigated the labeling of facial expressions in French-speaking children. The participants were 137 French-speaking children, between the ages of 5 and 11 years, recruited from three elementary schools in Ottawa, Ontario, Canada. The facial expressions included expressions of happiness, sadness, fear, surprise, anger, and disgust. Participants were shown one facial expression at a time and asked to say what the stimulus person was feeling. Participants' responses were coded by two raters who made judgments concerning the specific emotion category in which the responses belonged. Five- and 6-year-olds were quite accurate in labeling facial expressions of happiness, anger, and sadness but far less accurate for facial expressions of fear, surprise, and disgust. An improvement in accuracy as a function of age was found for fear and surprise only. Labeling facial expressions of disgust proved to be very difficult for the children, even for the 11-year-olds. In order to examine the fit between the model proposed by Widen and Russell (2003) and our data, we looked at the number of participants who had the predicted response patterns. Overall, 88.52% of the participants did. Most of the participants used between 3 and 5 labels, with correspondence percentages varying between 80.00% and 100.00%. Our results suggest that the model proposed by Widen and Russell is not limited to English-speaking children, but also accounts for the sequence of emotion labeling in French-Canadian children.

  19. How facial expressions of emotion affect distance perception

    Directory of Open Access Journals (Sweden)

    Nam-Gyoon Kim

    2015-11-01

    Full Text Available Facial expressions of emotion are thought to convey expressers' behavioral intentions, thus priming observers' approach and avoidance tendencies appropriately. The present study examined whether detecting expressions of behavioral intent influences perceivers' estimation of the expresser's distance from them. Eighteen undergraduates (9 male and 9 female) participated in the study. Six facial expressions were chosen on the basis of degree of threat: anger and hate (threatening expressions), shame and surprise (neutral expressions), pleasure and joy (safe expressions). Each facial expression was presented on a tablet PC held by an assistant covered by a black drape who stood 1 m, 2 m, or 3 m away from participants. Participants performed a visual matching task to report the perceived distance. Results showed that facial expression influenced distance estimation, with faces exhibiting threatening or safe expressions judged closer than those showing neutral expressions. Females' judgments were more likely to be influenced, but these influences largely disappeared beyond the 2 m distance. These results suggest that facial expressions of emotion (particularly threatening or safe emotions) influence others' (especially females') distance estimations, but only within close proximity.

  20. Children's Representations of Facial Expression and Identity: Identity-Contingent Expression Aftereffects

    Science.gov (United States)

    Vida, Mark D.; Mondloch, Catherine J.

    2009-01-01

    This investigation used adaptation aftereffects to examine developmental changes in the perception of facial expressions. Previous studies have shown that adults' perceptions of ambiguous facial expressions are biased following adaptation to intense expressions. These expression aftereffects are strong when the adapting and probe expressions share…

  1. Facial Emotion and Identity Processing Development in 5- to 15-Year-Old Children

    OpenAIRE

    Johnston, Patrick J.; Kaufman, Jordy; Bajic, Julie; Sercombe, Alicia; Michie, Patricia T.; Karayanidis, Frini

    2011-01-01

    Most developmental studies of emotional face processing to date have focused on infants and very young children. Additionally, studies that examine emotional face processing in older children do not distinguish development in emotion and identity face processing from more generic age-related cognitive improvement. In this study, we developed a paradigm that measures processing of facial expression in comparison to facial identity and complex visual stimuli. The three matching tasks were devel...

  3. Importance of the brow in facial expressiveness during human communication.

    Science.gov (United States)

    Neely, John Gail; Lisker, Paul; Drapekin, Jesse

    2014-03-01

    The objective of this study was to evaluate laterality and upper/lower face dominance of expressiveness during prescribed speech using a unique validated image-subtraction system capable of sensitive and reliable measurement of facial surface deformation. Observations and experiments on central control of facial expressions during speech and social utterances in humans and animals suggest that the right side of the mouth moves more than the left during nonemotional speech. However, proficient lip readers seem to attend to the whole face to interpret meaning from expressed facial cues, also implicating a horizontal (upper face-lower face) axis. Prospective experimental design. Experimental maneuver: recited speech. Outcome measure: image-subtraction strength-duration curve amplitude. Thirty normal human adults were evaluated during memorized nonemotional recitation of two short sentences. Facial movements were assessed using a video-image subtraction system capable of simultaneously measuring upper and lower specific areas of each hemiface. The results demonstrate that both axes influence facial expressiveness in human communication; however, the horizontal axis (upper versus lower face) appears dominant, especially during what would appear to be spontaneous breakthrough unplanned expressiveness. These data are congruent with the concept that the left cerebral hemisphere has control over nonemotionally stimulated speech; however, the multisynaptic brainstem extrapyramidal pathways may override hemiface laterality and preferentially take control of the upper face. Additionally, these data demonstrate the importance of the often-ignored brow in facial expressiveness. Experimental study. EBM levels not applicable.

  5. Dogs Evaluate Threatening Facial Expressions by Their Biological Validity--Evidence from Gazing Patterns.

    Science.gov (United States)

    Somppi, Sanni; Törnqvist, Heini; Kujala, Miiamaaria V; Hänninen, Laura; Krause, Christina M; Vainio, Outi

    2016-01-01

    Appropriate response to companions' emotional signals is important for all social creatures. The emotional expressions of humans and non-human animals have analogies in their form and function, suggesting shared evolutionary roots, but very little is known about how animals other than primates view and process facial expressions. In primates, threat-related facial expressions evoke exceptional viewing patterns compared with neutral or positive stimuli. Here, we explore if domestic dogs (Canis familiaris) have such an attentional bias toward threatening social stimuli and whether observed emotional expressions affect dogs' gaze fixation distribution among the facial features (eyes, midface and mouth). We recorded the voluntary eye gaze of 31 domestic dogs during viewing of facial photographs of humans and dogs with three emotional expressions (threatening, pleasant and neutral). We found that dogs' gaze fixations spread systematically among facial features. The distribution of fixations was altered by the seen expression, but eyes were the most probable targets of the first fixations and gathered longer looking durations than mouth regardless of the viewed expression. The examination of the inner facial features as a whole revealed more pronounced scanning differences among expressions. This suggests that dogs do not base their perception of facial expressions on the viewing of single structures, but the interpretation of the composition formed by eyes, midface and mouth. Dogs evaluated social threat rapidly and this evaluation led to attentional bias, which was dependent on the depicted species: threatening conspecifics' faces evoked heightened attention but threatening human faces instead an avoidance response. We propose that threatening signals carrying differential biological validity are processed via distinctive neurocognitive pathways. Both of these mechanisms may have an adaptive significance for domestic dogs. The findings provide a novel perspective on

  6. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia.

    Science.gov (United States)

    Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus, process this facial dimension independently from features (which are impaired in CP), and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typical development individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expression (task 1). Interestingly and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.

  8. Recognizing dynamic facial expressions of emotion: Specificity and intensity effects in event-related brain potentials.

    Science.gov (United States)

    Recio, Guillermo; Schacht, Annekathrin; Sommer, Werner

    2014-02-01

    Emotional facial expressions usually arise dynamically from a neutral expression. Yet, most previous research has focused on static images. The present study investigated basic aspects of processing dynamic facial expressions. In two experiments, we presented short videos of facial expressions of six basic emotions and of non-emotional facial movements, emerging at variable and fixed rise times and attaining different intensity levels. In event-related brain potentials (ERPs), effects of emotion, but also of non-emotional movements, appeared as an early posterior negativity (EPN) between 200 and 350 ms, suggesting an overall facilitation of early visual encoding for all facial movements. These EPN effects were emotion-unspecific. In contrast, relative to happy and neutral expressions, negative emotional expressions elicited larger late positive ERP components (LPCs), indicating more elaborate processing. Both EPN and LPC amplitudes increased with expression intensity. Effects of emotion and intensity were additive, indicating that intensity (understood as the degree of motion) increases the impact of emotional expressions but not their quality. These processes can be driven by all basic emotions, and there is little emotion specificity even when statistical power is considerable (N = 102 in Experiment 2).

  9. Altered prosaposin expression in the rat facial nerve nucleus following facial nerve transection and repair

    Institute of Scientific and Technical Information of China (English)

    Dong Wang; Wenlong Luo; Cuiying Zhou; Jingjing Li

    2009-01-01

    BACKGROUND: Studies have demonstrated that damaged facial nerves synthesize prosaposin to promote repair of facial neurons. OBJECTIVE: To observe time-course changes of prosaposin expression in the facial nerve nucleus of Sprague Dawley rats following facial nerve transection and repair. DESIGN, TIME AND SETTING: A randomized control neuropathological animal experiment was performed at Chongqing Medical University between March 2007 and September 2008. MATERIALS: A total of 48 adult, male, Sprague Dawley rats were selected and randomly divided into transection and transection + end-to-end anastomosis groups (n = 24 each). Rabbit anti-rat prosaposin antibody, an instant SABC immunohistochemical kit, and antibody dilution solution were purchased from Wuhan Uscn Science Co., Ltd., China. METHODS: In the transection group, the nerve trunk of the distal retroauricular branch of the left facial nerve was ligated, and a 5-mm nerve trunk at the distal end of the ligation site was removed. In the transection + end-to-end anastomosis group, epineurial anastomosis was performed immediately following transection of the left facial nerve. The right facial nerves in the two groups served as the normal control. MAIN OUTCOME MEASURES: The number of prosaposin-positive neurons, as well as the intensity of immunostaining in the facial nerve nucleus, following transection and end-to-end anastomosis was determined by immunohistochemistry at 1, 3, 7, 14, 21, and 35 days after injury. RESULTS: Transection group: transection of the facial nerve resulted in an increased number of prosaposin-positive neurons and increased immunoreactivity intensity in the facial nucleus on day 1. These values further increased by day 3, with expression greater than on the control side, and peaked at 7 days post-surgery. Transection + end-to-end anastomosis group: the number of prosaposin-positive neurons and the immunoreactivity intensity were reduced in the facial nerve nucleus following

  10. What do facial expressions of emotion express in young children? The relationship between facial display and EMG measures

    Directory of Open Access Journals (Sweden)

    Michela Balconi

    2014-04-01

    The present paper explored the relationship between emotional facial responses and electromyographic modulation in children when they observe facial expressions of emotion. Facial responsiveness (evaluated by arousal and valence ratings) and psychophysiological correlates (facial electromyography, EMG) were analyzed while children looked at six facial expressions of emotion (happiness, anger, fear, sadness, surprise, and disgust). For the EMG measure, corrugator and zygomatic muscle activity was monitored in response to the different emotion types. ANOVAs showed differences for both EMG and facial responses across subjects as a function of the different emotions. Specifically, some emotions were well expressed by all subjects (such as happiness, anger, and fear) in terms of high arousal, whereas others elicited lower arousal (such as sadness). Zygomatic activity increased mainly for happiness, on the one hand, and corrugator activity increased mainly for anger, fear, and surprise, on the other. More generally, EMG and facial behavior were highly correlated with each other, showing a "mirror" effect with respect to the observed faces.

  11. Rapid Facial Reactions to Emotional Facial Expressions in Typically Developing Children and Children with Autism Spectrum Disorder

    Science.gov (United States)

    Beall, Paula M.; Moody, Eric J.; McIntosh, Daniel N.; Hepburn, Susan L.; Reed, Catherine L.

    2008-01-01

    Typical adults mimic facial expressions within 1,000 ms, but adults with autism spectrum disorder (ASD) do not. These rapid facial reactions (RFRs) are associated with the development of social-emotional abilities. Such interpersonal matching may be caused by motor mirroring or emotional responses. Using facial electromyography (EMG), this study…

  12. GENDER DIFFERENCES IN THE RECOGNITION OF FACIAL EXPRESSIONS OF EMOTION

    Directory of Open Access Journals (Sweden)

    CARLOS FELIPE PARDO-VÉLEZ

    2003-07-01

    Gender differences in the recognition of facial expressions of anger, happiness, and sadness were researched in students 18-25 years of age. A reaction-time procedure was used, and the percentage of correct answers in recognition was also measured. Although the working hypothesis predicted gender differences in facial expression recognition, the results suggest that these differences are not significant at the .05 level. Statistical analysis shows a greater facility (at a non-significant level) for women to recognize happiness expressions, and for men to recognize anger expressions. The implications of these data are discussed, along with possible extensions of this investigation in terms of sample size and college major of the participants.

  13. Automatic Facial Expression Recognition Based on Hybrid Approach

    Directory of Open Access Journals (Sweden)

    Ali K. K. Bermani

    2012-12-01

    The topic of automatic recognition of facial expressions attracted many researchers in the late last century and has drawn increasing interest in the past few years. Several techniques have emerged to improve recognition efficiency by addressing problems in face detection and feature extraction for expression recognition. This paper proposes an automatic system for facial expression recognition whose feature extraction phase uses a hybrid approach, combining holistic and analytic methods, to extract 307 facial expression features (19 geometric features and 288 appearance features). Expression recognition is performed using a radial basis function (RBF) artificial neural network to recognize the six basic emotions (anger, fear, disgust, happiness, surprise, sadness) in addition to the neutral expression. The system achieved a recognition rate of 97.08% on a person-dependent database and 93.98% on a person-independent one.
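
    As an illustration of the classification stage described above, the following minimal sketch implements a Gaussian RBF network (k-means-placed centers, least-squares linear readout) over random stand-ins for the 307 hybrid features; it shows the general technique, not the authors' implementation.

        import numpy as np
        from sklearn.cluster import KMeans

        class RBFNet:
            """Gaussian RBF layer with a linear readout trained by least squares."""

            def __init__(self, n_centers=50, gamma=0.01):
                self.n_centers, self.gamma = n_centers, gamma

            def _phi(self, X):
                # Squared distances to each center -> Gaussian activations.
                d2 = ((X[:, None, :] - self.centers_[None, :, :]) ** 2).sum(-1)
                return np.exp(-self.gamma * d2)

            def fit(self, X, y):
                self.centers_ = KMeans(n_clusters=self.n_centers, n_init=10,
                                       random_state=0).fit(X).cluster_centers_
                onehot = np.eye(y.max() + 1)[y]   # 7 targets: 6 emotions + neutral
                self.W_, *_ = np.linalg.lstsq(self._phi(X), onehot, rcond=None)
                return self

            def predict(self, X):
                return self._phi(X).dot(self.W_).argmax(axis=1)

        # Random stand-ins for the 307-dimensional hybrid feature vectors.
        rng = np.random.default_rng(0)
        y = np.repeat(np.arange(7), 50)
        X = rng.normal(size=(350, 307)) + y[:, None] * 0.1
        model = RBFNet(gamma=1.0 / 307).fit(X, y)
        print("training accuracy:", (model.predict(X) == y).mean())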

  14. Continuous pain intensity estimation from facial expressions

    NARCIS (Netherlands)

    Kaltwang, Sebastian; Rudovic, Ognjen; Pantic, Maja

    2012-01-01

    Automatic pain recognition is an evolving research area with promising applications in health care. In this paper, we propose the first fully automatic approach to continuous pain intensity estimation from facial images. We first learn a set of independent regression functions for continuous pain in
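
    A toy sketch of the stated idea, learning an independent regression function per facial feature set and fusing the outputs into one continuous pain estimate, follows. The ridge regressors, the synthetic feature sets, and the averaging fusion are illustrative assumptions, not the method of the paper.

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(1)
        n_frames = 500
        # Stand-ins for two per-frame facial feature sets (e.g., shape, appearance).
        feature_sets = {"shape": rng.normal(size=(n_frames, 20)),
                        "appearance": rng.normal(size=(n_frames, 60))}
        # Synthetic continuous pain intensity on a 0-10 scale.
        pain = np.clip(3 + 2 * feature_sets["shape"][:, 0]
                       + rng.normal(scale=0.3, size=n_frames), 0, 10)

        # One independent regression function per feature set...
        regressors = {name: Ridge(alpha=1.0).fit(X, pain)
                      for name, X in feature_sets.items()}
        # ...fused here by simply averaging the continuous predictions.
        fused = np.mean([reg.predict(feature_sets[name])
                         for name, reg in regressors.items()], axis=0)
        print("mean absolute error:", np.abs(fused - pain).mean())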

  16. Structure-preserving sparse decomposition for facial expression analysis.

    Science.gov (United States)

    Taheri, Sima; Qiang Qiu; Chellappa, Rama

    2014-08-01

    Although facial expressions can be decomposed in terms of action units (AUs), as suggested by the Facial Action Coding System, there have been only a few attempts to recognize expressions using AUs and their composition rules. In this paper, we propose a dictionary-based approach for facial expression analysis by decomposing expressions in terms of AUs. First, we construct an AU dictionary using domain experts' knowledge of AUs. To incorporate the high-level knowledge regarding expression decomposition and AUs, we then perform structure-preserving sparse coding by imposing two layers of grouping over AU-dictionary atoms as well as over the test image matrix columns. We use the computed sparse code matrix for each expressive face to perform expression decomposition and recognition. Since domain experts' knowledge may not always be available for constructing an AU dictionary, we also propose a structure-preserving dictionary learning algorithm, which we use to learn a structured dictionary as well as divide expressive faces into several semantic regions. Experimental results on publicly available expression data sets demonstrate the effectiveness of the proposed approach for facial expression analysis.
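
    A toy version of the decomposition step can convey the idea: plain L1 sparse coding of a face vector over a hypothetical AU dictionary. The paper's two layers of structure-preserving grouping are omitted here.

        import numpy as np
        from sklearn.decomposition import SparseCoder

        # A hypothetical dictionary with one (random) atom per action unit.
        rng = np.random.default_rng(2)
        au_atoms = rng.normal(size=(12, 256))            # 12 AUs, 256-dim patches
        au_atoms /= np.linalg.norm(au_atoms, axis=1, keepdims=True)

        # Synthesize an "expressive face" as a mixture of two AUs (atoms 3 and 6).
        face = 0.8 * au_atoms[3] + 0.5 * au_atoms[6]

        coder = SparseCoder(dictionary=au_atoms,
                            transform_algorithm="lasso_lars", transform_alpha=0.05)
        code = coder.transform(face[None, :])[0]         # sparse AU coefficients
        print("active AUs:", np.nonzero(code)[0])        # expect atoms 3 and 6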

  17. Paedomorphic facial expressions give dogs a selective advantage.

    Directory of Open Access Journals (Sweden)

    Bridget M Waller

    How wolves were first domesticated is unknown. One hypothesis suggests that wolves underwent a process of self-domestication by tolerating human presence and taking advantage of scavenging opportunities. The puppy-like physical and behavioural traits seen in dogs are thought to have evolved later, as a byproduct of selection against aggression. Using speed of selection from rehoming shelters as a proxy for artificial selection, we tested whether paedomorphic features give dogs a selective advantage in their current environment. Dogs that exhibited facial expressions enhancing their neonatal appearance were preferentially selected by humans. Thus, early domestication of wolves may have occurred not only as wolf populations became tamer, but also as they exploited human preferences for paedomorphic characteristics. These findings therefore add to our understanding of early dog domestication as a complex co-evolutionary process.

  18. Spontaneous facial expressions of emotion of congenitally and noncongenitally blind individuals.

    Science.gov (United States)

    Matsumoto, David; Willingham, Bob

    2009-01-01

    The study of the spontaneous expressions of blind individuals offers a unique opportunity to understand basic processes concerning the emergence and source of facial expressions of emotion. In this study, the authors compared the expressions of congenitally and noncongenitally blind athletes in the 2004 Paralympic Games with each other and with those produced by sighted athletes in the 2004 Olympic Games. The authors also examined how expressions change from one context to another. There were no differences between congenitally blind, noncongenitally blind, and sighted athletes, either at the level of individual facial actions or in facial emotion configurations. Blind athletes did produce more overall facial activity, but these differences were confined to head and eye movements. The blind athletes' expressions differentiated whether they had won or lost a medal match at three different points in time, and there were no cultural differences in expression. These findings provide compelling evidence that the production of spontaneous facial expressions of emotion is not dependent on observational learning, but they simultaneously demonstrate a learned component to the social management of expressions, even among blind individuals.

  19. Recognition, Expression, and Understanding Facial Expressions of Emotion in Adolescents with Nonverbal and General Learning Disabilities

    Science.gov (United States)

    Bloom, Elana; Heath, Nancy

    2010-01-01

    Children with nonverbal learning disabilities (NVLD) have been found to be worse at recognizing facial expressions than children with verbal learning disabilities (LD) and without LD. However, little research has been done with adolescents. In addition, expressing and understanding facial expressions is yet to be studied among adolescents with LD…

  20. A Developmental Examination of Amygdala Response to Facial Expressions

    Science.gov (United States)

    Guyer, Amanda E.; Monk, Christopher S.; McClure-Tone, Erin B.; Nelson, Eric E.; Roberson-Nay, Roxann; Adler, Abby D.; Fromm, Stephen J.; Leibenluft, Ellen; Pine, Daniel S.; Ernst, Monique

    2010-01-01

    Several lines of evidence implicate the amygdala in face–emotion processing, particularly for fearful facial expressions. Related findings suggest that face–emotion processing engages the amygdala within an interconnected circuitry that can be studied using a functional-connectivity approach. Past work also underscores important functional changes in the amygdala during development. Taken together, prior research on amygdala function and development reveals a need for more work examining developmental changes in the amygdala’s response to fearful faces and in amygdala functional connectivity during face processing. The present study used event-related functional magnetic resonance imaging to compare 31 adolescents (9–17 years old) and 30 adults (21–40 years old) on activation to fearful faces in the amygdala and other regions implicated in face processing. Moreover, these data were used to compare patterns of amygdala functional connectivity in adolescents and adults. During passive viewing, adolescents demonstrated greater amygdala and fusiform activation to fearful faces than did adults. Functional connectivity analysis revealed stronger connectivity between the amygdala and the hippocampus in adults than in adolescents. Within each group, variability in age did not correlate with amygdala response, and sex-related developmental differences in amygdala response were not found. Eye movement data collected outside of the magnetic resonance imaging scanner using the same task suggested that developmental differences in amygdala activation were not attributable to differences in eye-gaze patterns. Amygdala hyperactivation in response to fearful faces may explain increased vulnerability to affective disorders in adolescence; stronger amygdala–hippocampus connectivity in adults than adolescents may reflect maturation in learning or habituation to facial expressions. PMID:18345988

  1. Facial Expression Recognition Teaching to Preschoolers with Autism

    DEFF Research Database (Denmark)

    Christinaki, Eirini; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2013-01-01

    The recognition of facial expressions is important for the perception of emotions. Understanding emotions is essential in human communication and social interaction. Children with autism have been reported to exhibit deficits in the recognition of affective expressions. Their difficulties...... for teaching emotion recognition from facial expressions should occur as early as possible in order to be successful and to have a positive effect. It is claimed that Serious Games can be very effective in the areas of therapy and education for children with autism. However, those computer interventions...... an educational computer game, which provides physical interaction by employing natural user interface (NUI), we aim to support early intervention and to foster facial expression learning....

  2. Efficient Facial Expression and Face Recognition using Ranking Method

    Directory of Open Access Journals (Sweden)

    Murali Krishna kanala

    2015-06-01

    Expression detection is useful as a non-invasive method of lie detection and behaviour prediction. However, facial expressions may be difficult to detect by the untrained eye. In this paper we implement facial expression recognition techniques using a ranking method. The human face plays an important role in our social interaction, conveying people's identity. With the human face as a key to security, biometric face recognition technology has received significant attention in the past several years. Experiments are performed using a standard database. The three principal emotions to be recognized are surprise, sadness, and happiness, along with neutral.

  3. Strategies for Perceiving Facial Expressions in Adults with Autism Spectrum Disorder

    Science.gov (United States)

    Walsh, Jennifer A.; Vida, Mark D.; Rutherford, M. D.

    2014-01-01

    Rutherford and McIntosh (J Autism Dev Disord 37:187-196, 2007) demonstrated that individuals with autism spectrum disorder (ASD) are more tolerant than controls of exaggerated schematic facial expressions, suggesting that they may use an alternative strategy when processing emotional expressions. The current study was designed to test this finding…

  4. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    Science.gov (United States)

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…

  5. The Role of Facial Expressions in Attention-Orienting in Adults and Infants

    Science.gov (United States)

    Rigato, Silvia; Menon, Enrica; Di Gangi, Valentina; George, Nathalie; Farroni, Teresa

    2013-01-01

    Faces convey many signals (i.e., gaze or expressions) essential for interpersonal interaction. We have previously shown that facial expressions of emotion and gaze direction are processed and integrated in specific combinations early in life. These findings open a number of developmental questions and specifically in this paper we address whether…

  7. Training Facial Expression Production in Children on the Autism Spectrum

    Science.gov (United States)

    Gordon, Iris; Pierce, Matthew D.; Bartlett, Marian S.; Tanaka, James W.

    2014-01-01

    Children with autism spectrum disorder (ASD) show deficits in their ability to produce facial expressions. In this study, a group of children with ASD and IQ-matched, typically developing (TD) children were trained to produce "happy" and "angry" expressions with the FaceMaze computer game. FaceMaze uses an automated computer…

  8. [The neural networks of facial expression].

    Science.gov (United States)

    Gordillo, F; Mestas, L; Castillo, G; Perez, M A; Lopez, R M; Arana, J M

    2017-02-01

    Introduction. Face perception involves a wide network of connections between cortical and subcortical regions that exchange and synchronize information through white matter tracts. This precise communication system can be affected both in the structures themselves and in the pathways that connect them. Aims. To delimit the neural substrate underlying the perception of facial expression and to analyze the different factors that modulate the integrity of this neural network, in order to propose improvements to rehabilitation programs. Development. When the complex network of connections involved in the perception of facial expression is altered by trauma, neurodegenerative disease, developmental disorders, or even by social isolation or negative contexts, the capacity to interact adaptively with the environment also deteriorates. Conclusions. Restoring the integrity of the neural network responsible for processing facial expression requires taking into account different variables that, to a greater or lesser degree, have proved capable of modifying the structure or functionality of neural networks, such as aerobic training, transcranial magnetic stimulation, transcranial electrical stimulation, and learning, although these variables are conditioned by age, the type and course of the disorder, and the context that generated it, which raises the need for rehabilitation protocols that are tailored and oriented toward delimiting the neural substrate of the deficit.

  9. Facial Expression Recognition Using 3D Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Young-Hyen Byeon

    2014-12-01

    This paper is concerned with video-based facial expression recognition, frequently used in conjunction with HRI (Human-Robot Interaction), which enables natural interaction between human and robot. For this purpose, we design a 3D-CNN (3D Convolutional Neural Network) augmented with dimensionality reduction methods such as PCA (Principal Component Analysis) and TMPCA (Tensor-based Multilinear Principal Component Analysis) to recognize successive frames of facial expression images obtained through a video camera. The 3D-CNN can achieve some degree of shift and deformation invariance using local receptive fields and spatial subsampling, through dimensionality reduction of the redundant CNN output. Experimental results on a video-based facial expression database reveal that the presented method performs well in comparison to conventional methods such as PCA and TMPCA.
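
    As a rough sketch of this architecture class, the snippet below defines a tiny 3D-CNN over grayscale clips. The layer sizes are invented, and the global average pooling merely stands in for the paper's PCA/TMPCA reduction step.

        import torch
        import torch.nn as nn

        class Tiny3DCNN(nn.Module):
            """A small 3D-CNN over clips; input is (batch, channel, frames, H, W)."""

            def __init__(self, n_classes=6):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(1, 8, kernel_size=3, padding=1),  # spatio-temporal filters
                    nn.ReLU(),
                    nn.MaxPool3d(2),                            # spatial/temporal subsampling
                    nn.Conv3d(8, 16, kernel_size=3, padding=1),
                    nn.ReLU(),
                    # Global pooling stands in for the paper's PCA/TMPCA reduction.
                    nn.AdaptiveAvgPool3d(1),
                )
                self.classifier = nn.Linear(16, n_classes)

            def forward(self, clips):
                return self.classifier(self.features(clips).flatten(1))

        clips = torch.randn(4, 1, 16, 64, 64)  # 4 clips of 16 grayscale 64x64 frames
        print(Tiny3DCNN()(clips).shape)        # -> torch.Size([4, 6])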

  10. Development of a System for Automatic Facial Expression Analysis

    Science.gov (United States)

    Diago, Luis A.; Kitaoka, Tetsuko; Hagiwara, Ichiro

    Automatic recognition of facial expressions can be an important component of natural human-machine interactions. While many samples are desirable for accurately estimating the feelings of a person (e.g., liking) about a machine interface, in real-world situations only a small number of samples can be obtained, because of the high cost of collecting emotions from the observed person. This paper proposes a system that solves this problem while conforming to individual differences. A new method is developed for facial expression classification based on the combination of Holographic Neural Networks (HNN) and Type-2 Fuzzy Logic. For the recognition of emotions induced by facial expressions, compared with former HNN and Support Vector Machine (SVM) classifiers, the proposed method achieved the best generalization performance, using less learning time than the SVM classifiers.

  11. Neural Responses to Rapid Facial Expressions of Fear and Surprise

    Directory of Open Access Journals (Sweden)

    Ke Zhao

    2017-05-01

    Facial expression recognition is mediated by a distributed neural system in humans that involves multiple, bilateral regions. There are six basic facial expressions that may be recognized in humans (fear, sadness, surprise, happiness, anger, and disgust); however, fearful faces and surprised faces are easily confused in rapid presentation. The functional organization of the facial expression recognition system embodies a distinction between these two emotions, which is investigated in the present study. A core system that includes the right parahippocampal gyrus (BA 30), fusiform gyrus, and amygdala mediates the visual recognition of fear and surprise. We found that fearful faces evoked greater activity in the left precuneus, middle temporal gyrus (MTG), middle frontal gyrus, and right lingual gyrus, whereas surprised faces were associated with greater activity in the right postcentral gyrus and left posterior insula. These findings indicate the importance of common and separate mechanisms of the neural activation that underlies the recognition of fearful and surprised faces.

  12. Impaired recognition of prosody and subtle emotional facial expressions in Parkinson's disease.

    Science.gov (United States)

    Buxton, Sharon L; MacDonald, Lorraine; Tippett, Lynette J

    2013-04-01

    Accurately recognizing the emotional states of others is crucial for successful social interactions and social relationships. Individuals with Parkinson's disease (PD) have shown deficits in emotional recognition abilities although findings have been inconsistent. This study examined recognition of emotions from prosody and from facial emotional expressions with three levels of subtlety, in 30 individuals with PD (without dementia) and 30 control participants. The PD group were impaired on the prosody task, with no differential impairments in specific emotions. PD participants were also impaired at recognizing facial expressions of emotion, with a significant association between how well they could recognize emotions in the two modalities, even after controlling for disease severity. When recognizing facial expressions, the PD group had no difficulty identifying prototypical Ekman and Friesen (1976) emotional faces, but were poorer than controls at recognizing the moderate and difficult levels of subtle expressions. They were differentially impaired at recognizing moderately subtle expressions of disgust and sad expressions at the difficult level. Notably, however, they were impaired at recognizing happy expressions at both levels of subtlety. Furthermore how well PD participants identified happy expressions conveyed by either face or voice was strongly related to accuracy in the other modality. This suggests dysfunction of overlapping components of the circuitry processing happy expressions in PD. This study demonstrates the usefulness of including subtle expressions of emotion, likely to be encountered in everyday life, when assessing recognition of facial expressions.

  13. Emotional Verbalization and Identification of Facial Expressions in Teenagers’ Communication

    Directory of Open Access Journals (Sweden)

    I. S. Ivanova

    2013-01-01

    The paper emphasizes the need to study the subjective effectiveness criteria of interpersonal communication and the importance of effective communication for personality development in adolescence. The problem of the underdeveloped representation of positive emotions in the communication process is discussed. Both the identification and the verbalization of emotions are regarded by the author as basic communication skills. Experimental data on longitudinal and age-level comparisons are described, and gender differences in the identification and verbalization of emotions are considered. The outcomes of the experimental study demonstrate that the accuracy with which teenage boys and girls identify facial emotional expressions changes at different rates. The prospects of defining age norms for the identification and verbalization of emotions are analyzed.

  14. Visual decodification of some facial expressions through microimitation.

    Science.gov (United States)

    Ruggieri, V; Fiorenza, M; Sabatini, N

    1986-04-01

    We examined the level of muscular tension of the mentalis muscle of 36 graphic design students at rest and during the presentation of three slides reproducing facial expressions. Analysis showed an increase in the myographic level of the mentalis muscle from the third second of measurement onwards after the presentation of the slide in which contraction of the chin was involved. We interpret this result by hypothesizing that the decodification of some facial expressions is realized through a microreproduction of the stimulus by the decoding subject.

  15. Recognition of facial expressions of emotion in panic disorder.

    Science.gov (United States)

    Cai, Liqiang; Chen, Wanzhen; Shen, Yuedi; Wang, Xinling; Wei, Lili; Zhang, Yingchun; Wang, Wei; Chen, Wei

    2012-01-01

    Whether patients with panic disorder behave differently when recognizing facial expressions of emotion remains unsettled. We tested 21 outpatients with panic disorder and 34 healthy subjects with a photo set from the Matsumoto and Ekman Japanese and Caucasian facial expressions of emotion, which includes anger, contempt, disgust, fear, happiness, sadness, and surprise. Compared to the healthy subjects, patients showed lower accuracy when recognizing disgust and fear, but higher accuracy when recognizing surprise. These results suggest that the altered specificity to these emotions is linked to self-awareness mechanisms that prevent further emotional reactions in panic disorder patients.

  16. Recognition of Facial Expressions of Different Emotional Intensities in Patients with Frontotemporal Lobar Degeneration

    Directory of Open Access Journals (Sweden)

    Roy P. C. Kessels

    2007-01-01

    Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD). FTLD patients also show impairments in emotion processing; specifically, the perception of negative emotional facial expressions is affected. Generally, however, negative emotional expressions are regarded as more difficult to recognize than positive ones, which may thus have been a confounding factor in previous studies. Also, ceiling effects are often present in emotion recognition tasks using full-blown emotional facial expressions. In the present study with FTLD patients, we examined the perception of sadness, anger, fear, happiness, surprise, and disgust at different emotional intensities on morphed facial expressions to take task difficulty into account. Results showed that our FTLD patients were specifically impaired in the recognition of the emotion anger. The patients also performed worse than the controls on recognition of surprise, but performed at control levels on disgust, happiness, sadness, and fear. These findings corroborate and extend previous results showing deficits in emotion perception in FTLD.

  17. Neural processing of emotional facial and semantic expressions in euthymic bipolar disorder (BD) and its association with theory of mind (ToM).

    Directory of Open Access Journals (Sweden)

    Agustin Ibanez

    BACKGROUND: Adults with bipolar disorder (BD) have cognitive impairments that affect face processing and social cognition. However, it remains unknown whether these deficits in euthymic BD are accompanied by impaired brain markers of emotional processing. METHODOLOGY/PRINCIPAL FINDINGS: We recruited twenty-six participants: 13 control subjects and an equal number of euthymic BD participants. We used an event-related potential (ERP) assessment of a dual valence task (DVT), in which faces (angry and happy), words (pleasant and unpleasant), and simultaneous face-word combinations are presented to test the effects of stimulus type (face vs. word) and valence (positive vs. negative). All participants received clinical, neuropsychological, and social cognition evaluations. ERP analysis revealed that both groups showed N170 modulation by stimulus type (face > word). BD patients exhibited reduced and enhanced N170 responses to facial and semantic valence, respectively. The estimated neural source of the N170 was a posterior section of the fusiform gyrus (FG), including the fusiform face area (FFA). Neural generators of the N170 for faces (FG and FFA) were reduced in BD. In these patients, N170 modulation was associated with social cognition (theory of mind). CONCLUSIONS/SIGNIFICANCE: This is the first report of euthymic BD patients exhibiting abnormal N170 emotional discrimination associated with theory of mind impairments.

  18. Intelligent Avatar on E-Learning Using Facial Expression and Haptic

    Directory of Open Access Journals (Sweden)

    Ahmad Hoirul Basori

    2011-04-01

    The process of introducing emotion can be improved through a three-dimensional (3D) tutoring system. The problem still unsolved is how to provide a realistic tutor (avatar) in a virtual environment. This paper proposes an approach to teach children to understand emotion sensations through facial expression and the sense of touch (haptics). The algorithm calculates a constant factor (f) based on the maximum RGB value and the maximum magnitude of force; each magnitude-force range is then associated with a particular colour. The integration process starts by rendering the facial expression and then adjusts the vibration power to the emotion value. In the experiment, around 71% of students agreed with the classification of magnitude force into emotion representations. Respondents commented that a high magnitude force creates a sensation similar to what they feel when angry, while a low magnitude force is more relaxing. Respondents also said that the combination of haptics and facial expression is very interactive and realistic.
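
    Read literally, the described mapping amounts to a linear scaling between the device's force range and a colour channel. A minimal sketch, in which the force ceiling and the red-channel encoding are assumptions:

        MAX_RGB = 255            # maximum value of an RGB channel
        MAX_FORCE = 3.3          # assumed force ceiling of the haptic device (newtons)
        F = MAX_RGB / MAX_FORCE  # the constant factor relating the two scales

        def force_to_colour(force):
            """Map a magnitude force to a red shade: stronger force, 'angrier' colour."""
            level = min(int(round(F * force)), MAX_RGB)
            return (level, 0, 0)

        # Low forces read as calmer colours, high forces as anger-like ones.
        for force in (0.5, 1.5, 3.0):
            print(force, force_to_colour(force))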

  19. Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set

    Science.gov (United States)

    Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.

    2000-06-01

    Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust, and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular, Whissell suggested that emotions are points in a space that seems to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way, variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissell's; instead, they tend to form an approximately circular pattern, called the 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be used as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP points and the activation and angular parameters, we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
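
    The interpolation idea can be sketched directly: given the angular positions of two neighbouring archetypal expressions on the wheel, an intermediate expression is a blend of their dominant FAP motions, scaled by the activation parameter. All values below are invented placeholders, not MPEG-4 data.

        import numpy as np

        # Illustrative FAP amplitude profiles for two neighbouring archetypal
        # expressions on the "emotion wheel" (angles and values are invented).
        PROFILES = {"joy":      {"angle": 10.0, "faps": np.array([0.8, 0.2, 0.0, 0.6])},
                    "surprise": {"angle": 80.0, "faps": np.array([0.1, 0.9, 0.7, 0.3])}}

        def intermediate_expression(angle, activation):
            """Interpolate dominant FAP motions between two neighbouring basics,
            then scale the result by the activation parameter."""
            a, b = PROFILES["joy"], PROFILES["surprise"]
            t = np.clip((angle - a["angle"]) / (b["angle"] - a["angle"]), 0.0, 1.0)
            return activation * ((1 - t) * a["faps"] + t * b["faps"])

        # An emotion term halfway between joy and surprise at moderate activation.
        print(intermediate_expression(angle=45.0, activation=0.5))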

  20. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  1. Neonatal pain facial expression: evaluating the primal face of pain.

    Science.gov (United States)

    Schiavenato, Martin; Byers, Jacquie F; Scovanner, Paul; McMahon, James M; Xia, Yinglin; Lu, Naiji; He, Hua

    2008-08-31

    The primal face of pain (PFP) is postulated to be a common and universal facial expression to pain, hardwired and present at birth. We evaluated its presence by applying a computer-based methodology consisting of "point-pair" comparisons captured from video to measure facial movement in the pain expression by way of change across two images: one image before and one image after a painful stimulus (heel-stick). Similarity of facial expression was analyzed in a sample of 57 neonates representing both sexes and 3 ethnic backgrounds (African American, Caucasian and Hispanic/Latino) while controlling for these extraneous and potentially modulating factors: feeding type (bottle, breast, or both), behavioral state (awake or asleep), and use of epidural and/or other perinatal anesthesia. The PFP is consistent with previous reports of expression of pain in neonates and is characterized by opening of the mouth, drawing in of the brows, and closing of the eyes. Although facial expression was not identical across or among groups, our analyses showed no particular clustering or unique display by sex, or ethnicity. The clinical significance of this commonality of pain display, and of the origin of its potential individual variation begs further evaluation.
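
    The point-pair methodology reduces to comparing landmark-pair distances across the before/after image pair. A minimal sketch with hypothetical landmark coordinates:

        import numpy as np

        # Hypothetical (x, y) landmark positions before and after the heel-stick.
        before = {"brow_l": (120, 80), "brow_r": (180, 80),
                  "lip_top": (150, 160), "lip_bot": (150, 175)}
        after = {"brow_l": (123, 86), "brow_r": (177, 86),
                 "lip_top": (150, 150), "lip_bot": (150, 192)}

        def pair_distance(pts, a, b):
            (x1, y1), (x2, y2) = pts[a], pts[b]
            return float(np.hypot(x1 - x2, y1 - y2))

        # Change in each point-pair distance: negative = drawn together (brows),
        # positive = opened (mouth), matching the described pain display.
        for a, b in [("brow_l", "brow_r"), ("lip_top", "lip_bot")]:
            delta = pair_distance(after, a, b) - pair_distance(before, a, b)
            print(f"{a}-{b}: {delta:+.1f} px")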

  2. The face-specific N170 component is modulated by emotional facial expression

    Directory of Open Access Journals (Sweden)

    Tottenham Nim

    2007-01-01

    Background: According to the traditional two-stage model of face processing, the face-specific N170 event-related potential (ERP) is linked to structural encoding of face stimuli, whereas later ERP components are thought to reflect processing of facial affect. This view has recently been challenged by reports of N170 modulations by emotional facial expression. This study examines the time course and topography of the influence of emotional expression on the N170 response to faces. Methods: Dense-array ERPs were recorded in response to a set (n = 16) of fear and neutral faces. Stimuli were normalized on dimensions of shape, size, and luminance contrast distribution. To minimize task effects related to facial or emotional processing, the facial stimuli were irrelevant to a primary task of learning associative pairings between a subsequently presented visual character and a spoken word. Results: The N170 to faces showed a strong modulation by emotional facial expression. A split-half analysis demonstrates that this effect was significant both early and late in the experiment and was therefore not associated only with the initial exposures to these stimuli, demonstrating a form of robustness against habituation. The emotional modulation of the N170 to faces did not show a significant interaction with the gender of the face stimulus or the hemisphere of the recording sites. Subtracting the fear versus neutral topography yielded a topography that was itself highly similar to the face N170. Conclusion: The face N170 response can be influenced by emotional expressions contained within facial stimuli. The topography of this effect is consistent with the notion that fear stimuli exaggerate the N170 response itself. This finding stands in contrast to previous models suggesting that N170 processes linked to structural analysis of faces precede analysis of emotional expression, and instead may reflect early top-down modulation from neural systems involved in

  3. Personality Trait and Facial Expression Filter-Based Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Seongah Chin

    2013-02-01

    In this paper, we present technical approaches that bridge the gap in research on the use of brain-computer interfaces for entertainment and facial expressions. Facial expressions that reflect an individual's personal traits can be used to better realize artificial facial expressions in a gaming environment based on a brain-computer interface. First, an emotion extraction filter is introduced in order to classify emotions on the basis of the user's brain signals in real time. Next, a personality trait filter is defined to classify extrovert and introvert types, which manifest at five levels: very extrovert, extrovert, medium, introvert, and very introvert. In addition, facial expressions derived from expression rates are obtained by an extrovert-introvert fuzzy model through its defuzzification process. Finally, we validate the approach via an analysis of variance of the personality trait filter, a k-fold cross-validation of the emotion extraction filter, an accuracy analysis, a user study of facial synthesis, and a test-case game.
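
    A toy version of the defuzzification step might look as follows: five triangular membership functions over an introvert-extrovert axis are clipped by the trait filter's output and collapsed to a single expression rate by the centroid rule. The membership functions and the trait score are assumptions.

        import numpy as np

        xs = np.linspace(0.0, 1.0, 201)  # universe: 0 = very introvert, 1 = very extrovert

        def tri(x, a, b, c):
            """Triangular membership function peaking at b."""
            eps = 1e-9
            return np.maximum(
                np.minimum((x - a) / (b - a + eps), (c - x) / (c - b + eps)), 0.0)

        levels = {  # five personality levels as overlapping fuzzy sets
            "very introvert": tri(xs, -0.25, 0.00, 0.25),
            "introvert":      tri(xs,  0.00, 0.25, 0.50),
            "medium":         tri(xs,  0.25, 0.50, 0.75),
            "extrovert":      tri(xs,  0.50, 0.75, 1.00),
            "very extrovert": tri(xs,  0.75, 1.00, 1.25),
        }

        trait_score = 0.62  # hypothetical output of the personality trait filter
        # Clip each set at its firing strength, aggregate by max, take the centroid.
        fired = [np.minimum(np.interp(trait_score, xs, mu), mu) for mu in levels.values()]
        aggregate = np.max(fired, axis=0)
        expression_rate = float((xs * aggregate).sum() / aggregate.sum())
        print("expression rate:", round(expression_rate, 3))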

  4. Electrocortical evidence for preferential processing of dynamic pain expressions compared to other emotional expressions.

    Science.gov (United States)

    Reicherts, Philipp; Wieser, Matthias J; Gerdes, Antje B M; Likowski, Katja U; Weyers, Peter; Mühlberger, Andreas; Pauli, Paul

    2012-09-01

    Decoding pain in others is of high individual and social benefit in terms of harm avoidance and demands for accurate care and protection. The processing of facial expressions includes both specific neural activation and automatic congruent facial muscle reactions. While a considerable number of studies investigated the processing of emotional faces, few studies specifically focused on facial expressions of pain. Analyses of brain activity and facial responses elicited by the perception of facial pain expressions in contrast to other emotional expressions may unravel the processing specificities of pain-related information in healthy individuals and may contribute to explaining attentional biases in chronic pain patients. In the present study, 23 participants viewed short video clips of neutral, emotional (joy, fear), and painful facial expressions while affective ratings, event-related brain responses, and facial electromyography (Musculus corrugator supercilii, M. orbicularis oculi, M. zygomaticus major, M. levator labii) were recorded. An emotion recognition task indicated that participants accurately decoded all presented facial expressions. Electromyography analysis suggests a distinct pattern of facial response detected in response to happy faces only. However, emotion-modulated late positive potentials revealed a differential processing of pain expressions compared to the other facial expressions, including fear. Moreover, pain faces were rated as most negative and highly arousing. Results suggest a general processing bias in favor of pain expressions. Findings are discussed in light of attentional demands of pain-related information and communicative aspects of pain expressions.

  5. Priming emotional facial expressions as evidenced by event-related brain potentials.

    Science.gov (United States)

    Werheid, Katja; Alpay, Gamze; Jentzsch, Ines; Sommer, Werner

    2005-02-01

    As human faces are important social signals in everyday life, processing of facial affect has recently entered into the focus of neuroscientific research. In the present study, priming of faces showing the same emotional expression was measured with the help of event-related potentials (ERPs) in order to investigate the temporal characteristics of processing facial expressions. Participants classified portraits of unfamiliar persons according to their emotional expression (happy or angry). The portraits were either preceded by the face of a different person expressing the same affect (primed) or the opposite affect (unprimed). ERPs revealed both early and late priming effects, independent of stimulus valence. The early priming effect was characterized by attenuated frontal ERP amplitudes between 100 and 200 ms in response to primed targets. Its dipole sources were localised in the inferior occipitotemporal cortex, possibly related to the detection of expression-specific facial configurations, and in the insular cortex, considered to be involved in affective processes. The late priming effect, an enhancement of the late positive potential (LPP) following unprimed targets, may evidence greater relevance attributed to a change of emotional expressions. Our results (i) point to the view that a change of affect-related facial configuration can be detected very early during face perception and (ii) support previous findings on the amplitude of the late positive potential being rather related to arousal than to the specific valence of an emotional signal.

  6. Perception of dynamic facial emotional expressions in adolescents with autism spectrum disorders

    NARCIS (Netherlands)

    Kessels, R.P.C.; Spee, P.S.; Hendriks, A.W.C.J.

    2010-01-01

    Previous studies have shown deficits in the perception of static emotional facial expressions in individuals with autism spectrum disorders (ASD), but results are inconclusive. Possibly, using dynamic facial stimuli expressing emotions at different levels of intensities may produce more robust

  8. Continuous emotion detection using EEG signals and facial expressions

    NARCIS (Netherlands)

    Soleymani, Mohammad; Asghari-Esfeden, Sadjad; Pantic, Maja; Fu, Yun

    Emotions play an important role in how we select and consume multimedia. Recent advances on affect detection are focused on detecting emotions continuously. In this paper, for the first time, we continuously detect valence from electroencephalogram (EEG) signals and facial expressions in response to

  9. The first facial expression recognition and analysis challenge

    NARCIS (Netherlands)

    Valstar, Michel F.; Jiang, Bihan; Mehu, Marc; Pantic, Maja; Scherer, Klaus

    2011-01-01

    Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability have come some way; for instance, there exist a number of commonly u

  10. Categorical Perception of Emotional Facial Expressions in Preschoolers

    Science.gov (United States)

    Cheal, Jenna L.; Rutherford, M. D.

    2011-01-01

    Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and tested preschoolers' discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum "felt the…

  11. The accuracy of intensity ratings of emotions from facial expressions

    Directory of Open Access Journals (Sweden)

    Kostić Aleksandra P.

    2003-01-01

    The results of a study on the accuracy of intensity ratings of emotion from facial expressions are reported. Research in the field so far has shown that spontaneous facial expressions of basic emotions are a reliable source of information about the category of emotion. The question is raised of whether this is also true for the intensity of emotion, and whether the accuracy of intensity ratings depends on the observer's sex and vocational orientation. A total of 228 observers of both sexes and of various vocational orientations rated the emotional intensity of presented facial expressions on a scale ranging from 0 to 8. The results supported the hypothesis that spontaneous facial expressions of basic emotions provide sufficient information about emotional intensity. The hypothesis of an interdependence between the accuracy of intensity ratings of emotion and the observer's sex and vocational orientation was not confirmed. However, the accuracy of intensity ratings proved to vary with the category of the emotion presented.

  13. Mirroring Facial Expressions and Emotions in Dyadic Conversations

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2016-01-01

    of smiles and laughs, and one fifth of the occurrences of raised eyebrows are mirrored in the data. Moreover some facial traits in co-occurring expressions co-occur more often than it would be expected by chance. Finally, amusement, and to a lesser extent friendliness, are often emotions shared by both...

  14. Further Evidence on Preschoolers' Interpretation of Facial Expressions.

    Science.gov (United States)

    Bullock, Merry; Russell, James A.

    1985-01-01

    Assessed through two studies the organization and basis for preschool children's (n=240) and adults' (n=60) categorization of emotions. In one, children and adults chose facial expressions that exemplify emotion categories such as fear, anger, and happiness. In another they grouped emotions differing in arousal level or pleasure-displeasure…

  15. Specificity of Facial Expression Labeling Deficits in Childhood Psychopathology

    Science.gov (United States)

    Guyer, Amanda E.; McClure, Erin B.; Adler, Abby D.; Brotman, Melissa A.; Rich, Brendan A.; Kimes, Alane S.; Pine, Daniel S.; Ernst, Monique; Leibenluft, Ellen

    2007-01-01

    Background: We examined whether face-emotion labeling deficits are illness-specific or an epiphenomenon of generalized impairment in pediatric psychiatric disorders involving mood and behavioral dysregulation. Method: Two hundred fifty-two youths (7-18 years old) completed child and adult facial expression recognition subtests from the Diagnostic…

  16. The role of facial expression in resisting enjoyable advertisements

    NARCIS (Netherlands)

    Lewiński, P.

    2015-01-01

    This dissertation on consumer resistance to enjoyable advertisements is positioned in the areas of persuasive communication and social psychology. In this thesis, it is argued that consumers can resist persuasion by controlling their facial expressions of emotion when exposed to an advertisement. Fo

  17. Self-relevance appraisal influences facial reactions to emotional body expressions.

    Directory of Open Access Journals (Sweden)

    Julie Grèzes

    People display facial reactions when exposed to others' emotional expressions, but exactly what mechanism mediates these facial reactions remains a debated issue. In this study, we manipulated two critical perceptual features that contribute to determining the significance of others' emotional expressions: the direction of attention (toward or away from the observer) and the intensity of the emotional display. Electromyographic activity over the corrugator muscle was recorded while participants observed videos of neutral to angry body expressions. Self-directed bodies induced greater corrugator activity than other-directed bodies; additionally, corrugator activity was influenced by the intensity of the expressed anger only for self-directed bodies. These data support the hypothesis that rapid facial reactions are the outcome of self-relevant emotional processing.

  18. Traditional facial tattoos disrupt face recognition processes.

    Science.gov (United States)

    Buttle, Heather; East, Julie

    2010-01-01

    Factors that are important to successful face recognition, such as features, configuration, and pigmentation/reflectance, are all subject to change when a face has been engraved with ink markings. Here we show that the application of facial tattoos, in the form of spiral patterns (typically associated with the Maori tradition of a Moko), disrupts face recognition to a similar extent as face inversion, with recognition accuracy little better than chance performance (2AFC). These results indicate that facial tattoos can severely disrupt our ability to recognise a face that previously did not have the pattern.

  19. Is Happy Facial Expression Identified by the Left or Right Hemisphere?

    Institute of Scientific and Technical Information of China (English)

    Zhou Xiaolin; Shao Liping

    2005-01-01

    A critical difference between the right hemisphere hypothesis and the valence hypothesis of emotion processing is whether the processing of happy facial expressions is lateralized to the right or the left hemisphere. In this study, participants from a Chinese sample were asked to classify happy or neutral facial expressions presented either bilaterally in both visual fields or unilaterally in the left visual field (LVF) or right visual field (RVF). They were required to make speeded responses using either the left or the right hand. It was found that for both left- and right-hand responses, happy (and neutral) expressions presented in the LVF were identified faster than happy (and neutral) expressions presented in the RVF. Bilateral presentation showed no further advantage over LVF presentation. Moreover, left-hand responses were generally faster than right-hand responses, although this effect was more pronounced for neutral expressions. These findings were interpreted as supporting the right hemisphere hypothesis, with happy expressions being identified initially by the right hemisphere.

  20. Seeing a haptically explored face: visual facial-expression aftereffect from haptic adaptation to a face.

    Science.gov (United States)

    Matsumiya, Kazumichi

    2013-10-01

    Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.

  1. Transcranial magnetic stimulation disrupts the perception and embodiment of facial expressions.

    Science.gov (United States)

    Pitcher, David; Garrido, Lúcia; Walsh, Vincent; Duchaine, Bradley C

    2008-09-03

    Theories of embodied cognition propose that recognizing facial expressions requires visual processing followed by simulation of the somatovisceral responses associated with the perceived expression. To test this proposal, we targeted the right occipital face area (rOFA) and the face region of right somatosensory cortex (rSC) with repetitive transcranial magnetic stimulation (rTMS) while participants discriminated facial expressions. rTMS selectively impaired discrimination of facial expressions at both sites but had no effect on a matched face identity task. Site specificity within the rSC was demonstrated by targeting rTMS at the face and finger regions while participants performed the expression discrimination task. rTMS targeted at the face region impaired task performance relative to rTMS targeted at the finger region. To establish the temporal course of visual and somatosensory contributions to expression processing, double-pulse TMS was delivered at different times to rOFA and rSC during expression discrimination. Accuracy dropped when pulses were delivered at 60-100 ms at rOFA and at 100-140 and 130-170 ms at rSC. These sequential impairments at rOFA and rSC support embodied accounts of expression recognition as well as hierarchical models of face processing. The results also demonstrate that nonvisual cortical areas contribute during early stages of expression processing.

  2. [Emotional intelligence and oscillatory responses on the emotional facial expressions].

    Science.gov (United States)

    Kniazev, G G; Mitrofanova, L G; Bocharov, A V

    2013-01-01

    Emotional intelligence-related differences in oscillatory responses to emotional facial expressions were investigated in 48 subjects (26 men and 22 women) aged 18-30 years. Participants were instructed to evaluate the emotional expression (angry, happy or neutral) of each presented face on an analog scale ranging from -100 (very hostile) to +100 (very friendly). High emotional intelligence (EI) participants were found to be more sensitive to the emotional content of the stimuli. This showed up both in their subjective evaluation of the stimuli and in a stronger EEG theta synchronization at an earlier (between 100 and 500 ms after face presentation) processing stage. Source localization using sLORETA showed that this effect was localized in the fusiform gyrus upon the presentation of angry faces and in the posterior cingulate gyrus upon the presentation of happy faces. At a later processing stage (500-870 ms), event-related theta synchronization in high emotional intelligence subjects was higher in the left prefrontal cortex upon the presentation of happy faces, but lower in the anterior cingulate cortex upon presentation of angry faces. This suggests the existence of a mechanism that can selectively increase positive emotions and reduce negative emotions.

  3. Surface Electromyography-Based Facial Expression Recognition in Bi-Polar Configuration

    Directory of Open Access Journals (Sweden)

    Mahyar Hamedi

    2011-01-01

    Full Text Available Problem statement: Facial expression recognition has improved recently and has become a significant issue in diagnostic and medical fields, particularly in the areas of assistive technology and rehabilitation. Apart from their usefulness, there are some problems in their applications, such as peripheral conditions, lighting, contrast and the quality of video and images. Approach: The Facial Action Coding System (FACS) and some other methods based on images or videos have been applied. This study proposed two methods for recognizing 8 different facial expressions: natural (rest), happiness in three conditions, anger, rage, gesturing 'a' as in the word 'apple', and gesturing 'no' by pulling up the eyebrows, based on three channels of SEMG in bipolar configuration. Raw signals were processed in three main steps (filtration, feature extraction and active feature selection) sequentially. Processed data were fed into Support Vector Machine and Fuzzy C-Means classifiers to be classified into 8 facial expression groups. Results: Recognition rates of 91.8% and 80.4% were achieved for FCM and SVM, respectively. Conclusion: The results confirmed sufficient accuracy and power in this field of study, and FCM showed better ability and performance in comparison with SVM. It is expected that in the near future, new approaches based on the frequency bandwidth of each facial gesture will provide better results.
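
    A minimal sketch of the three-step pipeline this record describes (filtration, feature extraction, classification), using a scikit-learn SVM; the filter cutoffs, sampling rate, features and classifier settings are illustrative assumptions, not the study's parameters:

```python
# Illustrative 3-channel SEMG expression-recognition pipeline:
# band-pass filtering -> feature extraction -> SVM classification.
# All numeric settings below are assumptions for the sketch.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

FS = 1000  # sampling rate in Hz (assumed)

def bandpass(raw, low=20.0, high=450.0, fs=FS, order=4):
    """Zero-phase band-pass filter for surface EMG."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, raw, axis=-1)

def features(trial):
    """Per-channel RMS, mean absolute value, and waveform length."""
    rms = np.sqrt(np.mean(trial ** 2, axis=-1))
    mav = np.mean(np.abs(trial), axis=-1)
    wl = np.sum(np.abs(np.diff(trial, axis=-1)), axis=-1)
    return np.concatenate([rms, mav, wl])

# trials: (n_trials, 3 channels, n_samples); labels: one of 8 expressions
trials = np.random.randn(160, 3, 2 * FS)   # placeholder data
labels = np.repeat(np.arange(8), 20)

X = np.array([features(bandpass(t)) for t in trials])
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

    A Fuzzy C-Means comparison, as in the study, would cluster the same feature matrix instead of (or alongside) the supervised SVM step.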

  4. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    Science.gov (United States)

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain.

  5. Facial Expression Recognition Deficits and Faulty Learning: Implications for Theoretical Models and Clinical Applications

    Science.gov (United States)

    Sheaffer, Beverly L.; Golden, Jeannie A.; Averett, Paige

    2009-01-01

    The ability to recognize facial expressions of emotion is integral in social interaction. Although the importance of facial expression recognition is reflected in increased research interest as well as in popular culture, clinicians may know little about this topic. The purpose of this article is to discuss facial expression recognition literature…

  6. Using Video Modeling to Teach Children with PDD-NOS to Respond to Facial Expressions

    Science.gov (United States)

    Axe, Judah B.; Evans, Christine J.

    2012-01-01

    Children with autism spectrum disorders often exhibit delays in responding to facial expressions, and few studies have examined teaching responding to subtle facial expressions to this population. We used video modeling to train 3 participants with PDD-NOS (age 5) to respond to eight facial expressions: approval, bored, calming, disapproval,…

  7. Does Parkinson's disease lead to alterations in the facial expression of pain?

    NARCIS (Netherlands)

    Priebe, Janosch A; Kunz, Miriam; Morcinek, Christian; Rieckmann, Peter; Lautenbacher, Stefan

    2015-01-01

    Hypomimia which refers to a reduced degree in facial expressiveness is a common sign in Parkinson's disease (PD). The objective of our study was to investigate how hypomimia affects PD patients' facial expression of pain. The facial expressions of 23 idiopathic PD patients in the Off-phase (without

  8. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.

    Science.gov (United States)

    Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto

    2015-04-01

    The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits

  9. Recognition of Emotional and Nonemotional Facial Expressions: A Comparison between Williams Syndrome and Autism

    Science.gov (United States)

    Lacroix, Agnes; Guidetti, Michele; Roge, Bernadette; Reilly, Judy

    2009-01-01

    The aim of our study was to compare two neurodevelopmental disorders (Williams syndrome and autism) in terms of the ability to recognize emotional and nonemotional facial expressions. The comparison of these two disorders is particularly relevant to the investigation of face processing and should contribute to a better understanding of social…

  11. Algorithms for Facial Expression Action Tracking and Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    李於俊; 汪增福

    2011-01-01

    For each frame in a facial video sequence, an algorithm for static facial expression recognition is first proposed: facial expression is classified after the facial action parameters have been extracted, according to physiological knowledge of expressions. To cope with insufficient knowledge, an algorithm combining static and dynamic facial expression recognition is then proposed, in which facial actions and facial expressions are retrieved simultaneously using a stochastic framework based on multi-class expressional Markov chains, particle filtering, and facial expression knowledge. Experimental results confirm the effectiveness of these algorithms.
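
    The dynamic algorithm rests on a particle filter over facial action parameters. A generic bootstrap particle-filter skeleton of that kind is sketched below; the random-walk dynamics and Gaussian likelihood are placeholders, not the paper's models:

```python
# Generic bootstrap particle filter for tracking facial action parameters.
# Dynamics and likelihood are placeholder stand-ins for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles, noise=0.05):
    """Random-walk dynamics over facial action parameters (placeholder)."""
    return particles + noise * rng.standard_normal(particles.shape)

def likelihood(particles, observation):
    """Placeholder observation likelihood: Gaussian around the measurement."""
    d2 = np.sum((particles - observation) ** 2, axis=1)
    return np.exp(-0.5 * d2 / 0.1)

n_particles, dim = 500, 6              # 6 facial action parameters, assumed
particles = rng.standard_normal((n_particles, dim))

for observation in rng.standard_normal((30, dim)):  # 30 frames, placeholder
    particles = propagate(particles)
    w = likelihood(particles, observation)
    w /= w.sum()
    estimate = w @ particles           # posterior-mean facial action estimate
    idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
    particles = particles[idx]
```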

  12. Single-trial ERP analysis reveals facial expression category in a three-stage scheme.

    Science.gov (United States)

    Zhang, Dandan; Luo, Wenbo; Luo, Yuejia

    2013-05-28

    Emotional faces are salient stimuli that play a critical role in social interactions. Following up on previous research suggesting that event-related potentials (ERPs) show differential amplitudes in response to various facial expressions, the current study used trial-to-trial variability assembled from six discriminating ERP components to predict the facial expression categories in individual trials. In an experiment involving 17 participants, fearful trials were differentiated from non-fearful trials as early as the intervals of N1 and P1, with a mean predictive accuracy of 87%. Single-trial features in the occurrence of N170 and vertex positive potential could distinguish between emotional and neutral expressions (accuracy=90%). Finally, the trials associated with fearful, happy, and neutral faces were completely separated during the window of N3 and P3 (accuracy=83%). These categorization findings elucidated the temporal evolution of facial expression extraction, and demonstrated that the spatio-temporal characteristics of single-trial ERPs can distinguish facial expressions according to a three-stage scheme, with "fear popup," "emotional/unemotional discrimination," and "complete separation" as processing stages. This work constitutes the first examination of neural processing dynamics beyond multitrial ERP averaging, and directly relates the prediction performance of single-trial classifiers to the progressive brain functions of emotional face discrimination.
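
    A sketch of single-trial classification from ERP component amplitudes, in the spirit of this record; the component windows, channel count, and LDA classifier are illustrative assumptions, not the study's settings:

```python
# Classify single trials from mean amplitudes in ERP component windows.
# Windows, sampling rate, and classifier are assumptions for the sketch.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 500                       # sampling rate in Hz, assumed
WINDOWS = {"P1": (0.08, 0.13), "N170": (0.13, 0.20), "P3": (0.30, 0.50)}

def component_features(epochs, t0=-0.2):
    """Mean amplitude per channel within each component window."""
    feats = []
    for lo, hi in WINDOWS.values():
        a, b = int((lo - t0) * FS), int((hi - t0) * FS)
        feats.append(epochs[:, :, a:b].mean(axis=-1))
    return np.concatenate(feats, axis=1)

# epochs: (n_trials, n_channels, n_samples); labels: fear / happy / neutral
epochs = np.random.randn(300, 32, 500)   # placeholder data, -0.2 to 0.8 s
labels = np.repeat([0, 1, 2], 100)

X = component_features(epochs)
acc = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=10).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```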

  13. Investigating the brain basis of facial expression perception using multi-voxel pattern analysis.

    Science.gov (United States)

    Wegrzyn, Martin; Riehle, Marcel; Labudda, Kirsten; Woermann, Friedrich; Baumgartner, Florian; Pollmann, Stefan; Bien, Christian G; Kissler, Johanna

    2015-08-01

    Humans can readily decode emotion expressions from faces and perceive them in a categorical manner. The model by Haxby and colleagues proposes a number of different brain regions with each taking over specific roles in face processing. One key question is how these regions directly compare to one another in successfully discriminating between various emotional facial expressions. To address this issue, we compared the predictive accuracy of all key regions from the Haxby model using multi-voxel pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data. Regions of interest were extracted using independent meta-analytical data. Participants viewed four classes of facial expressions (happy, angry, fearful and neutral) in an event-related fMRI design, while performing an orthogonal gender recognition task. Activity in all regions allowed for robust above-chance predictions. When directly comparing the regions to one another, fusiform gyrus and superior temporal sulcus (STS) showed highest accuracies. These results underscore the role of the fusiform gyrus as a key region in perception of facial expressions, alongside STS. The study suggests the need for further specification of the relative role of the various brain areas involved in the perception of facial expression. Face processing appears to rely on more interactive and functionally overlapping neural mechanisms than previously conceptualised.
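
    A minimal MVPA sketch of the ROI comparison this record describes: train a linear classifier on the voxel patterns of each region and compare cross-validated decoding accuracies. ROI names, shapes and data are placeholders, not the study's regions or results:

```python
# Compare expression-decoding accuracy across regions of interest (ROIs)
# with a linear SVM; all data below are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rois = {   # voxel patterns per ROI: (n_trials, n_voxels)
    "fusiform_gyrus": np.random.randn(160, 300),
    "STS": np.random.randn(160, 250),
    "amygdala": np.random.randn(160, 80),
}
labels = np.repeat([0, 1, 2, 3], 40)   # happy / angry / fearful / neutral

for name, X in rois.items():
    clf = make_pipeline(StandardScaler(), LinearSVC(max_iter=5000))
    acc = cross_val_score(clf, X, labels, cv=8).mean()
    print(f"{name}: mean decoding accuracy = {acc:.2f} (chance = 0.25)")
```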

  14. Neural substrates of human facial expression of pleasant emotion induced by comic films: a PET Study.

    Science.gov (United States)

    Iwase, Masao; Ouchi, Yasuomi; Okada, Hiroyuki; Yokoyama, Chihiro; Nobezawa, Shuji; Yoshikawa, Etsuji; Tsukada, Hideo; Takeda, Masaki; Yamashita, Ko; Takeda, Masatoshi; Yamaguti, Kouzi; Kuratsune, Hirohiko; Shimizu, Akira; Watanabe, Yasuyoshi

    2002-10-01

    Laughter or smile is one of the emotional expressions of pleasantness, with characteristic contraction of the facial muscles, of which the neural substrate remains to be explored. This study is the first to investigate the generation of human facial expression of pleasant emotion using positron emission tomography and H(2)(15)O. Regional cerebral blood flow (rCBF) during laughter/smile induced by visual comics and the magnitude of laughter/smile showed significant correlation in the bilateral supplementary motor area (SMA) and left putamen (P < 0.05, corrected), but no correlation in the primary motor area (M1). In voluntary facial movement, significant correlation between rCBF and the magnitude of EMG was found in the face area of bilateral M1 and the SMA (P < 0.001, uncorrected). Laughter/smile, as opposed to voluntary movement, activated the visual association areas, left anterior temporal cortex, left uncus, and orbitofrontal and medial prefrontal cortices (P < 0.05, corrected), whereas voluntary facial movement generated by mimicking a laughing/smiling face activated the face area of the left M1 and bilateral SMA, compared with laughter/smile (P < 0.05, corrected). We demonstrated distinct neural substrates of emotional and volitional facial expression and defined cognitive and experiential processes of a pleasant emotion, laughter/smile.

  15. Facial expressions of emotion are not culturally universal.

    Science.gov (United States)

    Jack, Rachael E; Garrod, Oliver G B; Yu, Hui; Caldara, Roberto; Schyns, Philippe G

    2012-05-08

    Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.

  16. Facial Expression Recognition Teaching to Preschoolers with Autism

    DEFF Research Database (Denmark)

    Christinaki, Eirini; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2013-01-01

    The recognition of facial expressions is important for the perception of emotions. Understanding emotions is essential in human communication and social interaction. Children with autism have been reported to exhibit deficits in the recognition of affective expressions. Their difficulties… …for teaching emotion recognition from facial expressions should occur as early as possible in order to be successful and to have a positive effect. It is claimed that Serious Games can be very effective in the areas of therapy and education for children with autism. However, those computer interventions require considerable skills for interaction. Before the age of 6, most children with autism do not have such basic motor skills in order to manipulate a mouse or a keyboard. Our approach takes account of the specific characteristics of preschoolers with autism and their physical inabilities. By creating…

  17. Forming impressions: effects of facial expression and gender stereotypes.

    Science.gov (United States)

    Hack, Tay

    2014-04-01

    The present study of 138 participants explored how facial expressions and gender stereotypes influence impressions. It was predicted that images of smiling women would be evaluated more favorably on traits reflecting warmth, and that images of non-smiling men would be evaluated more favorably on traits reflecting competence. As predicted, smiling female faces were rated as warmer; however, contrary to prediction, perceived competence of male faces was not affected by facial expression. Participants' female stereotype endorsement was a significant predictor for evaluations of female faces; those who subscribed more strongly to traditional female stereotypes reported the most positive impressions of female faces displaying a smiling expression. However, a similar effect was not found for images of men; endorsement of traditional male stereotypes did not predict participants' impressions of male faces.

  18. Spontaneous facial expression in unscripted social interactions can be measured automatically.

    Science.gov (United States)

    Girard, Jeffrey M; Cohn, Jeffrey F; Jeni, Laszlo A; Sayette, Michael A; De la Torre, Fernando

    2015-12-01

    Methods to assess individual facial actions have potential to shed light on important behavioral phenomena ranging from emotion and social interaction to psychological disorders and health. However, manual coding of such actions is labor intensive and requires extensive training. To date, establishing reliable automated coding of unscripted facial actions has been a daunting challenge impeding development of psychological theories and applications requiring facial expression assessment. It is therefore essential that automated coding systems be developed with enough precision and robustness to ease the burden of manual coding in challenging data involving variation in participant gender, ethnicity, head pose, speech, and occlusion. We report a major advance in automated coding of spontaneous facial actions during an unscripted social interaction involving three strangers. For each participant (n = 80, 47% women, 15% nonwhite), 25 facial action units (AUs) were manually coded from video using the Facial Action Coding System. Twelve AUs occurred more than 3% of the time and were processed using automated FACS coding. Automated coding showed very strong reliability for the proportion of time that each AU occurred (mean intraclass correlation = 0.89), and the more stringent criterion of frame-by-frame reliability was moderate to strong (mean Matthews correlation = 0.61). With few exceptions, differences in AU detection related to gender, ethnicity, pose, and average pixel intensity were small. Fewer than 6% of frames could be coded manually but not automatically. These findings suggest automated FACS coding has progressed sufficiently to be applied to observational research in emotion and related areas of study.
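
    A sketch of the two reliability criteria this record uses: intraclass correlation for per-participant AU occurrence proportions and Matthews correlation for frame-by-frame agreement. The ICC(2,1) helper implements the standard textbook formula; the data are placeholders, not the study's codings:

```python
# Reliability of automated vs. manual AU coding: ICC(2,1) on occurrence
# proportions, Matthews correlation on frame-by-frame binary codes.
import numpy as np
from sklearn.metrics import matthews_corrcoef

def icc_2_1(x):
    """ICC(2,1) for an (n_subjects, k_raters) matrix (textbook formula)."""
    n, k = x.shape
    grand = x.mean()
    msr = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)   # rows
    msc = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
    sse = np.sum((x - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Proportion of time an AU occurred, manual vs automated (placeholder data)
props = np.random.rand(80, 2)
print("ICC(2,1):", icc_2_1(props))

# Frame-by-frame binary AU codes, manual vs automated (placeholder data)
manual = np.random.randint(0, 2, 10000)
auto = np.random.randint(0, 2, 10000)
print("MCC:", matthews_corrcoef(manual, auto))
```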

  19. Facial emotion and identity processing development in 5- to 15-year-old children.

    Science.gov (United States)

    Johnston, Patrick J; Kaufman, Jordy; Bajic, Julie; Sercombe, Alicia; Michie, Patricia T; Karayanidis, Frini

    2011-01-01

    Most developmental studies of emotional face processing to date have focused on infants and very young children. Additionally, studies that examine emotional face processing in older children do not distinguish development in emotion and identity face processing from more generic age-related cognitive improvement. In this study, we developed a paradigm that measures processing of facial expression in comparison to facial identity and complex visual stimuli. The three matching tasks were developed (i.e., facial emotion matching, facial identity matching, and butterfly wing matching) to include stimuli of similar level of discriminability and to be equated for task difficulty in earlier samples of young adults. Ninety-two children aged 5-15 years and a new group of 24 young adults completed these three matching tasks. Young children were highly adept at the butterfly wing task relative to their performance on both face-related tasks. More importantly, in older children, development of facial emotion discrimination ability lagged behind that of facial identity discrimination.

  20. Facial Emotion and Identity Processing Development in 5- to 15-Year-Old Children

    Directory of Open Access Journals (Sweden)

    Patrick eJohnston

    2011-02-01

    Full Text Available Most developmental studies of emotional face processing to date have focused on infants and very young children. Additionally, studies that examine emotional face processing in older children do not distinguish development in emotion and identity face processing from more generic age-related cognitive improvement. In this study, we developed a paradigm that measures processing of facial expression in comparison to facial identity and complex visual stimuli. The three matching tasks were developed (i.e., facial emotion matching, facial identity matching and butterfly wing matching) to include stimuli of similar level of discriminability and to be equated for task difficulty in earlier samples of young adults. Ninety-two children aged 5 to 15 years and a new group of 24 young adults completed these three matching tasks. Young children were highly adept at the butterfly wing task relative to their performance on both face-related tasks. More importantly, in older children, development of facial emotion discrimination ability lagged behind that of facial identity discrimination.

  1. Social alienation in schizophrenia patients: association with insula responsiveness to facial expressions of disgust.

    Directory of Open Access Journals (Sweden)

    Christian Lindner

    Full Text Available INTRODUCTION: Among the functional neuroimaging studies on emotional face processing in schizophrenia, few have used paradigms with facial expressions of disgust. In this study, we investigated whether schizophrenia patients show less insula activation to macro-expressions (overt, clearly visible expressions) and micro-expressions (covert, very brief expressions) of disgust than healthy controls. Furthermore, departing from the assumption that disgust faces signal social rejection, we examined whether perceptual sensitivity to disgust is related to social alienation in patients and controls. We hypothesized that high insula responsiveness to facial disgust predicts social alienation. METHODS: We used functional magnetic resonance imaging to measure insula activation in 36 schizophrenia patients and 40 healthy controls. During scanning, subjects passively viewed covert and overt presentations of disgust and neutral faces. To measure social alienation, a social loneliness scale and an agreeableness scale were administered. RESULTS: Schizophrenia patients exhibited reduced insula activation in response to covert facial expressions of disgust. With respect to macro-expressions of disgust, no between-group differences emerged. In patients, insula responsiveness to covert faces of disgust was positively correlated with social loneliness. Furthermore, patients' insula responsiveness to covert and overt faces of disgust was negatively correlated with agreeableness. In controls, insula responsiveness to covert expressions of disgust correlated negatively with agreeableness. DISCUSSION: Schizophrenia patients show reduced insula responsiveness to micro-expressions but not macro-expressions of disgust compared to healthy controls. In patients, low agreeableness was associated with stronger insula response to micro- and macro-expressions of disgust. Patients with a strong tendency to feel uncomfortable with social interactions appear to be characterized by a

  2. Effects of Facial Expressions on Recognizing Emotions in Dance Movements

    Directory of Open Access Journals (Sweden)

    Nao Shikanai

    2011-10-01

    Full Text Available Effects of facial expressions on recognizing emotions expressed in dance movements were investigated. Dancers expressed three emotions: joy, sadness, and anger through dance movements. We used digital video cameras and a 3D motion capturing system to record and capture the movements. We then created full-video displays with an expressive face, full-video displays with an unexpressive face, stick figure displays (no face), or point-light displays (no face) from these data using 3D animation software. To make point-light displays, 13 markers were attached to the body of each dancer. We examined how accurately observers were able to identify the expression that the dancers intended to create through their dance movements. Dance-experienced and inexperienced observers participated in the experiment. They watched the movements and rated the compatibility of each emotion with each movement on a 5-point Likert scale. The results indicated that both experienced and inexperienced observers could identify all the emotions that dancers intended to express. Identification scores for dance movements with an expressive face were higher than for the other display types. This finding indicates that facial expressions affect the identification of emotions in dance movements, while bodily expressions alone provide sufficient information to recognize emotions.

  3. The facial expression of schizophrenic patients applied with infrared thermal facial image sequence

    National Research Council Canada - National Science Library

    Bo-Lin Jian; Chieh-Li Chen; Wen-Lin Chu; Min-Wei Huang

    2017-01-01

    … Thus, this study used non-contact infrared thermal facial images (ITFIs) to analyze facial temperature changes evoked by different emotions in moderately and markedly ill schizophrenia patients…

  4. A Modified Sparse Representation Method for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-01-01

    Full Text Available In this paper, we carry out research on a facial expression recognition method based on a modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of taking the dictionary directly from samples, and add block dictionary training into the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise on a self-built database as well as Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition effect and time efficiency. The results of the simulation experiments show that the coefficients of the MSRR method contain classifying information, which is capable of improving the computing speed and achieving a satisfying recognition result.
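
    A simplified sparse-representation classification (SRC) sketch in the spirit of this record: code a test face over a dictionary of training faces and assign the class with the smallest reconstruction residual. Plain OMP stands in here for the paper's LC-K-SVD dictionary training and stOMP pursuit; all data are placeholders:

```python
# Sparse-representation classification with orthogonal matching pursuit:
# the class whose training atoms best reconstruct the test face wins.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_predict(D, classes, y, n_nonzero=30):
    D = D / np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(D, y)                               # solve y ~ D x, x sparse
    x = omp.coef_
    residuals = {}
    for c in np.unique(classes):
        xc = np.where(classes == c, x, 0.0)     # keep class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)
    return min(residuals, key=residuals.get)

# D: (n_features, n_train) dictionary of vectorized faces; placeholder data
D = np.random.randn(1024, 210)
classes = np.repeat(np.arange(7), 30)           # 7 expressions x 30 samples
y = np.random.randn(1024)
print("predicted expression:", src_predict(D, classes, y))
```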

  5. Americans and Palestinians judge spontaneous facial expressions of emotion.

    Science.gov (United States)

    Kayyal, Mary H; Russell, James A

    2013-10-01

    The claim that certain emotions are universally recognized from facial expressions is based primarily on the study of expressions that were posed. The current study was of spontaneous facial expressions shown by aborigines in Papua New Guinea (Ekman, 1980); 17 faces claimed to convey one (or, in the case of blends, two) basic emotions and five faces claimed to show other universal feelings. For each face, participants rated the degree to which each of the 12 predicted emotions or feelings was conveyed. The modal choice for English-speaking Americans (n = 60), English-speaking Palestinians (n = 60), and Arabic-speaking Palestinians (n = 44) was the predicted label for only 4, 5, and 4, respectively, of the 17 faces for basic emotions, and for only 2, 2, and 2, respectively, of the 5 faces for other feelings. Observers endorsed the predicted emotion or feeling moderately often (65%, 55%, and 44%), but also denied it moderately often (35%, 45%, and 56%). They also endorsed more than one (or, for blends, two) label(s) in each face-on average, 2.3, 2.3, and 1.5 of basic emotions and 2.6, 2.2, and 1.5 of other feelings. There were both similarities and differences across culture and language, but the emotional meaning of a facial expression is not well captured by the predicted label(s) or, indeed, by any single label.

  6. Face or body? Oxytocin improves perception of emotions from facial expressions in incongruent emotional body context.

    Science.gov (United States)

    Perry, Anat; Aviezer, Hillel; Goldstein, Pavel; Palgi, Sharon; Klein, Ehud; Shamay-Tsoory, Simone G

    2013-11-01

    The neuropeptide oxytocin (OT) has been repeatedly reported to play an essential role in the regulation of social cognition in humans in general, and specifically in enhancing the recognition of emotions from facial expressions. The latter was assessed in different paradigms that rely primarily on isolated and decontextualized emotional faces. However, recent evidence has indicated that the perception of basic facial expressions is not context invariant and can be categorically altered by context, especially body context, at early perceptual levels. Body context has a strong effect on our perception of emotional expressions, especially when the actual target face and the contextually expected face are perceptually similar. To examine whether and how OT affects emotion recognition, we investigated the role of OT in categorizing facial expressions in incongruent body contexts. Our results show that in the combined process of deciphering emotions from facial expressions and from context, OT gives an advantage to the face. This advantage is most evident when the target face and the contextually expected face are perceptually similar.

  7. Facial expression identification using 3D geometric features from Microsoft Kinect device

    Science.gov (United States)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions. More recently, the device has been increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smiling, sadness, etc. and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral emotion represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
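
    A sketch of the approach this record describes: per-frame features are distances from face-mesh points to reference points, sequences are compared with dynamic time warping (DTW), and a 1-nearest-neighbour rule labels the expression. Shapes, reference points and data are placeholders:

```python
# Distance-based geometric features per frame + DTW + 1-NN classification.
import numpy as np

def frame_features(mesh, refs):
    """Distances from each selected mesh point to each reference point."""
    return np.linalg.norm(mesh[:, None, :] - refs[None, :, :], axis=-1).ravel()

def dtw(a, b):
    """Dynamic-time-warping distance between two feature sequences."""
    n, m = len(a), len(b)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1], acc[i - 1, j - 1])
    return acc[n, m]

def knn_predict(train_seqs, train_labels, test_seq):
    d = [dtw(s, test_seq) for s in train_seqs]
    return train_labels[int(np.argmin(d))]      # 1-nearest neighbour

# Each sequence: (n_frames, n_features), neutral -> apex -> neutral
refs = np.random.randn(4, 3)                    # 4 reference points, assumed
train_seqs = [np.array([frame_features(np.random.randn(10, 3), refs)
                        for _ in range(20)]) for _ in range(12)]
train_labels = np.repeat(["smile", "surprise", "sad"], 4)
test_seq = train_seqs[0]
print("predicted:", knn_predict(train_seqs, train_labels, test_seq))
```

    DTW absorbs differences in expression speed across sequences, which is why it suits neutral-to-apex-to-neutral recordings of varying length.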

  8. A 3D Facial Expression Tracking Method Using Piecewise Deformations

    Directory of Open Access Journals (Sweden)

    Jing Chi

    2013-02-01

    Full Text Available We present a new fast method for 3D facial expression tracking based on piecewise non-rigid deformations. Our method takes as input a video-rate sequence of face meshes that record the shape and time-varying expressions of a human face, and deforms a source mesh to match each input mesh to output a new mesh sequence with the same connectivity that reflects the facial shape and expressional variations. In mesh matching, we automatically segment the source mesh and estimate a non-rigid transformation for each segment to approximate the input mesh closely. Piecewise non-rigid transformation significantly reduces computational complexity and improves tracking speed because it greatly decreases the unknowns to be estimated. Our method can also achieve desired tracking accuracy because segmentation can be adjusted automatically and flexibly to approximate arbitrary deformations on the input mesh. Experiments demonstrate the efficiency of our method.
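
    The core step in such piecewise matching is estimating one transformation per mesh segment. Below is a sketch using a least-squares affine fit per segment as a simple stand-in for the paper's non-rigid transforms; the segmentation is given here, whereas the paper derives it automatically:

```python
# Fit one affine transformation per mesh segment so the deformed source
# points approximate the corresponding input-mesh points (least squares).
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map M with dst ~ [src, 1] @ M."""
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)    # (4, 3) for 3-D points
    return M

def apply_affine(src, M):
    return np.hstack([src, np.ones((len(src), 1))]) @ M

# Per-segment point correspondences (placeholder data)
segments = {"brow": (np.random.randn(50, 3), np.random.randn(50, 3)),
            "mouth": (np.random.randn(80, 3), np.random.randn(80, 3))}

for name, (src, dst) in segments.items():
    M = fit_affine(src, dst)
    err = np.linalg.norm(apply_affine(src, M) - dst, axis=1).mean()
    print(f"{name}: mean residual = {err:.3f}")
```

    Estimating a handful of per-segment parameters instead of per-vertex displacements is what reduces the unknowns and speeds up tracking, as the abstract notes.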

  9. Facial expression recognition in Alzheimer's disease: a longitudinal study.

    Science.gov (United States)

    Torres, Bianca; Santos, Raquel Luiza; Sousa, Maria Fernanda Barroso de; Simões Neto, José Pedro; Nogueira, Marcela Moreira Lima; Belfort, Tatiana T; Dias, Rachel; Dourado, Marcia Cristina Nascimento

    2015-05-01

    Facial recognition is one of the most important aspects of social cognition. In this study, we investigate the patterns of change and the factors involved in the ability to recognize emotion in mild Alzheimer's disease (AD). Through a longitudinal design, we assessed 30 people with AD. We used an experimental task that includes matching expressions with picture stimuli, labelling emotions and emotionally recognizing a stimulus situation. We observed a significant difference in the situational recognition task (p ≤ 0.05) between baseline and the second evaluation. The linear regression showed that cognition is a predictor of emotion recognition impairment (p ≤ 0.05). The ability to perceive emotions from facial expressions was impaired, particularly when the emotions presented were relatively subtle. Cognition is recruited to comprehend emotional situations in cases of mild dementia.

  10. Affective priming using facial expressions modulates liking for abstract art.

    Directory of Open Access Journals (Sweden)

    Albert Flexas

    Full Text Available We examined the influence of affective priming on the appreciation of abstract artworks using an evaluative priming task. Facial primes (showing happiness, disgust or no emotion) were presented under brief (Stimulus Onset Asynchrony, SOA = 20 ms) and extended (SOA = 300 ms) conditions. Differences in aesthetic liking for abstract paintings depending on the emotion expressed in the preceding primes provided a measure of the priming effect. The results showed that, for the extended SOA, artworks were liked more when preceded by happiness primes and less when preceded by disgust primes. Facial expressions of happiness, though not of disgust, exerted similar effects in the brief SOA condition. Subjective measures and a forced-choice task revealed no evidence of prime awareness in the suboptimal condition. Our results are congruent with findings showing that the affective transfer elicited by priming biases evaluative judgments, extending previous research to the domain of aesthetic appreciation.

  11. Affective Priming Using Facial Expressions Modulates Liking for Abstract Art

    Science.gov (United States)

    Flexas, Albert; Rosselló, Jaume; Christensen, Julia F.; Nadal, Marcos; Olivera La Rosa, Antonio; Munar, Enric

    2013-01-01

    We examined the influence of affective priming on the appreciation of abstract artworks using an evaluative priming task. Facial primes (showing happiness, disgust or no emotion) were presented under brief (Stimulus Onset Asynchrony, SOA = 20 ms) and extended (SOA = 300 ms) conditions. Differences in aesthetic liking for abstract paintings depending on the emotion expressed in the preceding primes provided a measure of the priming effect. The results showed that, for the extended SOA, artworks were liked more when preceded by happiness primes and less when preceded by disgust primes. Facial expressions of happiness, though not of disgust, exerted similar effects in the brief SOA condition. Subjective measures and a forced-choice task revealed no evidence of prime awareness in the suboptimal condition. Our results are congruent with findings showing that the affective transfer elicited by priming biases evaluative judgments, extending previous research to the domain of aesthetic appreciation. PMID:24260350
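
    The priming effect in this design is a simple difference score: mean liking after emotional primes minus mean liking after neutral primes. A minimal sketch of that computation; column names and data are hypothetical placeholders:

```python
# Compute mean liking per prime condition and difference scores
# against the neutral-prime baseline (placeholder ratings).
import pandas as pd

ratings = pd.DataFrame({
    "prime": ["happy", "disgust", "neutral"] * 40,
    "soa_ms": [300] * 120,
    "liking": pd.Series(range(120)) % 7 + 1,   # placeholder 1-7 ratings
})

means = ratings.groupby("prime")["liking"].mean()
print(means)
print("happiness priming effect:", means["happy"] - means["neutral"])
print("disgust priming effect:", means["disgust"] - means["neutral"])
```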

  12. UX_Mate: from facial expressions to UX evaluation

    DEFF Research Database (Denmark)

    Staiano, Jacopo; Menendez Blanco, Maria; Battocchi, Alberto

    2012-01-01

    In this paper we propose and evaluate UX_Mate, a non-invasive system for the automatic assessment of User eXperience (UX). In addition, we contribute a novel database of annotated and synchronized videos of interactive behavior and facial expressions. UX_Mate is a modular system which tracks facial expressions of users, interprets them based on pre-set rules, and generates predictions about the occurrence of a target emotional state, which can be linked to interaction events. The system simplifies UX evaluation providing an indication of event occurrence. UX_Mate has several advantages compared to other state-of-the-art systems: easy deployment in the user's natural environment, avoidance of invasive devices, and extreme cost reduction. The paper reports a pilot and a validation study on a total of 46 users, where UX_Mate was used for identifying interaction difficulties. The studies show encouraging…
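
    A minimal sketch of the rule-based interpretation step such a system might use: map per-frame action-unit intensities to a target emotional state via pre-set rules and link detections to timestamps. The rules, AU names and thresholds here are invented for illustration, not UX_Mate's actual rule set:

```python
# Rule-based mapping from tracked action-unit (AU) intensities to a
# predicted emotional state; rules and thresholds are hypothetical.
from typing import Dict, List

RULES = {  # target state -> AUs that must all exceed a threshold (assumed)
    "frustration": {"AU4": 0.5, "AU7": 0.4},   # brow lowerer + lid tightener
    "delight": {"AU6": 0.5, "AU12": 0.5},      # cheek raiser + lip corner pull
}

def detect(frame_aus: Dict[str, float]) -> List[str]:
    return [state for state, req in RULES.items()
            if all(frame_aus.get(au, 0.0) >= thr for au, thr in req.items())]

frames = [{"AU4": 0.7, "AU7": 0.6, "t": 1.2},
          {"AU6": 0.8, "AU12": 0.9, "t": 3.4}]
for f in frames:
    for state in detect(f):
        print(f"t={f['t']}s: predicted {state} (link to interaction event)")
```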

  14. Deficits in the Mimicry of Facial Expressions in Parkinson's Disease.

    Science.gov (United States)

    Livingstone, Steven R; Vezer, Esztella; McGarry, Lucy M; Lang, Anthony E; Russo, Frank A

    2016-01-01

    Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behavior may be impaired in individuals with Parkinson's disease, for whom the loss of facial movements is a clinical feature. Our aim was to assess the presence of facial mimicry in patients with Parkinson's disease. Twenty-seven non-depressed patients with idiopathic Parkinson's disease and 28 age-matched controls had their facial muscles recorded with electromyography while they observed presentations of calm, happy, sad, angry, and fearful emotions. Patients exhibited reduced amplitude and delayed onset in the zygomaticus major muscle region (smiling response) following happy presentations (patients M = 0.02, 95% confidence interval [CI] -0.15 to 0.18; controls M = 0.26, CI 0.14 to 0.37; ANOVA, effect size [ES] = 0.18, p …). Patients showed reduced mimicry overall, mimicking other people's frowns to some extent, but presenting with profoundly weakened and delayed smiles. These findings open a new avenue of inquiry into the "masked face" syndrome of PD.

  15. Mukha Mo: A Preliminary Study on Filipino Facial Expressions

    Directory of Open Access Journals (Sweden)

    Richard Jonathan O. Taduran

    2012-12-01

    Full Text Available This study tested the universality hypothesis on facial expression judgment by applying cross-cultural agreement tests on Filipinos. The Facial Action Coding System constructed by Ekman and Friesen (1976) was used as the basis for creating stimulus photos that 101 college student observers were made to identify. Contextualization for each emotion was also solicited from subjects to provide qualitative bases for their judgments. The results showed that for five of the six emotions studied, excepting fear, the majority of the observers judged the expressions as predicted. The judgment of happiness supplied the strongest evidence for universality, having the highest correctness rate and inter-observer agreement. There was also high agreement among observers and between Filipinos and other cultures about the most intense and second most intense emotion signaled by each stimulus for these five emotions. Difficulty with the recognition of fear, as well as its common association with the emotion of sadness, was found. Such findings shall serve as baseline data for the study of facial expressions in the Philippines.

  16. Beyond face value: does involuntary emotional anticipation shape the perception of dynamic facial expressions?

    Directory of Open Access Journals (Sweden)

    Letizia Palumbo

    Full Text Available Emotional facial expressions are immediate indicators of the affective dispositions of others. Recently it has been shown that early stages of social perception can already be influenced by (implicit) attributions made by the observer about the agent's mental state and intentions. In the current study possible mechanisms underpinning distortions in the perception of dynamic, ecologically-valid, facial expressions were explored. In four experiments we examined to what extent basic perceptual processes such as contrast/context effects, adaptation and representational momentum underpinned the perceptual distortions, and to what extent 'emotional anticipation', i.e. the involuntary anticipation of the other's emotional state of mind on the basis of the immediate perceptual history, might have played a role. Neutral facial expressions displayed at the end of short video-clips, in which an initial facial expression of joy or anger gradually morphed into a neutral expression, were misjudged as being slightly angry or slightly happy, respectively (Experiment 1). This response bias disappeared when the actor's identity changed in the final neutral expression (Experiment 2). Videos depicting neutral-to-joy-to-neutral and neutral-to-anger-to-neutral sequences again produced biases but in opposite direction (Experiment 3). The bias survived insertion of a 400 ms blank (Experiment 4). These results suggested that the perceptual distortions were not caused by any of the low-level perceptual mechanisms (adaptation, representational momentum and contrast effects). We speculate that especially when presented with dynamic, facial expressions, perceptual distortions occur that reflect 'emotional anticipation' (a low-level mindreading mechanism), which overrules low-level visual mechanisms. Underpinning neural mechanisms are discussed in relation to the current debate on action and emotion understanding.

  17. Beyond face value: does involuntary emotional anticipation shape the perception of dynamic facial expressions?

    Science.gov (United States)

    Palumbo, Letizia; Jellema, Tjeerd

    2013-01-01

    Emotional facial expressions are immediate indicators of the affective dispositions of others. Recently it has been shown that early stages of social perception can already be influenced by (implicit) attributions made by the observer about the agent's mental state and intentions. In the current study possible mechanisms underpinning distortions in the perception of dynamic, ecologically-valid, facial expressions were explored. In four experiments we examined to what extent basic perceptual processes such as contrast/context effects, adaptation and representational momentum underpinned the perceptual distortions, and to what extent 'emotional anticipation', i.e. the involuntary anticipation of the other's emotional state of mind on the basis of the immediate perceptual history, might have played a role. Neutral facial expressions displayed at the end of short video-clips, in which an initial facial expression of joy or anger gradually morphed into a neutral expression, were misjudged as being slightly angry or slightly happy, respectively (Experiment 1). This response bias disappeared when the actor's identity changed in the final neutral expression (Experiment 2). Videos depicting neutral-to-joy-to-neutral and neutral-to-anger-to-neutral sequences again produced biases but in opposite direction (Experiment 3). The bias survived insertion of a 400 ms blank (Experiment 4). These results suggested that the perceptual distortions were not caused by any of the low-level perceptual mechanisms (adaptation, representational momentum and contrast effects). We speculate that especially when presented with dynamic, facial expressions, perceptual distortions occur that reflect 'emotional anticipation' (a low-level mindreading mechanism), which overrules low-level visual mechanisms. Underpinning neural mechanisms are discussed in relation to the current debate on action and emotion understanding.

  18. Role of temporal processing stages by inferior temporal neurons in facial recognition

    Directory of Open Access Journals (Sweden)

    Yasuko eSugase-Miyamoto

    2011-06-01

    Full Text Available In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of

  19. The recognition of facial expressions of emotion in Alzheimer's disease: a review of findings.

    Science.gov (United States)

    McLellan, Tracey; Johnston, Lucy; Dalrymple-Alford, John; Porter, Richard

    2008-10-01

    To provide a selective review of the literature on the recognition of facial expressions of emotion in Alzheimer's disease (AD), to evaluate whether these patients show variation in their ability to recognise different emotions and whether any such impairments are instead because of a general decline in cognition. A narrative review based on relevant articles identified from PubMed and PsycInfo searches from 1987 to 2007 using keywords 'Alzheimer's', 'facial expression recognition', 'dementia' and 'emotion processing'. Although the literature is as yet limited, with several methodological inconsistencies, AD patients show poorer recognition of facial expressions, with particular difficulty with sad expressions. It is unclear whether poorer performance reflects the general cognitive decline and/or verbal or spatial deficits associated with AD or whether the deficits reflect specific neuropathology. This under-represented field of study may help to extend our understanding of social functioning in AD. Future work requires more detailed analyses of ancillary cognitive measures, more ecologically valid facial displays of emotion and a reference situation that more closely approximates an actual social interaction.

  20. Hemodynamic response of children with attention-deficit and hyperactive disorder (ADHD) to emotional facial expressions.

    Science.gov (United States)

    Ichikawa, Hiroko; Nakato, Emi; Kanazawa, So; Shimamura, Keiichi; Sakuta, Yuiko; Sakuta, Ryoichi; Yamaguchi, Masami K; Kakigi, Ryusuke

    2014-10-01

    Children with attention-deficit/hyperactivity disorder (ADHD) have difficulty recognizing facial expressions. They identify angry expressions less accurately than typically developing (TD) children, yet little is known about the atypical neural basis of their recognition of facial expressions. Here, we used near-infrared spectroscopy (NIRS) to examine the distinctive cerebral hemodynamics of ADHD and TD children while they viewed happy and angry expressions. We measured the hemodynamic responses of 13 ADHD boys and 13 TD boys to happy and angry expressions at their bilateral temporal areas, which are sensitive to face processing. The ADHD children showed an increased concentration of oxy-Hb for happy faces but not for angry faces, while TD children showed increased oxy-Hb for both faces. Moreover, the individual peak latency of the hemodynamic response in the right temporal area showed significantly greater variance in the ADHD group than in the TD group. Such atypical brain activity observed in ADHD boys may relate to their preserved ability to recognize a happy expression and their difficulty recognizing an angry expression. We demonstrated for the first time that NIRS can be used to detect atypical hemodynamic responses to facial expressions in ADHD children.

  1. Inferring Player Experiences Using Facial Expressions Analysis

    NARCIS (Netherlands)

    Tan, C.T.; Bakkes, S.; Pisan, Y.; Blackmore, K.; Nesbitt, K.; Smith, S.P.

    2014-01-01

    Understanding player experiences is central to game design. Video captures of players is a common practice for obtaining rich reviewable data for analysing these experiences. However, not enough has been done in investigating ways of preprocessing the video for a more efficient analysis process. Thi

  2. Active AU Based Patch Weighting for Facial Expression Recognition

    Science.gov (United States)

    Xie, Weicheng; Shen, Linlin; Yang, Meng; Lai, Zhihui

    2017-01-01

    Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. The sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% for the JAFFE and Cohn–Kanade (CK+) databases, respectively. Better cross-database performance has also been observed. PMID:28146094

  3. Active AU Based Patch Weighting for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Weicheng Xie

    2017-01-01

Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. The sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% for the JAFFE and Cohn–Kanade (CK+) databases, respectively. Better cross-database performance has also been observed.

  4. Face Processing and Facial Emotion Recognition in Adults with Down Syndrome

    Science.gov (United States)

    Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial

    2008-01-01

Face processing and facial expression recognition were investigated in 17 adults with Down syndrome, and the results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…

  6. Facial Emotion Processing in Aviremic HIV-infected Adults.

    Science.gov (United States)

    González-Baeza, A; Carvajal, F; Bayón, C; Pérez-Valero, I; Montes-Ramírez, M; Arribas, J R

    2016-08-01

Emotional processing in human immunodeficiency virus-seropositive (HIV+) individuals has scarcely been studied. We included HIV+ individuals (n = 107) on antiretroviral therapy (≥2 years) who completed six facial processing tasks and neurocognitive testing. We compared HIV+ and healthy adult (HA) participants (n = 40) on overall performance of each facial processing task. Multiple logistic regressions were conducted to explore predictors of poorer accuracy on those measures in which HIV+ individuals performed worse than HA participants. We separately explored the impact of neurocognitive status, antiretroviral regimen, and hepatitis C virus (HCV) coinfection on task performance. We found similar performance in overall facial emotion discrimination, recognition, and recall between HIV+ and HA participants. The HIV+ group had poorer recognition of particular negative emotions. Lower WAIS-III Vocabulary scores and active HCV predicted poorer accuracy in the recognition of particular emotions. Our results suggest that permanent damage to emotion-related brain systems might persist despite long-term effective antiretroviral therapy.

  7. Comparing the Recognition of Emotional Facial Expressions in Patients with Obsessive-Compulsive Disorder and Major Depressive Disorder

    Directory of Open Access Journals (Sweden)

    Abdollah Ghasempour

    2014-05-01

Background: Recognition of emotional facial expressions is one of the psychological factors involved in obsessive-compulsive disorder (OCD) and major depressive disorder (MDD). The aim of the present study was to compare the ability to recognize emotional facial expressions in patients with obsessive-compulsive disorder and major depressive disorder. Materials and Methods: The present study is a cross-sectional, ex-post facto investigation (causal-comparative method). Forty participants (20 patients with OCD, 20 patients with MDD) were selected through the available sampling method from the clients referred to the Tabriz Bozorgmehr clinic. Data were collected through a Structured Clinical Interview and the Recognition of Emotional Facial States test. The data were analyzed utilizing MANOVA. Results: The obtained results showed that there is no significant difference between the groups in the mean scores for recognition of the emotional states of surprise, sadness, happiness and fear; but the groups differed significantly in the mean scores for recognizing the disgust and anger states (p<0.05). Conclusion: Patients suffering from OCD and MDD show equal ability to recognize surprise, sadness, happiness and fear. However, the former are less competent in recognizing disgust and anger than the latter.

  8. Body actions change the appearance of facial expressions.

    Directory of Open Access Journals (Sweden)

    Carlo Fantoni

Perception, cognition, and emotion do not operate along segregated pathways; rather, their adaptive interaction is supported by various sources of evidence. For instance, the aesthetic appraisal of powerful mood inducers like music can bias the facial expression of emotions towards mood congruency. In four experiments we showed similar mood-congruency effects elicited by the comfort/discomfort of body actions. Using a novel Motor Action Mood Induction Procedure, we let participants perform comfortable/uncomfortable visually-guided reaches and tested them in a facial emotion identification task. Through the alleged mediation of motor action induced mood, action comfort enhanced the quality of the participant's global experience (a neutral face appeared happy and a slightly angry face neutral), while action discomfort made a neutral face appear angry and a slightly happy face neutral. Furthermore, uncomfortable (but not comfortable) reaching improved the sensitivity for the identification of emotional faces and reduced the identification time of facial expressions, as a possible effect of hyper-arousal from an unpleasant bodily experience.

  9. Laterality of Facial Expressions of Emotion: Universal and Culture-Specific Influences

    OpenAIRE

    Manas K. Mandal; Nalini Ambady

    2004-01-01

    Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and, (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others ar...

  10. French-speaking children’s freely produced labels for facial expressions

    OpenAIRE

Reem Maassarani; Pierre Gosselin; Patricia Montembeault; Mathieu Gagnon

    2014-01-01

    In this study, we investigated the labeling of facial expressions in French-speaking children. The participants were 137 French-speaking children, between the ages of 5 and 11 years, recruited from three elementary schools in Ottawa, Ontario, Canada. The facial expressions included expressions of happiness, sadness, fear, surprise, anger, and disgust. Participants were shown one facial expression at a time, and asked to say what the stimulus person was feeling. Participants’ responses were co...

  11. Facial emotion processing in borderline personality disorder: a systematic review and meta-analysis.

    Science.gov (United States)

    Mitchell, Amy E; Dickens, Geoffrey L; Picchioni, Marco M

    2014-06-01

    A body of work has developed over the last 20 years that explores facial emotion perception in Borderline Personality Disorder (BPD). We identified 25 behavioural and functional imaging studies that tested facial emotion processing differences between patients with BPD and healthy controls through a database literature search. Despite methodological differences there is consistent evidence supporting a negative response bias to neutral and ambiguous facial expressions in patients. Findings for negative emotions are mixed with evidence from individual studies of an enhanced sensitivity to fearful expressions and impaired facial emotion recognition of disgust, while meta-analysis revealed no significant recognition impairments between BPD and healthy controls for any negative emotion. Mentalizing studies indicate that BPD patients are accurate at attributing mental states to complex social stimuli. Functional neuroimaging data suggest that the underlying neural substrate involves hyperactivation in the amygdala to affective facial stimuli, and altered activation in the anterior cingulate, inferior frontal gyrus and the superior temporal sulcus particularly during social emotion processing tasks. Future studies must address methodological inconsistencies, particularly variations in patients' key clinical characteristics and in the testing paradigms deployed.

  12. The Effect of Sad Facial Expressions on Weight Judgment

    Directory of Open Access Journals (Sweden)

    Trent D Weston

    2015-04-01

Although the body weight evaluation (e.g., normal or overweight) of others relies on perceptual impressions, it can also be influenced by other psychosocial factors. In this study, we explored the effect of task-irrelevant emotional facial expressions on judgments of body weight and the relationship between emotion-induced weight judgment bias and other psychosocial variables, including attitudes towards obese persons. Forty-four participants were asked to quickly make binary body weight decisions for 960 randomized sad and neutral faces of varying weight levels presented on a computer screen. The results showed that sad facial expressions systematically decreased the decision threshold of overweight judgments for male faces. This perceptual decision bias by emotional expressions was positively correlated with the belief that being overweight is not under the control of obese persons. Our results provide experimental evidence that task-irrelevant emotional expressions can systematically change the decision threshold for weight judgments, demonstrating that sad expressions can make faces appear more overweight than they would otherwise be judged.
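
    The decision threshold reported above is the point on a psychometric function where the probability of an "overweight" response reaches 0.5. The sketch below fits a logistic psychometric function to illustrative made-up data; the logistic form, the weight-level coding, and the numbers are assumptions, not the study's actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(w, threshold, slope):
    """P('overweight' response) as a function of stimulus weight level."""
    return 1.0 / (1.0 + np.exp(-slope * (w - threshold)))

weight_levels = np.arange(1.0, 8.0)                           # 7 weight levels
p_overweight = np.array([.05, .10, .30, .55, .80, .90, .97])  # made-up rates
(threshold, slope), _ = curve_fit(psychometric, weight_levels,
                                  p_overweight, p0=[4.0, 1.0])
print(threshold)  # sad-face trials would be expected to lower this value
```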

  13. Treatment with escitalopram improves the attentional bias toward negative facial expressions in patients with major depressive disorders.

    Science.gov (United States)

    Zhou, Zhenhe; Cao, Suxia; Li, Hengfen; Li, Youhui

    2015-10-01

We hypothesized that treatment with escitalopram would improve cognitive bias and contribute to the recovery process for patients with major depressive disorder (MDD). Many previous studies have established that patients with MDD tend to pay selective attention to negative stimuli. The assessment of the level of cognitive bias is regarded as a crucial dimension of treatment outcomes for MDD. To our knowledge, no prior studies have been reported on the effects of treatment with escitalopram on attentional bias in MDD employing a dot probe task of facial expression. We studied 25 patients with MDD and 25 controls, and used a dot probe task of facial expression to measure cognitive bias. The patients' psychopathologies were rated using the Hamilton Depression Scale (HAMD) at baseline and after 8 weeks of treatment with escitalopram. All participants performed the facial expression dot probe task. The results revealed that the 8-week escitalopram treatment decreased the HAMD scores. The patients with MDD at baseline exhibited an attentional bias towards negative faces; however, no significant bias toward either negative or happy faces was observed in the controls. After the 8-week escitalopram treatment, no significant bias toward negative faces was observed in the patient group. In conclusion, patients with MDD pay more attention to negative facial expressions, and treatment with escitalopram improves this attentional bias toward negative facial expressions. This is the first study, to our knowledge, on the effects of treatment with escitalopram on attentional bias in patients with MDD that has employed a dot probe task of facial expression.
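
    For readers unfamiliar with dot-probe scoring, the conventional attentional bias index is the mean reaction-time difference between incongruent trials (probe replacing the neutral face) and congruent trials (probe replacing the emotional face); positive scores indicate vigilance toward the emotional expression. The abstract does not spell out the scoring, so the sketch below, with made-up numbers, is an assumption about the standard computation.

```python
import numpy as np

def bias_score(rt_incongruent, rt_congruent):
    """Mean RT difference in ms between incongruent and congruent trials."""
    return np.mean(rt_incongruent) - np.mean(rt_congruent)

# Illustrative reaction times (ms)
rt_incong = np.array([612, 598, 640, 605])  # probe opposite the sad face
rt_cong = np.array([584, 570, 602, 575])    # probe behind the sad face
print(bias_score(rt_incong, rt_cong))  # > 0: attention drawn to sad faces
```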

  14. Analysis of facial expressions in parkinson's disease through video-based automatic methods.

    Science.gov (United States)

    Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia

    2017-04-01

The automatic analysis of facial expressions is an evolving field with several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), which is a major motor sign of this neurodegenerative illness. Facial bradykinesia consists of the reduction/loss of facial movements and emotional facial expressions, called hypomimia. In this work we propose an automatic method for studying facial expressions in PD patients relying on video-based methods. Seventeen Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after the imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects reported on average higher distances than PD patients across the tasks. This confirms that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could get a definite advantage from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.
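
    The expressivity measure described above reduces to a simple computation over tracked landmarks. The sketch below assumes the face tracker returns 2-D landmark coordinates per frame and that a neutral-frame shape serves as the baseline; the actual tracker, landmark count, and normalization steps are not specified in the abstract.

```python
import numpy as np

def expressivity(frames, neutral):
    """frames: (T, L, 2) landmark tracks; neutral: (L, 2) baseline shape.
    Returns the mean landmark displacement from neutral for each frame."""
    return np.linalg.norm(frames - neutral, axis=2).mean(axis=1)

# Synthetic example: 100 frames of 68 landmarks jittered around neutral
neutral = np.random.rand(68, 2)
frames = neutral + 0.05 * np.random.randn(100, 68, 2)
print(expressivity(frames, neutral).mean())  # lower values ~ hypomimia
```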

  15. Misinterpretation of Facial Expressions of Emotion in Verbal Adults with Autism Spectrum Disorder

    Science.gov (United States)

    Eack, Shaun M.; Mazefsky, Carla A.; Minshew, Nancy J.

    2015-01-01

    Facial emotion perception is significantly affected in autism spectrum disorder, yet little is known about how individuals with autism spectrum disorder misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with autism spectrum…

  16. Electromyographic responses to emotional facial expressions in 6-7 year olds: A feasibility study

    NARCIS (Netherlands)

    Deschamps, P.K.H.; Schutte, I.; Kenemans, J.L.; Matthys, W.C.H.J.; Schutter, D.J.L.G.

    2012-01-01

    Preliminary studies have demonstrated that school-aged children (average age 9-10 years) show mimicry responses to happy and angry facial expressions. The aim of the present study was to assess the feasibility of using facial electromyography (EMG) as a method to study facial mimicry responses in

  17. Emotion unfolded by motion: a role for parietal lobe in decoding dynamic facial expressions.

    Science.gov (United States)

    Sarkheil, Pegah; Goebel, Rainer; Schneider, Frank; Mathiak, Klaus

    2013-12-01

Facial expressions convey important emotional and social information and are frequently applied in investigations of human affective processing. Dynamic faces may provide higher ecological validity for examining perceptual and cognitive processing of facial expressions. Higher-order processing of emotional faces was addressed by systematically varying the task and the virtual face models. Blood oxygenation level-dependent activation was assessed using functional magnetic resonance imaging in 20 healthy volunteers while they viewed and evaluated either the emotion or gender intensity of dynamic face stimuli. A general linear model analysis revealed that high valence activated a network of motion-responsive areas, indicating that visual motion areas support perceptual coding of the motion-based intensity of facial expressions. The comparison of the emotion task with the gender discrimination task revealed increased activation of the inferior parietal lobule, which highlights the involvement of parietal areas in the processing of high-level features of faces. Dynamic emotional stimuli may help to emphasize functions of the hypothesized 'extended' over the 'core' system for face processing.

  18. Discrete choice models for the recognition of static facial expressions

    OpenAIRE

    Danalet, Antonin

    2007-01-01

This semester project presents the use of discrete choice models to build a model of the perception of static facial expressions that is potentially usable for the recognition and classification of these expressions. The description of these expressions draws on Paul Ekman's Facial Action Coding System (FACS), based on an anatomical analysis of facial action. The choice set contains the 6 universal facial expressions plus the neutral expression. Each a…
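
    At its core, a discrete choice model over this choice set is a multinomial logit across the six universal expressions plus neutral. The sketch below is a minimal illustration under assumed details: linear-in-parameters utilities computed from hypothetical AU-style descriptors, with random stand-ins for the estimated parameters.

```python
import numpy as np

def choice_probabilities(features, betas):
    """features: (n_alternatives, n_features) descriptors per expression;
    betas: (n_features,) taste parameters. Returns logit probabilities."""
    utilities = features @ betas
    exp_u = np.exp(utilities - utilities.max())  # numerical stability
    return exp_u / exp_u.sum()

labels = ["happy", "sad", "fear", "anger", "disgust", "surprise", "neutral"]
rng = np.random.default_rng(0)
X = rng.random((7, 5))   # hypothetical AU-based descriptors per alternative
beta = rng.random(5)     # hypothetical estimated parameters
probs = choice_probabilities(X, beta)
print(labels[int(np.argmax(probs))])  # most probable expression
```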

  19. Responses in the right posterior superior temporal sulcus show a feature-based response to facial expression.

    Science.gov (United States)

    Flack, Tessa R; Andrews, Timothy J; Hymers, Mark; Al-Mosaiwi, Mohammed; Marsden, Samuel P; Strachan, James W A; Trakulpipat, Chayanit; Wang, Liang; Wu, Tian; Young, Andrew W

    2015-08-01

    The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently.

  1. Exploring the nature of facial affect processing deficits in schizophrenia

    NARCIS (Netherlands)

    Wout, Mascha van 't; Aleman, Andre; Kessels, Roy P. C.; Cahn, Wiepke; Haan, Edward H. F. de; Kahn, Rene S.

    2007-01-01

    Schizophrenia has been associated with deficits in facial affect processing, especially negative emotions. However, the exact nature of the deficit remains unclear. The aim of the present study was to investigate whether schizophrenia patients have problems in automatic allocation of attention as we

  2. Holistic face processing can inhibit recognition of forensic facial composites.

    Science.gov (United States)

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format.

  3. Gender Differences in the Motivational Processing of Facial Beauty

    Science.gov (United States)

    Levy, Boaz; Ariely, Dan; Mazar, Nina; Chi, Won; Lukas, Scott; Elman, Igor

    2008-01-01

    Gender may be involved in the motivational processing of facial beauty. This study applied a behavioral probe, known to activate brain motivational regions, to healthy heterosexual subjects. Matched samples of men and women were administered two tasks: (a) key pressing to change the viewing time of average or beautiful female or male facial…

  4. Schizophrenia and processing of facial emotions: Sex matters

    NARCIS (Netherlands)

    Scholten, MRM; Aleman, A; Montagne, B; Kahn, RS

    2005-01-01

    The aim of this study was to examine sex differences in emotion processing in patients with schizophrenia and control subjects. To this end, 53 patients with schizophrenia (28 men and 25 women), and 42 controls (21 men and 21 women) were assessed with the use of a facial affect recognition morphing

  5. Personality Trait and Facial Expression Filter-Based Brain-Computer Interface

    OpenAIRE

    Seongah Chin; Chung-Yeon Lee

    2013-01-01

    In this paper, we present technical approaches that bridge the gap in the research related to the use of brain‐computer interfaces for entertainment and facial expressions. Such facial expressions that reflect an individual’s personal traits can be used to better realize artificial facial expressions in a gaming environment based on a brain‐computer interface. First, an emotion extraction filter is introduced in order to classify emotions on the basis of the users’ brain signals in real time....

  6. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

In order to enhance the robustness of facial expression recognition, we propose a method based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method first extracts features with the improved LTP operator and then uses an improved deep network as the detector and classifier of the LTP features, realizing the combination of improved LTP and a deep network for facial expression recognition. The recognition rate on the CK+ database improved significantly.
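
    The basic local ternary pattern on which the method builds is easy to state in code. The sketch below implements the standard (unimproved) LTP coding for a single 3x3 neighborhood, split into the usual upper and lower binary codes; the threshold value and the clockwise neighbor ordering are conventional choices, and the paper's improved variant is not reproduced here.

```python
import numpy as np

def ltp_codes(patch, t=5):
    """patch: 3x3 grayscale neighborhood. Returns (upper, lower) LTP codes."""
    center = patch[1, 1]
    # 8 neighbors in clockwise order starting from the top-left corner
    neighbors = patch.flatten()[[0, 1, 2, 5, 8, 7, 6, 3]]
    ternary = np.where(neighbors > center + t, 1,
                       np.where(neighbors < center - t, -1, 0))
    upper = sum(2 ** i for i, v in enumerate(ternary) if v == 1)
    lower = sum(2 ** i for i, v in enumerate(ternary) if v == -1)
    return upper, lower

patch = np.array([[52, 60, 58], [49, 55, 63], [40, 48, 57]])
print(ltp_codes(patch))  # histograms of these codes form the LTP feature
```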

  7. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions

    OpenAIRE

    Kujala, Miiamaaria V.; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to the conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people’s perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggr...

  8. Association between facial expression and PTSD symptoms among young children exposed to the Great East Japan Earthquake: A pilot study

    Directory of Open Access Journals (Sweden)

Takeo Fujiwara

    2015-10-01

Emotional numbing is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip, and to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent’s Report of the Child’s Reaction to Stress scale. Children were filmed while watching a 2-minute video compilation of natural scenes (‘baseline video’) followed by a 2-minute video clip from a television comedy (‘comedy video’). Children’s facial expressions were processed using Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age and baseline facial expression (p < .05). This pilot study suggests that facial emotion reactivity could provide an index against which emotional numbing could be measured in young children, using facial expression recognition software. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children’s reactions to disasters.

  9. Face recognition using facial expression: a novel approach

    Science.gov (United States)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

Facial expressions are arguably the most effective form of nonverbal communication, and the face has always been central to a person's identity. Each line on the face adds an attribute to that identity; these lines become prominent when we experience an emotion, and they do not change completely with age. In this paper we propose a new technique for face recognition that focuses on the subject's facial expressions to identify the face, an area that has so far received little attention. According to earlier research, the natural expression is difficult to alter, so our technique may be beneficial for identifying occluded or intentionally disguised faces. The results of the experiments conducted suggest that this technique can give a new direction to the field of face recognition and serve as a core method for critical defense and security applications.

  10. Exaggerated perception of facial expressions is increased in individuals with schizotypal traits

    Science.gov (United States)

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2015-01-01

    Emotional facial expressions are indispensable communicative tools, and social interactions involving facial expressions are impaired in some psychiatric disorders. Recent studies revealed that the perception of dynamic facial expressions was exaggerated in normal participants, and this exaggerated perception is weakened in autism spectrum disorder (ASD). Based on the notion that ASD and schizophrenia spectrum disorder are at two extremes of the continuum with respect to social impairment, we hypothesized that schizophrenic characteristics would strengthen the exaggerated perception of dynamic facial expressions. To test this hypothesis, we investigated the relationship between the perception of facial expressions and schizotypal traits in a normal population. We presented dynamic and static facial expressions, and asked participants to change an emotional face display to match the perceived final image. The presence of schizotypal traits was positively correlated with the degree of exaggeration for dynamic, as well as static, facial expressions. Among its subscales, the paranoia trait was positively correlated with the exaggerated perception of facial expressions. These results suggest that schizotypal traits, specifically the tendency to over-attribute mental states to others, exaggerate the perception of emotional facial expressions. PMID:26135081

  11. Memory for facial expression is influenced by the background music playing during study

    National Research Council Canada - National Science Library

    Woloszyn, Michael R; Ewert, Laura

    2012-01-01

    .... Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were recalled as sad...

  12. Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.

    Science.gov (United States)

    Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S

    2007-01-01

People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations: (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated assessed muscles, whereas the chew manipulation activated muscles only intermittently. Further, expressing happiness generated most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.

  13. Facial expression recognition and emotional regulation in narcolepsy with cataplexy.

    Science.gov (United States)

    Bayard, Sophie; Croisier Langenier, Muriel; Dauvilliers, Yves

    2013-04-01

Cataplexy is pathognomonic of narcolepsy with cataplexy, and is defined by a transient loss of muscle tone triggered by strong emotions. Recent research suggests abnormal amygdala function in narcolepsy with cataplexy. Emotion processing and emotional regulation strategies are complex functions involving cortical and limbic structures, like the amygdala. As the amygdala has been shown to play a role in facial emotion recognition, we tested the hypothesis that patients with narcolepsy with cataplexy would have impaired recognition of facial emotional expressions compared with patients affected with central hypersomnia without cataplexy and healthy controls. We also aimed to determine whether cataplexy modulates emotional regulation strategies. Emotional intensity, arousal and valence ratings on Ekman faces displaying happiness, surprise, fear, anger, disgust, sadness and neutral expressions of 21 drug-free patients with narcolepsy with cataplexy were compared with 23 drug-free sex-, age- and intellectual level-matched adult patients with hypersomnia without cataplexy and 21 healthy controls. All participants underwent polysomnography recording and multiple sleep latency tests, and completed depression, anxiety and emotional regulation questionnaires. Performance of patients with narcolepsy with cataplexy did not differ from patients with hypersomnia without cataplexy or healthy controls on both intensity rating of each emotion on its prototypical label and mean ratings for valence and arousal. Moreover, patients with narcolepsy with cataplexy did not use different emotional regulation strategies. The level of depressive and anxious symptoms in narcolepsy with cataplexy did not differ from the other groups. Our results demonstrate that narcolepsy with cataplexy accurately perceives and discriminates facial emotions, and regulates emotions normally. The absence of alteration of perceived affective valence remains a major clinical interest in narcolepsy with cataplexy.

  14. View-constrained latent variable model for multi-view facial expression classification

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2014-01-01

    We propose a view-constrained latent variable model for multi-view facial expression classification. In this model, we first learn a discriminative manifold shared by multiple views of facial expressions, followed by the expression classification in the shared manifold. For learning, we use the expr

  15. Personality influences the neural responses to viewing facial expressions of emotion.

    Science.gov (United States)

    Calder, Andrew J; Ewbank, Michael; Passamonti, Luca

    2011-06-12

    Cognitive research has long been aware of the relationship between individual differences in personality and performance on behavioural tasks. However, within the field of cognitive neuroscience, the way in which such differences manifest at a neural level has received relatively little attention. We review recent research addressing the relationship between personality traits and the neural response to viewing facial signals of emotion. In one section, we discuss work demonstrating the relationship between anxiety and the amygdala response to facial signals of threat. A second section considers research showing that individual differences in reward drive (behavioural activation system), a trait linked to aggression, influence the neural responsivity and connectivity between brain regions implicated in aggression when viewing facial signals of anger. Finally, we address recent criticisms of the correlational approach to fMRI analyses and conclude that when used appropriately, analyses examining the relationship between personality and brain activity provide a useful tool for understanding the neural basis of facial expression processing and emotion processing in general.

  16. Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal

    Directory of Open Access Journals (Sweden)

    Han Zhiyan

    2016-01-01

In order to overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm that takes speech and facial expression signals as its research subjects. First, the speech and facial expression features are fused, sample sets are obtained by sampling with replacement, and classifiers are then trained with a BP neural network (BPNN). Second, the difference between two classifiers is measured by a double error difference selection strategy. Finally, the recognition result is obtained by the majority voting rule. Experiments show that the method improves the accuracy of emotion recognition by giving full play to the advantages of decision-level and feature-level fusion, bringing the whole fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
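
    The pipeline described above (feature-level fusion, classifiers trained on bootstrap samples, and a majority vote) can be sketched as follows. This is an illustration rather than the authors' code: scikit-learn's MLPClassifier stands in for the BP neural network, the double error difference selection step is omitted, and integer class labels are assumed.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_ensemble(X_speech, X_face, y, n_models=5, seed=0):
    X = np.hstack([X_speech, X_face])  # feature-level fusion
    rng = np.random.default_rng(seed)
    models = []
    for _ in range(n_models):
        idx = rng.integers(0, len(y), len(y))  # sampling with replacement
        models.append(MLPClassifier(max_iter=500).fit(X[idx], y[idx]))
    return models

def predict_vote(models, X_speech, X_face):
    X = np.hstack([X_speech, X_face])
    preds = np.stack([m.predict(X) for m in models])
    # majority vote across the ensemble, per sample (decision-level fusion)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, preds)
```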

  17. Towards Emotion Detection in Educational Scenarios from Facial Expressions and Body Movements through Multimodal Approaches

    Directory of Open Access Journals (Sweden)

    Mar Saneiro

    2014-01-01

We report current findings from an enriched multimodal emotion detection approach that considers video recordings of facial expressions and body movements to provide affective personalized support in an educational context. In particular, we describe an annotation methodology to tag facial expressions and body movements that conform to changes in the affective states of learners while dealing with cognitive tasks in a learning process. The ultimate goal is to combine these annotations with additional affective information collected during experimental learning sessions from different sources, such as qualitative, self-reported, physiological, and behavioral information. Together, these data are used to train data mining algorithms that automatically identify changes in the learners’ affective states when dealing with cognitive tasks, which helps to provide personalized emotional support.

  18. The dynamic rotation of Langer's lines on facial expression.

    Science.gov (United States)

    Bush, James; Ferguson, Mark W J; Mason, Tracey; McGrouther, Gus

    2007-01-01

Karl Langer investigated directional variations in the mechanical and physical properties of skin [Gibson T. Editorial. Karl Langer (1819-1887) and his lines. Br J Plast Surg 1978;31:1-2]. He produced a series of diagrams depicting lines of cleavage in the skin [Langer K. On the anatomy and physiology of the skin I. The cleavability of the cutis. Br J Plast Surg 1978;31:3-8] and showed that the orientation of these lines coincided with the dominant axis of mechanical tension in the skin [Langer K. On the anatomy and physiology of the skin II. Skin tension. Br J Plast Surg 1978;31:93-106]. Previously these lines have been considered a static feature. We set out to determine whether Langer's lines have a dynamic element and to define any rotation of the orientation of Langer's lines on the face with facial movement. One hundred and seventy-five naevi were excised from the face and neck of 72 volunteers using circular dermal punch biopsies. Prior to surgery, a vertical line was marked on the skin through the centre of each naevus. After excision, distortions of the resulting wounds were observed. The orientation of the long axis of each wound, in relation to the previously marked vertical line, was measured with a goniometer with the volunteer at rest and holding their face in five standardised facial expressions: mouth open, smiling, eyes tightly shut, frowning and eyebrows raised. The aim was to measure the orientation of the long axis of the wound with the face at rest and the subsequent rotation of the wound with facial movement. After excision, elliptical distortion was seen in 171 of the 175 wounds at rest. Twenty-nine wounds maintained the same orientation of distortion in all of the facial expressions. In the remaining wounds the long axis of the wound rotated by up to 90 degrees. The amount of rotation varied between sites (p<0.0001). We conclude that Langer's lines are not a static feature but are dynamic, with rotation of up to 90 degrees. It is possible that

  19. Effects of gaze direction, head orientation and valence of facial expression on amygdala activity.

    Science.gov (United States)

    Sauer, Andreas; Mothes-Lasch, Martin; Miltner, Wolfgang H R; Straube, Thomas

    2014-08-01

There is increasing evidence for a role of the amygdala in processing gaze direction and the emotional relevance of faces. In this event-related functional magnetic resonance imaging study we investigated amygdala responses while we orthogonally manipulated head direction, gaze direction and facial expression (angry, happy and neutral). This allowed us to investigate effects of stimulus ambiguity, low-level factors and non-emotional factors on amygdala activation. Averted vs direct gaze induced increased activation in the right dorsal amygdala regardless of facial expression and head orientation. Furthermore, valence effects were found in the ventral amygdala and were strongly dependent on head orientation. We observed enhanced activation to angry and neutral vs happy faces for observer-directed faces in the left ventral amygdala, while the averted head condition reversed this pattern, resulting in increased activation to happy as compared to angry and neutral faces. These results suggest that gaze direction specifically drives dorsal amygdala activation regardless of facial expression, low-level perceptual factors or stimulus ambiguity. The role of the amygdala is thus not restricted to the detection of potential threat, but extends to a more general role in attention processes. Furthermore, valence effects are associated with activation of the ventral amygdala and are strongly influenced by non-emotional factors.

  20. Perception of emotional facial expressions in individuals with high Autism-spectrum Quotient (AQ)

    Directory of Open Access Journals (Sweden)

    Ervin Poljac

    2012-10-01

Autism is characterized by difficulties in social interaction, communication, restrictive and repetitive behaviours, and specific impairments in emotional processing. The present study employed the Autism-Spectrum Quotient (Baron-Cohen et al., 2006) to quantify autistic traits in a group of 260 healthy individuals and to investigate whether this measure is related to the perception of facial emotional expressions. The emotional processing of twelve participants who scored significantly higher than average on the AQ was compared to that of twelve participants with significantly lower AQ scores. Perception of emotional expressions was estimated by the Facial Recognition Task (Montagne et al., 2007). There were significant differences between the two groups with regard to accuracy and sensitivity of the perception of emotional facial expressions. Specifically, the group with high AQ scores was less accurate and needed higher emotional content to recognize the emotions of anger, disgust, happiness and sadness. This result implies a selective impairment that might be helpful in understanding the psychopathology of autism spectrum disorders.

  1. Recognition of static and dynamic facial expressions: Influences of sex, type and intensity of emotion

    OpenAIRE

    2013-01-01

    Ecological validity of static and intense facial expressions in emotional recognition has been questioned. Recent studies have recommended the use of facial stimuli more compatible to the natural conditions of social interaction, which involves motion and variations in emotional intensity. In this study, we compared the recognition of static and dynamic facial expressions of happiness, fear, anger and sadness, presented in four emotional intensities (25 %, 50 %, 75 % and 100 %). Twenty volunt...

  2. Facial expression of positive emotions in individuals with eating disorders.

    Science.gov (United States)

    Dapelo, Marcela M; Hart, Sharon; Hale, Christiane; Morris, Robin; Lynch, Thomas R; Tchanturia, Kate

    2015-11-30

A large body of research has associated eating disorders with difficulties in socio-emotional functioning, and it has been argued that these difficulties may serve to maintain the illness. This study aimed to explore facial expressions of positive emotions in individuals with Anorexia Nervosa (AN) and Bulimia Nervosa (BN) compared to healthy controls (HC), through an examination of the Duchenne smile (DS), which has been associated with feelings of enjoyment, amusement and happiness (Ekman et al., 1990). Sixty participants (AN=20; BN=20; HC=20) were videotaped while watching a humorous film clip. The duration and intensity of DS were subsequently analyzed using the facial action coding system (FACS) (Ekman and Friesen, 2003). Participants with AN displayed DS for shorter durations than BN and HC participants, and their DS had lower intensity. In the clinical groups, lower duration and intensity of DS were associated with lower BMI and use of psychotropic medication. The study is the first to explore DS in people with eating disorders, providing further evidence of difficulties in the socio-emotional domain in people with AN. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  3. Nasal-Temporal Asymmetries in Suprathreshold Facial Expressions of Emotion

    Directory of Open Access Journals (Sweden)

    I Ntonia

    2014-08-01

In this study, we investigated nasal-temporal asymmetries resulting from exposure to visible facial expressions of emotion. The literature has so far reported attentional asymmetries in spatial perception, and nasal-temporal asymmetries resulting from backward-masked visual stimuli activated through non-conscious perception. To our knowledge, however, no attempt has been made to test such asymmetries with un-masked, consciously visible emotional stimuli. Here, we report on response differences in binocular and monocular viewing in the perception of visible facial expressions of affect. In Study 1, 24 right-handed adults completed a speeded forced-choice paradigm, in which they binocularly viewed bilateral displays of a neutral face in one hemifield, simultaneously paired with either a happy or an angry face of variable emotional salience in the opposite hemifield, for 50 ms; the task was to indicate the left-right location of the emotional face. In Study 2 (N=23, right-handed), participants completed the same paradigm monocularly while alternately patching the left or right eye in successive blocks. Under binocular viewing, we found an overall advantage for localising happy faces, further intensified when displayed in the right visual hemifield, and evident even when the emotional expression was extremely subtle. For monocularly viewed stimuli, we observed a response latency asymmetry with faster responses to temporally viewed happy faces. However, there was higher overall accuracy for nasal stimuli regardless of emotion, further intensified when the emotional face appeared on the left. Our findings show that the nasal-temporal asymmetries previously associated exclusively with the non-conscious perception of emotional stimuli manifest as ‘trigger-happy’ responses when emotional stimuli become visible. This asymmetrical reduction in response latency for nasal stimuli could be attributed to an overall attentional bias towards the temporal area

  4. The mysterious noh mask: contribution of multiple facial parts to the recognition of emotional expressions.

    Directory of Open Access Journals (Sweden)

    Hiromitsu Miyata

BACKGROUND: A Noh mask, worn by expert actors when performing in a traditional Japanese Noh drama, is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. METHODOLOGY/PRINCIPAL FINDINGS: In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward-tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward-tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. The facial images having the mouth of an upward/downward-tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. CONCLUSIONS/SIGNIFICANCE: The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically-driven factors over the

  5. Unobtrusive multimodal emotion detection in adaptive interfaces: speech and facial expressions

    NARCIS (Netherlands)

    Truong, K.P.; Leeuwen, D.A. van; Neerincx, M.A.

    2007-01-01

    Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, an overview is given of emotion recognition studies based on a combination of speech and facial expressions. We will identify difficulties concerning data collection, data fusion, system

  6. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    Science.gov (United States)

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  8. Brief Report: Representational Momentum for Dynamic Facial Expressions in Pervasive Developmental Disorder

    Science.gov (United States)

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-01-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of…

  9. Do Dynamic Facial Expressions Convey Emotions to Children Better than Do Static Ones?

    Science.gov (United States)

    Widen, Sherri C.; Russell, James A.

    2015-01-01

    Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of…

  10. Preschooler's Faces in Spontaneous Emotional Contexts--How Well Do They Match Adult Facial Expression Prototypes?

    Science.gov (United States)

    Gaspar, Augusta; Esteves, Francisco G.

    2012-01-01

    Prototypical facial expressions of emotion, also known as universal facial expressions, are the underpinnings of most research concerning recognition of emotions in both adults and children. Data on natural occurrences of these prototypes in natural emotional contexts are rare and difficult to obtain in adults. By recording naturalistic…

  11. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions.

    Science.gov (United States)

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expression recognition task. Recognition bias was measured as participants' tendency to over-attribute the anger label to other negative facial expressions. Participants' heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants' performance was controlled for age, cognitive and educational levels, and naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants' tendency to use the anger label was evidenced. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children's "pre-existing bias" for anger labeling in a forced-choice emotion recognition task. Moreover, they strengthen the thesis according to which the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim's perceptive and attentive focus on salient environmental social stimuli.

  12. Predicting advertising effectiveness by facial expressions in response to amusing persuasive stimuli

    NARCIS (Netherlands)

    Lewinski, P.; Fransen, M.L.; Tan, E.S.H.

    2014-01-01

    We present a psychophysiological study of facial expressions of happiness (FEH) produced by advertisements using the FaceReader system (Noldus, 2013) for automatic analysis of facial expressions of basic emotions (FEBE; Ekman, 1972). FaceReader scores were associated with self-reports of the adverti

  13. Compound facial expressions of emotion: from basic research to clinical applications.

    Science.gov (United States)

    Du, Shichuan; Martinez, Aleix M

    2015-12-01

Emotions are sometimes revealed through facial expressions. When these natural facial articulations involve the contraction of the same muscle groups in people of distinct cultural upbringings, this is taken as evidence of a biological origin of these emotions. While past research had identified facial expressions associated with a single internally felt category (e.g., the facial expression of happiness when we feel joyful), we have recently studied facial expressions observed when people experience compound emotions (e.g., the facial expression of happy surprise when we feel joyful in a surprised way, as, for example, at a surprise birthday party). Our research has identified 17 compound expressions consistently produced across cultures, suggesting that the number of facial expressions of emotion of biological origin is much larger than previously believed. The present paper provides an overview of these findings and shows evidence supporting the view that spontaneous expressions are produced using the same facial articulations previously identified in laboratory experiments. We also discuss the implications of our results in the study of psychopathologies, and consider several open research questions.

  14. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    Science.gov (United States)

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  15. Effectiveness of Teaching Naming Facial Expression to Children with Autism via Video Modeling

    Science.gov (United States)

    Akmanoglu, Nurgul

    2015-01-01

    This study aims to examine the effectiveness of teaching the naming of emotional facial expressions via video modeling to children with autism. Teaching the naming of emotions (happy, sad, scared, disgusted, surprised, feeling physical pain, and bored) was done by creating situations that lead to the emergence of facial expressions to children…

  16. Do Dynamic Facial Expressions Convey Emotions to Children Better than Do Static Ones?

    Science.gov (United States)

    Widen, Sherri C.; Russell, James A.

    2015-01-01

    Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of…

  18. Face inversion decreased information about facial identity and expression in face-responsive neurons in macaque area TE.

    Science.gov (United States)

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Ohyama, Kaoru; Kawano, Kenji

    2014-09-10

    To investigate the effect of face inversion and thatcherization (eye inversion) on temporal processing stages of facial information, single neuron activities in the temporal cortex (area TE) of two rhesus monkeys were recorded. Test stimuli were colored pictures of monkey faces (four with four different expressions), human faces (three with four different expressions), and geometric shapes. Modifications were made in each face-picture, and its four variations were used as stimuli: upright original, inverted original, upright thatcherized, and inverted thatcherized faces. A total of 119 neurons responded to at least one of the upright original facial stimuli. A majority of the neurons (71%) showed activity modulations depending on upright and inverted presentations, and fewer neurons (13%) showed activity modulations depending on original and thatcherized face conditions. In the case of face inversion, information about the fine category (facial identity and expression) decreased, whereas information about the global category (monkey vs human vs shape) was retained for both the original and thatcherized faces. Principal component analysis on the neuronal population responses revealed that the global categorization occurred regardless of face inversion and that the inverted faces were represented near the upright faces in the principal component analysis space. By contrast, face inversion decreased the ability to represent human facial identity and monkey facial expression. Thus, the neuronal population represented inverted faces as faces but failed to represent the identity and expression of the inverted faces, indicating that the neuronal representation in area TE causes the perceptual effect of face inversion.
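    The population-level analysis above hinges on projecting trial-averaged firing rates into a low-dimensional principal-component space. A minimal sketch of that step, with synthetic data standing in for the recorded TE responses (all names and numbers here are illustrative, not the study's actual pipeline):

```python
# Sketch of PCA over a neuronal population, in the spirit of the analysis
# above. All data are synthetic placeholders; in the study, rows would be
# stimuli (upright/inverted faces, shapes) and columns the trial-averaged
# firing rates of the recorded TE neurons.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_stimuli, n_neurons = 40, 119          # 119 face-responsive neurons, as reported
responses = rng.gamma(2.0, 5.0, size=(n_stimuli, n_neurons))  # fake firing rates

pca = PCA(n_components=2)
projected = pca.fit_transform(responses)  # each stimulus becomes a point in PC space

# In the paper, upright and inverted versions of the same face land close
# together in this space (global category preserved), while identity and
# expression information shrinks for inverted faces.
print(projected[:5])
print("explained variance ratio:", pca.explained_variance_ratio_)
```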

  19. 面孔熟悉度对面孔性别与表情相互作用的调节 (Facial Familiarity Modulates the Interaction between Facial Gender and Emotional Expression)

    Institute of Scientific and Technical Information of China (English)

    吴彬星; 张智君; 孙雨生

    2015-01-01

    The interaction between facial gender and emotional expression has been a matter of consistent debate. In some influential theories of face cognition, such as that of Haxby, Hoffman and Gobbini (2000), facial gender and emotional expression are described as being processed independently. But there is also empirical evidence suggesting that facial gender and emotional expression are processed interdependently. Researchers concerned with the interaction between facial gender and emotional expression have tended to ignore the role of facial familiarity. Based on existing evidence of facial familiarity's effects on the processing of facial gender and of emotional expression, we hypothesized that facial familiarity would modulate the relationship between facial gender and emotional expression. The experiments were all based on Garner's selective attention paradigm (Garner, 1976). The main logic of this paradigm is that if selective attention is possible in the presence of an irrelevant dimension, the two dimensions under investigation can be declared independent or "separable"; if not, the dimensions are "integral." In our experiments, participants were required to make speeded facial gender (or expression) classifications of four types of stimuli (angry-male, angry-female, happy-male, happy-female). They were instructed to ignore facial gender (or expression) when making expression (or facial gender) classifications. Stimuli were presented in two different conditions, termed the control and orthogonal conditions, and participants completed both. In the control condition, stimuli varied along only the relevant dimension and the irrelevant dimension was held constant. In the orthogonal condition, stimuli varied along both the relevant and the irrelevant dimension. In Experiment 1 (low facial familiarity), we used 16 unfamiliar face stimuli with 16 identities. Each stimulus was presented only once in both the control and the orthogonal condition…
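    The Garner logic above reduces to a simple comparison: if mean reaction times in the orthogonal condition exceed those in the control condition, the irrelevant dimension intruded on processing. A minimal sketch of that computation, with invented reaction times:

```python
# Minimal sketch of how Garner interference is typically quantified:
# longer mean reaction times in the orthogonal condition (irrelevant
# dimension varies) than in the control condition (irrelevant dimension
# fixed) indicate integral processing. The RT arrays are synthetic.
import numpy as np

rt_control = np.array([512, 498, 530, 505, 521], dtype=float)     # ms
rt_orthogonal = np.array([548, 561, 539, 570, 552], dtype=float)  # ms

garner_interference = rt_orthogonal.mean() - rt_control.mean()
print(f"Garner interference: {garner_interference:.1f} ms")
# > 0 suggests integral processing of gender and expression;
# ~ 0 suggests the two dimensions are separable.
```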

  20. Facial Expression Related vMMN: Disentangling Emotional from Neutral Change Detection

    Science.gov (United States)

    Kovarski, Klara; Latinus, Marianne; Charpentier, Judith; Cléry, Helen; Roux, Sylvie; Houy-Durand, Emmanuelle; Saby, Agathe; Bonnet-Brilhault, Frédérique; Batty, Magali; Gomot, Marie

    2017-01-01

    Detection of changes in facial emotional expressions is crucial to communicate and to rapidly and automatically process possible threats in the environment. Recent studies suggest that expression-related visual mismatch negativity (vMMN) reflects automatic processing of emotional changes. In the present study we used a controlled paradigm to investigate the specificity of emotional change-detection. In order to disentangle specific responses to emotional deviants from those to neutral deviants, we presented a neutral expression as the standard stimulus (p = 0.80) and both angry and neutral expressions as deviants (p = 0.10, each). In addition to an oddball sequence, an equiprobable sequence was presented to control for refractoriness and low-level differences. Our results showed that in an early time window (100–200 ms), the controlled vMMN was greater than the oddball vMMN only for the angry deviant, suggesting the importance of controlling for refractoriness and stimulus physical features in emotion-related studies. Within the controlled vMMN, angry and neutral deviants both elicited early and late peaks occurring at 140 and 310 ms, respectively, but only the emotional vMMN presented sustained amplitude after each peak. By directly comparing responses to emotional and neutral deviants, our study provides evidence of specific activity reflecting the automatic detection of emotional change. This differs from broader "visual" change processing, and suggests the involvement of two partially-distinct pre-attentional systems in the detection of changes in facial expressions. PMID:28194102
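    The controlled vMMN described above is a difference wave: the response to a deviant in the oddball sequence minus the response to the physically identical stimulus in the equiprobable control sequence. A minimal sketch with synthetic epochs (the sampling rate, window, and data are placeholders, not the study's recordings):

```python
# Sketch of the controlled vMMN computation: deviant ERP minus the ERP to
# the same stimulus in the equiprobable control sequence, which rules out
# refractoriness and low-level differences. Epochs here are random noise;
# a real analysis would start from preprocessed EEG.
import numpy as np

fs = 500                                  # sampling rate (Hz), assumed
t = np.arange(-0.1, 0.5, 1 / fs)          # epoch from -100 to 500 ms
rng = np.random.default_rng(1)

# trials x time for one (fronto-central) channel, fake data
erp_deviant = rng.normal(0, 1, (80, t.size)).mean(axis=0)
erp_control = rng.normal(0, 1, (80, t.size)).mean(axis=0)

vmmn = erp_deviant - erp_control          # controlled difference wave

# mean amplitude in the early 100-200 ms window analyzed in the study
win = (t >= 0.1) & (t <= 0.2)
print(f"early vMMN mean amplitude: {vmmn[win].mean():.3f} (a.u.)")
```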

  1. The influence of the context on facial expressions perception: a behavioral study on the Kuleshov effect

    DEFF Research Database (Denmark)

    Calbi, Marta; Heimann, Katrin; Barratt, Daniel

    2017-01-01

    Facial expressions are of major importance to understanding the emotions, intentions, and mental states of others. Strikingly, so far most studies on the perception and comprehension of emotions have used isolated facial expressions as stimuli; for example, photographs of actors displaying facial...... expressions belonging to one of the so called ‘basic emotions’. However, our real experience during social interactions is different: facial expressions of emotion are mostly perceived in a wider context, constituted by body language, the surrounding environment, and our beliefs and expectations. Already...... in the early twentieth century, the Russian filmmaker Lev Kuleshov argued that such context could significantly change our interpretation of facial expressions. Prior experiments have shown behavioral effects pointing in this direction, but have only used static images portraying some basic emotions. In our...

  2. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults.

    Science.gov (United States)

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development-The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions-angry, fearful, sad, happy, surprised, and disgusted-and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  3. FaceWarehouse: a 3D facial expression database for visual computing.

    Science.gov (United States)

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.

  4. Seeing Enemies? A systematic review of anger bias in the perception of facial expressions among anger-prone and aggressive populations

    DEFF Research Database (Denmark)

    Mellentin, Angelina Isabella; Dervisevic, Ajla; Stenager, Elsebeth

    2015-01-01

    that mediates aggression in individuals susceptible to externalizing behavior. This paper therefore aims to clarify whether anger-prone and aggressive populations are emotionally biased towards perceiving others as angry and hostile when processing facial expressions in neuropsychological paradigms...

  5. [Facial expressions of negative emotions in clinical interviews: The development, reliability and validity of a categorical system for the attribution of functions to facial expressions of negative emotions].

    Science.gov (United States)

    Bock, Astrid; Huber, Eva; Peham, Doris; Benecke, Cord

    2015-01-01

    We report the development (Study 1) and validation (Study 2) of a categorical system for the attribution of facial expressions of negative emotions to specific functions. The facial expressions observed in OPD interviews (OPD Task Force 2009) are coded according to the Facial Action Coding System (FACS; Ekman et al. 2002) and attributed to categories of basic emotional displays using EmFACS (Friesen & Ekman 1984). In Study 1 we analyze a partial sample of 20 interviews and postulate 10 categories of functions that can be arranged into three main categories (interactive, self and object). In Study 2 we rate the facial expressions (n=2320) from the OPD interviews (10 minutes per interview) of 80 female subjects (16 healthy, 64 with a DSM-IV diagnosis; age: 18-57 years) according to the categorical system and correlate them with problematic relationship experiences (measured with the IIP, Horowitz et al. 2000). Functions of negative facial expressions can be attributed reliably and validly with the RFE-Coding System. The attribution of interactive, self-related and object-related functions allows for a deeper understanding of the emotional facial expressions of patients with mental disorders.

  6. Facial feedback affects valence judgments of dynamic and static emotional expressions

    Directory of Open Access Journals (Sweden)

    Sylwia eHyniewska

    2015-03-01

    The ability to judge others' emotions is required for the establishment and maintenance of smooth interactions in a community. Several lines of evidence suggest that the attribution of meaning to a face is influenced by the facial actions produced by an observer during the observation of a face. However, empirical studies testing causal relationships between observers' facial actions and emotion judgments have reported mixed findings. This study is the first to measure emotion judgments in terms of valence and arousal dimensions while comparing dynamic versus static presentations of facial expressions. We presented pictures and videos of facial expressions of anger and happiness. Participants (N = 36) were asked to differentiate between the gender of faces by activating the corrugator supercilii muscle (brow lowering) and zygomaticus major muscle (cheek raising). They were also asked to evaluate the internal states of the stimuli using the affect grid while maintaining the facial action until they finished responding. The cheek-raising condition increased the attributed valence scores compared with the brow-lowering condition. This effect of facial actions was observed for static as well as for dynamic facial expressions. These data suggest that facial feedback mechanisms contribute to the judgment of the valence of emotional facial expressions.

  7. Electromyographic Responses to Emotional Facial Expressions in 6-7 Year Olds with Autism Spectrum Disorders

    Science.gov (United States)

    Deschamps, P. K. H.; Coppes, L.; Kenemans, J. L.; Schutter, D. J. L. G.; Matthys, W.

    2015-01-01

    This study aimed to examine facial mimicry in 6-7 year old children with autism spectrum disorder (ASD) and to explore whether facial mimicry was related to the severity of impairment in social responsiveness. Facial electromyographic activity in response to angry, fearful, sad and happy facial expressions was recorded in twenty 6-7 year old…

  8. Spatial and temporal analysis of gene expression during growth and fusion of the mouse facial prominences.

    Science.gov (United States)

    Feng, Weiguo; Leach, Sonia M; Tipney, Hannah; Phang, Tzulip; Geraci, Mark; Spritz, Richard A; Hunter, Lawrence E; Williams, Trevor

    2009-12-16

    Orofacial malformations resulting from genetic and/or environmental causes are frequent human birth defects yet their etiology is often unclear because of insufficient information concerning the molecular, cellular and morphogenetic processes responsible for normal facial development. We have, therefore, derived a comprehensive expression dataset for mouse orofacial development, interrogating three distinct regions - the mandibular, maxillary and frontonasal prominences. To capture the dynamic changes in the transcriptome during face formation, we sampled five time points between E10.5-E12.5, spanning the developmental period from establishment of the prominences to their fusion to form the mature facial platform. Seven independent biological replicates were used for each sample ensuring robustness and quality of the dataset. Here, we provide a general overview of the dataset, characterizing aspects of gene expression changes at both the spatial and temporal level. Considerable coordinate regulation occurs across the three prominences during this period of facial growth and morphogenesis, with a switch from expression of genes involved in cell proliferation to those associated with differentiation. An accompanying shift in the expression of polycomb and trithorax genes presumably maintains appropriate patterns of gene expression in precursor or differentiated cells, respectively. Superimposed on the many coordinated changes are prominence-specific differences in the expression of genes encoding transcription factors, extracellular matrix components, and signaling molecules. Thus, the elaboration of each prominence will be driven by particular combinations of transcription factors coupled with specific cell:cell and cell:matrix interactions. The dataset also reveals several prominence-specific genes not previously associated with orofacial development, a subset of which we externally validate. Several of these latter genes are components of bidirectional

  9. Spatial and temporal analysis of gene expression during growth and fusion of the mouse facial prominences.

    Directory of Open Access Journals (Sweden)

    Weiguo Feng

    Orofacial malformations resulting from genetic and/or environmental causes are frequent human birth defects yet their etiology is often unclear because of insufficient information concerning the molecular, cellular and morphogenetic processes responsible for normal facial development. We have, therefore, derived a comprehensive expression dataset for mouse orofacial development, interrogating three distinct regions - the mandibular, maxillary and frontonasal prominences. To capture the dynamic changes in the transcriptome during face formation, we sampled five time points between E10.5-E12.5, spanning the developmental period from establishment of the prominences to their fusion to form the mature facial platform. Seven independent biological replicates were used for each sample ensuring robustness and quality of the dataset. Here, we provide a general overview of the dataset, characterizing aspects of gene expression changes at both the spatial and temporal level. Considerable coordinate regulation occurs across the three prominences during this period of facial growth and morphogenesis, with a switch from expression of genes involved in cell proliferation to those associated with differentiation. An accompanying shift in the expression of polycomb and trithorax genes presumably maintains appropriate patterns of gene expression in precursor or differentiated cells, respectively. Superimposed on the many coordinated changes are prominence-specific differences in the expression of genes encoding transcription factors, extracellular matrix components, and signaling molecules. Thus, the elaboration of each prominence will be driven by particular combinations of transcription factors coupled with specific cell:cell and cell:matrix interactions. The dataset also reveals several prominence-specific genes not previously associated with orofacial development, a subset of which we externally validate. Several of these latter genes are components of…

  10. Pose-variant facial expression recognition using an embedded image system

    Science.gov (United States)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction has been automated facial expression recognition. Through recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified into happiness, neutral, sadness, surprise or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low-resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
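    A hedged sketch of the feature pipeline this abstract describes: pairwise distances between tracked feature points fed to an SVM. The point coordinates and class structure below are synthetic stand-ins for the AAM tracker's output:

```python
# Sketch of distance features from 14 tracked facial points, classified by
# an SVM into the five expressions named above. All "faces" are synthetic;
# class differences are simulated by scaling the point cloud.
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def distance_features(points):
    """points: (14, 2) array of (x, y) locations -> 91 pairwise distances."""
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

rng = np.random.default_rng(0)
labels = ["happiness", "neutral", "sadness", "surprise", "anger"]

# Synthetic training set: 50 faces per expression, 14 tracked points each.
X = np.array([distance_features(rng.normal(size=(14, 2)) * (1 + 0.5 * k))
              for k, _ in enumerate(labels) for _ in range(50)])
y = np.repeat(labels, 50)

clf = SVC(kernel="rbf").fit(X, y)
test_face = rng.normal(size=(14, 2)) * (1 + 0.5 * 3)   # scaled like class 3
print(clf.predict([distance_features(test_face)]))     # likely "surprise" here
```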

  11. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    Science.gov (United States)

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as face appearance is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
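    As a rough illustration of the convolutional ingredient (the paper's actual architecture and its rule-based smile/talk discrimination are not reproduced here), a minimal binary smile classifier on grayscale face crops:

```python
# Minimal sketch of a CNN for binary smile detection; the layer sizes and
# 64x64 grayscale input are assumptions for illustration only.
import torch
import torch.nn as nn

class SmileNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)  # smiling vs. not smiling

    def forward(self, x):                  # x: (batch, 1, 64, 64) face crops
        x = self.features(x)
        return self.classifier(x.flatten(1))

net = SmileNet()
dummy_faces = torch.randn(4, 1, 64, 64)    # stand-in for detected face crops
logits = net(dummy_faces)
print(logits.shape)                        # torch.Size([4, 2])
```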

  12. Empathy, but not mimicry restriction, influences the recognition of change in emotional facial expressions.

    Science.gov (United States)

    Kosonogov, Vladimir; Titova, Alisa; Vorobyeva, Elena

    2015-01-01

    The current study addressed the hypothesis that empathy and the restriction of observers' facial muscles can influence recognition of emotional facial expressions. A sample of 74 participants recognized the subjective onset of emotional facial expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral) in a series of morphed face photographs showing a gradual change (frame by frame) from one expression to another. High-empathy participants (as measured by the Empathy Quotient) recognized emotional facial expressions from earlier photographs in the series than did low-empathy participants, but there was no difference in exploration time. Restricting observers' facial muscles (with plasters and a stick held in the mouth) did not influence the responses. We discuss these findings in the context of the embodied simulation theory and previous data on empathy.

  13. The Child Affective Facial Expression (CAFE Set: Validity and Reliability from Untrained Adults

    Directory of Open Access Journals (Sweden)

    Vanessa eLoBue

    2015-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  14. Représentation invariante des expressions faciales : Application en analyse multimodale des émotions (Invariant representation of facial expressions: Application to multimodal emotion analysis)

    OpenAIRE

    Soladié, Catherine,

    2013-01-01

    More and more applications aim at automating the analysis of human behavior to assist or replace the experts who are conducting these analyzes. This thesis deals with the analysis of facial expressions, which provide key information on these behaviors.Our work proposes an innovative solution to effectively define a facial expression, regardless of the morphology of the subject. The approach is based on the organization of expressions.We show that the organization of expressions, such as defin...

  15. Artificial Neural Networks and Gene Expression Programing based age estimation using facial features

    Directory of Open Access Journals (Sweden)

    Baddrud Z. Laskar

    2015-10-01

    This work is about estimating human age automatically through analysis of facial images, a task with many real-world applications. Due to rapid advances in machine vision, facial image processing, and computer graphics, automatic age estimation from faces is currently a prominent research topic, with applications in biometrics, security, surveillance, control, forensic art, entertainment, online customer management and support, and cosmetology. As it is difficult to estimate an exact age, this system estimates a range of ages. Four sets of classifications have been used to assign a person's data to one of the different age groups. The distinctive aspect of this study is the use of two technologies, Artificial Neural Networks (ANN) and Gene Expression Programing (GEP), to estimate age and then compare the results. New methodologies like Gene Expression Programing (GEP) have been explored here and significant results were found. The dataset has been prepared with careful preprocessing methods to provide more reliable results. The proposed approach has been developed, trained, and tested using both methods. A public dataset, FG-NET, was used to test the system. The quality of the proposed system for age estimation using facial features is shown by broad experiments on the available FG-NET database.
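    A minimal sketch of the ANN half of this approach, mapping facial feature vectors to one of four age groups; the GEP half has no standard library implementation and is not sketched here. Features and labels below are synthetic stand-ins for the preprocessed FG-NET descriptors:

```python
# Sketch of a multilayer perceptron classifying faces into four age ranges.
# Features are random placeholders, so held-out accuracy will hover around
# chance; with real facial descriptors the same pipeline applies.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 400, 30
X = rng.normal(size=(n_samples, n_features))          # fake facial features
y = rng.integers(0, 4, size=n_samples)                # four age-group labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
mlp.fit(X_tr, y_tr)
print("held-out accuracy:", mlp.score(X_te, y_te))
```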

  16. Seeing enemies? A systematic review and treatment proposal for anger bias in the perception of facial expressions among anger-prone and aggressive populations

    DEFF Research Database (Denmark)

    Mellentin, Angelina Isabella; Dervisevic, Ajla; Stenager, Elsebeth

    2014-01-01

    with externalizing and potential aggressive behavior are characterized by attentional bias towards perceiving others as angry and hostile, when processing facial expressions in neuropsychological paradigms. Based on this review the second objective was to recommend potential treatment for antisocial pathology...... literature reveals that anger and hostile bias in the processing of facial expressions could indeed be another distinctive cognitive dysfunction in anger-prone and aggressive populations, in addition to recognition impairment in decoding negative facial expressions. Conclusion: Based on the results......; Attentional Recognition Impairment Modification (ARIM). Applying this treatment, ARIM, supraminally and subliminally presented stimuli could target more automatic processes mediating fear and anxiety and lead them to focus on the neglected ocular or facial expressions. The neuropsychological treatment...

  17. Tactile Stimulation of the Face and the Production of Facial Expressions Activate Neurons in the Primate Amygdala

    Science.gov (United States)

    Mosher, Clayton P.; Zimmerman, Prisca E.; Fuglevand, Andrew J.

    2016-01-01

    The majority of neurophysiological studies that have explored the role of the primate amygdala in the evaluation of social signals have relied on visual stimuli such as images of facial expressions. Vision, however, is not the only sensory modality that carries social signals. Both humans and nonhuman primates exchange emotionally meaningful social signals through touch. Indeed, social grooming in nonhuman primates and caressing touch in humans are critical for building lasting and reassuring social bonds. To determine the role of the amygdala in processing touch, we recorded the responses of single neurons in the macaque amygdala while we applied tactile stimuli to the face. We found that one-third of the recorded neurons responded to tactile stimulation. Although we recorded exclusively from the right amygdala, the receptive fields of 98% of the neurons were bilateral. A fraction of these tactile neurons were monitored during the production of facial expressions and during facial movements elicited occasionally by touch stimuli. Firing rates arising during the production of facial expressions were similar to those elicited by tactile stimulation. In a subset of cells, combining tactile stimulation with facial movement further augmented the firing rates. This suggests that tactile neurons in the amygdala receive input from skin mechanoceptors that are activated by touch and by compressions and stretches of the facial skin during the contraction of the underlying muscles. Tactile neurons in the amygdala may play a role in extracting the valence of touch stimuli and/or monitoring the facial expressions of self during social interactions. PMID:27752543

  18. Developmental changes in facial expression recognition in Japanese school-age children.

    Science.gov (United States)

    Naruse, Susumu; Hashimoto, Toshiaki; Mori, Kenji; Tsuda, Yoshimi; Takahara, Mitsue; Kagami, Shoji

    2013-01-01

    Facial expressions hold abundant information and play a central part in communication. In daily life, we must build amicable interpersonal relationships by communicating through verbal and nonverbal behaviors. Although school age is a period of rapid social growth, few studies have examined developmental changes in facial expression recognition during this period. This study investigated such developmental changes by examining observers' gaze on others' expressions. Participants were 87 school-age children from first to sixth grade (41 boys, 46 girls). The Tobii T60 Eye-tracker (Tobii Technologies, Sweden) was used to record eye movements during a task of matching pre-instructed emotion words to facial expression images (neutral, angry, happy, surprised, sad, disgusted) presented on a monitor fixed at a distance of 50 cm. In the task of matching the six facial expression images and emotion words, the mid- and higher-grade children answered more accurately than the lower-grade children for four expressions, excluding neutral and happy. For fixation time and fixation count, the lower-grade children scored lower than the other grades, gazing at all facial expressions significantly fewer times and for shorter periods. These results suggest that the transition from the lower to the middle grades is a turning point in facial expression recognition.

  19. Effect of facial expressions on student's comprehension recognition in virtual educational environments.

    Science.gov (United States)

    Sathik, Mohamed; Jonathan, Sofia G

    2013-01-01

    The scope of this research is to examine whether the facial expressions of students are a tool for the lecturer to interpret students' comprehension levels in a virtual classroom, and to identify the impact of facial expressions during a lecture and the level of comprehension shown by these expressions. Our goal is to identify physical behaviours of the face that are linked to emotional states, and then to identify how these emotional states are linked to student comprehension. In this work, the effectiveness of students' facial expressions in non-verbal communication in a virtual pedagogical environment was investigated first. Next, the specific elements of learner behaviour for the different emotional states and the relevant facial expressions signaled by the action units were interpreted. Finally, the work focused on finding the impact of the relevant facial expressions on student comprehension. Experimentation was done through a survey involving quantitative classroom observations by lecturers, in which the behaviours of students were recorded and statistically analyzed. The results show that facial expression is the nonverbal communication mode most frequently used by students in the virtual classroom, and that students' facial expressions are significantly correlated with their emotions, which helps in recognizing their comprehension of the lecture.

  20. Realistic Facial Expression of Virtual Human Based on Color, Sweat, and Tears Effects

    Directory of Open Access Journals (Sweden)

    Mohammed Hazim Alkawaz

    2014-01-01

    Generating extreme appearances such as scared awaiting sweating while happy fit for tears (cry) and blushing (anger and happiness) is the key issue in achieving the high quality facial animation. The effects of sweat, tears, and colors are integrated into a single animation model to create realistic facial expressions of 3D avatar. The physical properties of muscles, emotions, or the fluid properties with sweating and tears initiators are incorporated. The action units (AUs) of facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods which are combined with facial animation technique to produce complex facial expressions. The effects of oxygenation of the facial skin color appearance are measured using the pulse oximeter system and the 3D skin analyzer. The result shows that virtual human facial expression is enhanced by mimicking actual sweating and tears simulations for all extreme expressions. The proposed method has contribution towards the development of facial animation industry and game as well as computer graphics.

  1. Realistic facial expression of virtual human based on color, sweat, and tears effects.

    Science.gov (United States)

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances such as scared awaiting sweating while happy fit for tears (cry) and blushing (anger and happiness) is the key issue in achieving the high quality facial animation. The effects of sweat, tears, and colors are integrated into a single animation model to create realistic facial expressions of 3D avatar. The physical properties of muscles, emotions, or the fluid properties with sweating and tears initiators are incorporated. The action units (AUs) of facial action coding system are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods which are combined with facial animation technique to produce complex facial expressions. The effects of oxygenation of the facial skin color appearance are measured using the pulse oximeter system and the 3D skin analyzer. The result shows that virtual human facial expression is enhanced by mimicking actual sweating and tears simulations for all extreme expressions. The proposed method has contribution towards the development of facial animation industry and game as well as computer graphics.
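    As a toy illustration of the particle-system ingredient (the paper's combined particle/SPH fluid simulation is far richer), a few "tear" particles integrated under gravity down a face:

```python
# Toy particle-system sketch: two "tear" particles integrated under gravity,
# standing in for the paper's particle/SPH fluid simulation (not reproduced).
# Positions are in normalized face coordinates; all values are invented.
import numpy as np

GRAVITY = np.array([0.0, -9.8])
DT = 1.0 / 60.0                            # 60 fps animation step

positions = np.array([[0.45, 0.60], [0.55, 0.60]])   # near the tear ducts
velocities = np.zeros_like(positions)

for frame in range(10):
    velocities += GRAVITY * DT             # accumulate gravitational pull
    positions += velocities * DT           # integrate particle positions
    positions[:, 1] = np.maximum(positions[:, 1], 0.0)  # stop at chin line

print(positions)
```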

  2. Valence Scaling of Dynamic Facial Expressions Is Altered in High-Functioning Subjects with Autism Spectrum Disorders: An FMRI Study

    Science.gov (United States)

    Rahko, Jukka S.; Paakki, Jyri-Johan; Starck, Tuomo H.; Nikkinen, Juha; Pauls, David L.; Katsyri, Jari V.; Jansson-Verkasalo, Eira M.; Carter, Alice S.; Hurtig, Tuula M.; Mattila, Marja-Leena; Jussila, Katja K.; Remes, Jukka J.; Kuusikko-Gauffin, Sanna A.; Sams, Mikko E.; Bolte, Sven; Ebeling, Hanna E.; Moilanen, Irma K.; Tervonen, Osmo; Kiviniemi, Vesa

    2012-01-01

    FMRI was performed with the dynamic facial expressions fear and happiness. This was done to detect differences in valence processing between 25 subjects with autism spectrum disorders (ASDs) and 27 typically developing controls. Valence scaling was abnormal in ASDs. Positive valence induces lower deactivation and abnormally strong activity in ASD…

  3. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    Science.gov (United States)

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding.

  4. The perception of positive and negative facial expressions in unilateral brain-damaged patients: A meta-analysis.

    Science.gov (United States)

    Abbott, Jacenta D; Cumming, Geoff; Fidler, Fiona; Lindell, Annukka K

    2013-01-01

    How the brain is lateralised for emotion processing remains a key question in contemporary neuropsychological research. The right hemisphere hypothesis asserts that the right hemisphere dominates emotion processing, whereas the valence hypothesis holds that positive emotion is processed in the left hemisphere and negative emotion is controlled by the right hemisphere. A meta-analysis was conducted to assess unilateral brain-damaged individuals' performance on tasks of facial emotion perception according to valence. A systematic search of the literature identified seven articles that met the conservative selection criteria and could be included in a meta-analysis. A total of 12 meta-analyses of facial expression perception were constructed assessing identification and labelling tasks according to valence and the side of brain damage. The results demonstrated that both left and right hemisphere damage leads to impairments in emotion perception (identification and labelling) irrespective of valence. Importantly, right hemisphere damage prompted more pronounced emotion perception impairment than left hemisphere damage, across valence, suggesting right hemisphere dominance for emotion perception. Furthermore, right hemisphere damage was associated with a larger tendency for impaired perception of negative than positive emotion across identification and labelling tasks. Overall the findings support Adolphs, Jansari, and Tranel's (2001) model whereby the right hemisphere preferentially processes negative facial expressions and both hemispheres process positive facial expressions.

  5. Disgust exposure and explicit emotional appraisal enhance the LPP in response to disgusted facial expressions.

    Science.gov (United States)

    Hartigan, Alex; Richards, Anne

    2017-08-01

    The influence of prior exposure to disgusting imagery and the conscious appraisal of facial expressions were examined in an event-related potential (ERP) experiment. Participants were exposed to either a disgust or a control manipulation and then presented with emotional and neutral expressions. An assessment of the gender of the face was required during half the blocks and an affective assessment of the emotion in the other half. The emotion-related early posterior negativity (EPN) and late positive potential (LPP) ERP components were examined for disgust and neutral stimuli. Results indicated that the EPN was enhanced for disgusted over neutral expressions. Prior disgust exposure modulated the middle phase of the LPP in response to disgusted but not neutral expressions, but only when the emotion of the face was explicitly evaluated. The late LPP was enhanced independently of stimuli when an emotional decision was made. Results demonstrated that exposure to disgusting imagery can affect the subsequent processing of disgusted facial expressions when the emotion is under conscious appraisal.

  6. Learning Expressionlets via Universal Manifold Model for Dynamic Facial Expression Recognition

    Science.gov (United States)

    Liu, Mengyi; Shan, Shiguang; Wang, Ruiping; Chen, Xilin

    2016-12-01

    Facial expression is a temporally dynamic event which can be decomposed into a set of muscle motions occurring in different facial regions over various time intervals. For dynamic expression recognition, two key issues, temporal alignment and semantics-aware dynamic representation, must be taken into account. In this paper, we attempt to solve both problems via manifold modeling of videos based on a novel mid-level representation, i.e., the expressionlet. Specifically, our method contains three key stages: 1) each expression video clip is characterized as a spatial-temporal manifold (STM) formed by dense low-level features; 2) a Universal Manifold Model (UMM) is learned over all low-level features and represented as a set of local modes to statistically unify all the STMs; and 3) the local modes on each STM are instantiated by fitting to the UMM, and the corresponding expressionlet is constructed by modeling the variations in each local mode. With the above strategy, expression videos are naturally aligned both spatially and temporally. To enhance the discriminative power, the expressionlet-based STM representation is further processed with discriminant embedding. Our method is evaluated on four public expression databases: CK+, MMI, Oulu-CASIA, and FERA. In all cases, our method outperforms the known state of the art by a large margin.
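    A rough sketch of the expressionlet idea under strong simplifying assumptions: a Gaussian mixture learned over pooled low-level features plays the role of the Universal Manifold Model, and each video is represented by per-mode statistics of its own features. This is not the authors' implementation; all data and dimensions are synthetic:

```python
# Simplified "expressionlet" sketch: a GMM stands in for the UMM's local
# modes, and a video is summarized by the mean feature assigned to each mode.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
K, D = 8, 16                               # local modes, feature dimension

# "Low-level features" pooled from many expression videos (fake data).
all_features = rng.normal(size=(2000, D))
umm = GaussianMixture(n_components=K, random_state=0).fit(all_features)

def expressionlet(video_features):
    """Represent one video by the responsibility-weighted mean per UMM mode."""
    resp = umm.predict_proba(video_features)          # (frames, K)
    weights = resp.sum(axis=0, keepdims=True).T + 1e-8
    return (resp.T @ video_features) / weights        # (K, D) representation

one_video = rng.normal(size=(120, D))                 # 120 frames of features
print(expressionlet(one_video).shape)                 # (8, 16)
```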

  7. Facial pain expression in dementia: a review of the experimental and clinical evidence

    NARCIS (Netherlands)

    Lautenbacher, Stefan; Kunz, Miriam

    2016-01-01

    The analysis of the facial expression of pain promises to be one of the most sensitive tools for the detection of pain in patients with moderate to severe forms of dementia, who can no longer self-report pain. Fine-grain analysis using the Facial Action Coding System (FACS) is possible in research…

  8. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    Science.gov (United States)

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  9. The use of facial expressions for pain assessment purposes in dementia: A narrative review

    NARCIS (Netherlands)

    Oosterman, J.M.; Zwakhalen, S.; Sampson, E.L.; Kunz, M.

    2016-01-01

    Facial expressions convey reliable nonverbal signals about pain and thus are very useful for assessing pain in patients with limited communicative ability, such as patients with dementia. In this review, we present an overview of the available pain observation tools and how they make use of facial…

  11. 3D Face Model Dataset: Automatic Detection of Facial Expressions and Emotions for Educational Environments

    Science.gov (United States)

    Chickerur, Satyadhyan; Joshi, Kartik

    2015-01-01

    Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using facial expressions of students in educational environment is useful in providing insight into the effectiveness of tutoring…

  12. Rules versus Prototype Matching: Strategies of Perception of Emotional Facial Expressions in the Autism Spectrum

    Science.gov (United States)

    Rutherford, M. D.; McIntosh, Daniel N.

    2007-01-01

    When perceiving emotional facial expressions, people with autistic spectrum disorders (ASD) appear to focus on individual facial features rather than configurations. This paper tests whether individuals with ASD use these features in a rule-based strategy of emotional perception, rather than a typical, template-based strategy by considering…

  13. Multi-output Laplacian Dynamic Ordinal Regression for Facial Expression Recognition and Intensity Estimation

    NARCIS (Netherlands)

    Rudovic, Ognjen; Pavlovic, Vladimir; Pantic, Maja

    2012-01-01

    Automated facial expression recognition has received increased attention over the past two decades. Existing works in the field usually do not encode either the temporal evolution or the intensity of the observed facial displays. They also fail to jointly model multidimensional (multi-class)…

  16. Facing mixed emotions: Analytic and holistic perception of facial emotion expressions engages separate brain networks.

    Science.gov (United States)

    Meaux, Emilie; Vuilleumier, Patrik

    2016-11-01

    The ability to decode facial emotions is of primary importance for human social interactions; yet, it is still debated how we analyze faces to determine their expression. Here we compared the processing of emotional face expressions through holistic integration and/or local analysis of visual features, and determined which brain systems mediate these distinct processes. Behavioral, physiological, and brain responses to happy and angry faces were assessed by presenting congruent global configurations of expressions (e.g., happy top+happy bottom), incongruent composite configurations (e.g., angry top+happy bottom), and isolated features (e.g., happy top only). Top and bottom parts were always from the same individual. Twenty-six healthy volunteers were scanned using fMRI while they classified the expression in either the top or the bottom face part but ignored information in the other non-target part. Results indicate that the recognition of happy and anger expressions is neither strictly holistic nor analytic. Both routes were involved, but with a different role for analytic and holistic information depending on the emotion type, and different weights of local features between happy and anger expressions. Dissociable neural pathways were engaged depending on emotional face configurations. In particular, regions within the face processing network differed in their sensitivity to holistic expression information, which predominantly activated fusiform, inferior occipital areas and amygdala when internal features were congruent (i.e., template matching), whereas more local analysis of independent features preferentially engaged STS and prefrontal areas (IFG/OFC) in the context of full face configurations, but early visual areas and pulvinar when seen in isolated parts. Collectively, these findings suggest that facial emotion recognition recruits separate, but interactive dorsal and ventral routes within the face processing networks, whose engagement may be shaped by…

  17. Automatic Change Detection to Facial Expressions in Adolescents: Evidence from Visual Mismatch Negativity Responses

    Directory of Open Access Journals (Sweden)

    Tongran eLiu

    2016-03-01

    Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were recruited to complete an emotional oddball task featuring one happy and one fearful condition. Event-related potentials (ERPs) were measured via electroencephalography (EEG) and electrooculography (EOG) recording to detect visual mismatch negativity (vMMN) with regard to the automatic detection of changes in facial expressions between the two age groups. The current findings demonstrated that the adolescent group featured more negative vMMN amplitudes than the adult group in the fronto-central region during the 120-200 ms interval. During the time window of 370-450 ms, only the adult group showed better automatic processing of fearful faces than of happy faces. The present study indicates that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and sheds light on the neurodevelopment of automatic processes concerning social-emotional information.

  18. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time.

    Science.gov (United States)

    Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G

    2014-01-20

    Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although highly dynamical, little is known about the form and function of facial expression temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication is comprised of six basic (i.e., psychologically irreducible) categories, and instead suggesting four.

  19. Différences dans les expressions faciales selon l'âge et le sexe (Differences in facial expressions by age and sex)

    OpenAIRE

    Houstis, Odyssia

    2010-01-01

    Our aim was to validate a method that allows us to measure facial expressions, to apply this method to evaluate facial expressions quantitatively in adults and in children, and to compare the results obtained for growing children and adults in order to estimate the effects of age and sex. Our 2-dimensional method is reliable and useful for measuring facial expressions quantitatively in a simple, efficient, and inexpensive way…

  20. Recognition of Facial Expression Using Eigenvector Based Distributed Features and Euclidean Distance Based Decision Making Technique

    Directory of Open Access Journals (Sweden)

    Jeemoni Kalita

    2013-03-01

    In this paper, an Eigenvector-based system is presented for recognizing facial expressions from digital facial images. In the approach, the images were first acquired, and five significant portions were cropped from each image to extract and store the Eigenvectors specific to the expressions. The Eigenvectors for the test images were also computed, and finally the input facial image was recognized when similarity was obtained by calculating the minimum Euclidean distance between the test image and the different expressions.
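    A minimal sketch of the eigenvector-plus-Euclidean-distance scheme described above: project faces onto the training set's principal eigenvectors and label a test image by the nearest expression class in that space. Images here are random stand-ins, and the class list and dimensions are assumptions for illustration:

```python
# Eigenvector projection + minimum-Euclidean-distance classification,
# in the spirit of the scheme above. "Images" are random vectors whose
# class structure is simulated by shifting the pixel mean.
import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_pixels = 10, 32 * 32
classes = ["happy", "sad", "surprise", "anger"]

train = {c: rng.normal(loc=i, size=(n_per_class, n_pixels))
         for i, c in enumerate(classes)}

stacked = np.vstack(list(train.values()))
mean_face = stacked.mean(axis=0)
# Eigenvectors of the training set via SVD (rows of Vt span "face space").
_, _, Vt = np.linalg.svd(stacked - mean_face, full_matrices=False)
basis = Vt[:20]                                        # top 20 eigenvectors

def project(img):
    return basis @ (img - mean_face)

centroids = {c: project(imgs.mean(axis=0)) for c, imgs in train.items()}

test_img = rng.normal(loc=3, size=n_pixels)            # near the "anger" class
dists = {c: np.linalg.norm(project(test_img) - v) for c, v in centroids.items()}
print(min(dists, key=dists.get))                       # minimum Euclidean distance
```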

  1. Evolutionary Computational Method of Facial Expression Analysis for Content-based Video Retrieval using 2-Dimensional Cellular Automata

    CERN Document Server

    Geetha, P

    2010-01-01

    In this paper, Deterministic Cellular Automata (DCA) based video shot classification and retrieval is proposed. The deterministic 2D cellular automata model captures human facial expressions, both spontaneous and posed. The determinism stems from the fact that the facial muscle actions are standardized by the encodings of the Facial Action Coding System (FACS) and Action Units (AUs). Based on these encodings, we generate the set of evolutionary update rules of the DCA for each facial expression. We consider a Person-Independent Facial Expression Space (PIFES) to analyze the facial expressions based on partitioned 2D cellular automata, which capture the dynamics of facial expressions and classify the shots based on them. A target video shot is retrieved by comparing the expression obtained for the query frame's face against the key-face expressions in the database video. When consecutive key-face expressions in the database are highly similar to the query frame's face, the key faces are use...
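    A toy sketch of a deterministic 2D cellular-automaton update, the kind of grid evolution the approach above builds its expression-specific rules on. The majority-vote rule here is illustrative only, not the paper's FACS/AU-derived rule set:

```python
# Deterministic 2D cellular automaton: every cell updates from its
# 8-neighborhood by a fixed (deterministic) rule, here a simple majority vote.
import numpy as np

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(16, 16))      # binary cell states

def step(g):
    # Sum of the 8 neighbors, with wrap-around boundaries via np.roll.
    neighbors = sum(np.roll(np.roll(g, dx, axis=0), dy, axis=1)
                    for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                    if (dx, dy) != (0, 0))
    return (neighbors >= 4).astype(int)       # deterministic majority rule

for _ in range(5):
    grid = step(grid)                         # evolve the automaton
print(grid)
```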

  2. On the mutual effects of pain and emotion: facial pain expressions enhance pain perception and vice versa are perceived as more arousing when feeling pain.

    Science.gov (United States)

    Reicherts, Philipp; Gerdes, Antje B M; Pauli, Paul; Wieser, Matthias J

    2013-06-01

    Perception of emotional stimuli alters the perception of pain. Although facial expressions are powerful emotional cues - the expression of pain especially plays a crucial role for the experience and communication of pain - research on their influence on pain perception is scarce. In addition, the opposite effect of pain on the processing of emotion has been elucidated even less. To further scrutinize mutual influences of emotion and pain, 22 participants were administered painful and nonpainful thermal stimuli while watching dynamic facial expressions depicting joy, fear, pain, and a neutral expression. As a control condition of low visual complexity, a central fixation cross was presented. Participants rated the intensity of the thermal stimuli and evaluated valence and arousal of the facial expressions. In addition, facial electromyography was recorded as an index of emotion and pain perception. Results show that faces per se, compared to the low-level control condition, decreased pain, suggesting a general attentional modulation of pain by complex (social) stimuli. The facial response to painful stimulation revealed a significant correlation with pain intensity ratings. Most important, painful thermal stimuli increased the arousal of simultaneously presented pain expressions, and in turn, pain expressions resulted in higher pain ratings compared to all other facial expressions. These findings demonstrate that the modulation of pain and emotion is bidirectional, with pain faces being most prone to mutual influences, and support the view of interconnections between pain and emotion. Furthermore, the special relevance of pain faces for the processing of pain was demonstrated.

  3. Predicting Emotions in Facial Expressions from the Annotations in Naturally Occurring First Encounters

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2014-01-01

    This paper deals with the automatic identification of emotions from the manual annotations of the shape and functions of facial expressions in a Danish corpus of video recorded naturally occurring first encounters. More specifically, a support vector classifier is trained on the corpus annotations...... to identify emotions in facial expressions. In the classification experiments, we test to what extent emotions expressed in naturally-occurring conversations can be identified automatically by a classifier trained on the manual annotations of the shape of facial expressions and co-occurring speech tokens. We...... also investigate the relation between emotions and the communicative functions of facial expressions. Both emotion labels and their values in a three dimensional space are identified. The three dimensions are Pleasure, Arousal and Dominance. The results of our experiments indicate that the classifiers...

  5. The role of the analyst's facial expressions in psychoanalysis and psychoanalytic therapy.

    Science.gov (United States)

    Searles, H F

    This paper, while acknowledging implicitly the importance of transference-distortions in the patient's perceptions of the analyst's countenance, focuses primarily upon the real changes in the latter's facial expressions. The analyst's face has a central role in the phase of therapeutic symbiosis, as well as in subsequent individuation. It is in the realm of the analyst's facial expressions that the borderline patient, for example, can best find a bridge out of autism and into therapeutically symbiotic relatedness with the analyst. During this latter phase, then, each participant's facial expressions "belong" as much to the other as to oneself; that is, the expressions of each person are in the realm of transitional phenomena for both of them. The analyst's facial expressions are a highly, and often centrally, significant dimension of both psychoanalysis and psychoanalytic therapy. Illustrative clinical vignettes are presented from work with both patients who use the couch and those who do not.

  6. Influence of gender in the recognition of basic facial expressions: A critical literature review.

    Science.gov (United States)

    Forni-Santos, Larissa; Osório, Flávia L

    2015-09-22

    To conduct a systematic literature review about the influence of gender on the recognition of facial expressions of six basic emotions. We made a systematic search with the search terms (face OR facial) AND (processing OR recognition OR perception) AND (emotional OR emotion) AND (gender OR sex) in the PubMed, PsycINFO, LILACS, and SciELO electronic databases for articles assessing outcomes related to response accuracy, response latency, and emotional intensity. Article selection was performed according to parameters set by COCHRANE. The reference lists of the articles found through the database search were checked for additional references of interest. With respect to accuracy, women tend to perform better than men when all emotions are considered as a set. Regarding specific emotions, there seem to be no gender-related differences in the recognition of happiness, whereas results are quite heterogeneous for the remaining emotions, especially sadness, anger, and disgust. Fewer articles dealt with the parameters of response latency and emotional intensity, which hinders the generalization of their findings, especially in the face of their methodological differences. The analysis of the studies conducted to date does not allow for definite conclusions concerning the role of the observer's gender in the recognition of facial emotion, mostly because of the absence of standardized methods of investigation.

  7. Facial emotion processing in major depression: a systematic review of neuroimaging findings

    Directory of Open Access Journals (Sweden)

    Stuhrmann Anja

    2011-11-01

    Full Text Available Abstract Background Cognitive models of depression suggest that major depression is characterized by biased facial emotion processing, making facial stimuli particularly valuable for neuroimaging research on the neurobiological correlates of depression. The present review provides an overview of functional neuroimaging studies on abnormal facial emotion processing in major depression. Our main objective was to describe neurobiological differences between depressed patients with major depressive disorder (MDD) and healthy controls (HCs) regarding brain responsiveness to facial expressions and, furthermore, to delineate altered neural activation patterns associated with mood-congruent processing bias and to integrate these data with recent functional connectivity results. We further discuss methodological aspects potentially explaining the heterogeneity of results. Methods A Medline search was performed up to August 2011 in order to identify studies on emotional face processing in acutely depressed patients compared with HCs. A total of 25 studies using functional magnetic resonance imaging were reviewed. Results The analysis of neural activation data showed abnormalities in MDD patients in a common face processing network, pointing to mood-congruent processing bias (hyperactivation to negative and hypoactivation to positive stimuli), particularly in the amygdala, insula, parahippocampal gyrus, fusiform face area, and putamen. Furthermore, abnormal activation patterns were repeatedly found in parts of the cingulate gyrus and the orbitofrontal cortex, which are extended by investigations implementing functional connectivity analysis. However, despite several converging findings, some inconsistencies are observed, particularly in prefrontal areas, probably caused by heterogeneities in paradigms and patient samples. Conclusions Further studies in remitted patients and high-risk samples are required to discern whether the described abnormalities represent

  8. Appraisals Generate Specific Configurations of Facial Muscle Movements in a Gambling Task: Evidence for the Component Process Model of Emotion.

    Science.gov (United States)

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R

    2015-01-01

    Scherer's Component Process Model provides a theoretical framework for research on the production mechanism of emotion and facial emotional expression. The model predicts that appraisal results drive facial expressions, which unfold sequentially and cumulatively over time. In two experiments, we examined facial muscle activity changes (via facial electromyography recordings over the corrugator, cheek, and frontalis regions) in response to events in a gambling task. These events were experimentally manipulated feedback stimuli which presented simultaneous information directly affecting goal conduciveness (gambling outcome: win, loss, or break-even) and power appraisals (Experiments 1 and 2), as well as control appraisal (Experiment 2). We repeatedly found main effects of goal conduciveness (starting ~600 ms) and power appraisals (starting ~800 ms after feedback onset). Control appraisal main effects were inconclusive. Interaction effects of goal conduciveness and power appraisals were obtained in both experiments (Experiment 1: over the corrugator and cheek regions; Experiment 2: over the frontalis region), suggesting amplified goal conduciveness effects when power was high in contrast to invariant goal conduciveness effects when power was low. An interaction of goal conduciveness and control appraisals was also found over the cheek region, showing differential goal conduciveness effects when control was high and invariant effects when control was low. These interaction effects suggest that the appraisal of having sufficient control or power affects facial responses towards gambling outcomes. The result pattern suggests that the corrugator and frontalis regions are primarily related to cognitive operations that process motivational pertinence, whereas the cheek region would be more influenced by coping implications. Our results provide first evidence demonstrating that cognitive-evaluative mechanisms related to goal conduciveness, control, and power appraisals affect

  9. Dynamic Changes in Amygdala Psychophysiological Connectivity Reveal Distinct Neural Networks for Facial Expressions of Basic Emotions

    Science.gov (United States)

    Diano, Matteo; Tamietto, Marco; Celeghin, Alessia; Weiskrantz, Lawrence; Tatu, Mona-Karina; Bagnis, Arianna; Duca, Sergio; Geminiani, Giuliano; Cauda, Franco; Costa, Tommaso

    2017-01-01

    The quest to characterize the neural signature distinctive of different basic emotions has recently come under renewed scrutiny. Here we investigated whether facial expressions of different basic emotions modulate the functional connectivity of the amygdala with the rest of the brain. To this end, we presented seventeen healthy participants (8 females) with facial expressions of anger, disgust, fear, happiness, sadness and emotional neutrality and analyzed amygdala’s psychophysiological interaction (PPI). In fact, PPI can reveal how inter-regional amygdala communications change dynamically depending on perception of various emotional expressions to recruit different brain networks, compared to the functional interactions it entertains during perception of neutral expressions. We found that for each emotion the amygdala recruited a distinctive and spatially distributed set of structures to interact with. These changes in amygdala connectional patterns characterize the dynamic signature prototypical of individual emotion processing, and seemingly represent a neural mechanism that serves to implement the distinctive influence that each emotion exerts on perceptual, cognitive, and motor responses. Besides these differences, all emotions enhanced amygdala functional integration with premotor cortices compared to neutral faces. The present findings thus concur to reconceptualise the structure-function relation between brain and emotion from the traditional one-to-one mapping toward a network-based and dynamic perspective. PMID:28345642

  10. Task difficulty and response complexity modulate affective priming by emotional facial expressions.

    Science.gov (United States)

    Sassi, Federica; Campoy, Guillermo; Castillo, Alejandro; Inuggi, Alberto; Fuentes, Luis J

    2014-05-01

    In this study we used an affective priming task to address the issue of whether the processing of emotional facial expressions occurs automatically, independent of attention or attentional resources. Participants had to attend to the emotion expression of the prime face, or to a nonemotional feature of the prime face, the glasses. When participants attended to glasses (emotion unattended), they had to report whether the face wore glasses or not (the glasses easy condition) or whether the glasses were rounded or squared (the shape difficult condition). Affective priming, measured on valence decisions on target words, was mainly defined as interference from incongruent rather than facilitation from congruent trials. Significant priming effects were observed only in the emotion and glasses tasks but not in the shape task. When the key-response mapping increased in complexity, taxing working memory load, affective priming effects were reduced equally for the three types of tasks. Thus, attentional load and working memory load contributed additively to the observed reduction in affective priming. These results cast some doubts on the automaticity of processing emotional facial expressions.

  11. Facial Expression Recognition Based on Local Binary Patterns and Kernel Discriminant Isomap

    Directory of Open Access Journals (Sweden)

    Xiaoming Zhao

    2011-10-01

    Full Text Available Facial expression recognition is an interesting and challenging subject. Considering the nonlinear manifold structure of facial images, a new kernel-based manifold learning method, called kernel discriminant isometric mapping (KDIsomap), is proposed. KDIsomap aims to nonlinearly extract the discriminant information by maximizing the interclass scatter while minimizing the intraclass scatter in a reproducing kernel Hilbert space. KDIsomap is used to perform nonlinear dimensionality reduction on the extracted local binary patterns (LBP) facial features, and produce low-dimensional discriminant embedded data representations with striking performance improvement on facial expression recognition tasks. The nearest neighbor classifier with the Euclidean metric is used for facial expression classification. Facial expression recognition experiments are performed on two popular facial expression databases, i.e., the JAFFE database and the Cohn-Kanade database. Experimental results indicate that KDIsomap obtains the best accuracy of 81.59% on the JAFFE database, and 94.88% on the Cohn-Kanade database. KDIsomap outperforms the other methods used, such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA), kernel linear discriminant analysis (KLDA), as well as kernel isometric mapping (KIsomap).
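
    The overall pipeline can be sketched with off-the-shelf components, with the caveat that plain Isomap from scikit-learn stands in for the paper's supervised KDIsomap, which is not part of standard libraries; the toy data and parameters below are assumptions for illustration.

```python
# Sketch of the LBP -> manifold embedding -> nearest-neighbour pipeline.
# Unsupervised Isomap is a stand-in for the paper's KDIsomap.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(gray_image, p=8, r=1):
    """Uniform LBP codes summarised as a normalised histogram."""
    codes = local_binary_pattern(gray_image, P=p, R=r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2), density=True)
    return hist

rng = np.random.default_rng(1)
faces = rng.integers(0, 256, size=(60, 48, 48), dtype=np.uint8)  # stand-in crops
y = rng.integers(0, 7, size=60)                                  # 7 expression classes
X = np.array([lbp_histogram(f) for f in faces])

emb = Isomap(n_neighbors=8, n_components=5).fit_transform(X)
clf = KNeighborsClassifier(n_neighbors=1).fit(emb, y)
print(clf.score(emb, y))        # trivially 1.0 on the training data
```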

  12. Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition.

    Science.gov (United States)

    Willis, Megan L; Murphy, Jillian M; Ridley, Nicole J; Vercammen, Ans

    2015-12-01

    The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS to responses on the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction.

  13. Facial Expression Recognition Based on Features Derived From the Distinct LBP and GLCM

    Directory of Open Access Journals (Sweden)

    Gorti Satyanarayana Murty

    2014-01-01

    Full Text Available Automatic recognition of facial expressions can be an important component of natural human-machine interfaces; it may also be used in behavioural science and in clinical practice. Although humans recognise facial expressions virtually without effort or delay, reliable expression recognition by machine is still a challenge. This paper presents recognition of facial expression by integrating the features derived from the Grey Level Co-occurrence Matrix (GLCM) with a new structural approach derived from distinct LBPs (DLBPs) on a 3 x 3 First-order Compressed Image (FCI). The proposed method precisely recognizes the 7 categories of expressions, i.e.: neutral, happiness, sadness, surprise, anger, disgust and fear. The proposed method contains three phases. In the first phase each 5 x 5 sub image is compressed into a 3 x 3 sub image. The second phase derives two distinct LBPs (DLBPs) using the triangular patterns between the upper and lower parts of the 3 x 3 sub image. In the third phase a GLCM is constructed based on the DLBPs and feature parameters are evaluated for precise facial expression recognition. The derived DLBP is effective because it is integrated with the GLCM and provides better classification performance. The proposed method overcomes the disadvantages of statistical and formal LBP methods in estimating facial expressions. The experimental results demonstrate the effectiveness of the proposed method on facial expression recognition.
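
    For illustration, the snippet below extracts standard GLCM texture features of the kind the paper combines with its DLBPs; the 5 x 5 to 3 x 3 compression and the triangular-pattern DLBP construction are specific to the paper and omitted here. Function names assume a recent scikit-image.

```python
# Illustrative GLCM texture features of the kind combined with DLBPs above.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(2)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in face patch

# Co-occurrence of grey levels at distance 1, horizontally and vertically.
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```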

  14. Facial expression judgments support a socio-relational model, rather than a negativity bias model of political psychology.

    Science.gov (United States)

    Vigil, Jacob M; Strenth, Chance

    2014-06-01

    Self-reported opinions and judgments may be more rooted in expressive biases than in cognitive processing biases, and ultimately operate within a broader behavioral style for advertising the capacity - versus the trustworthiness - dimension of human reciprocity potential. Our analyses of facial expression judgments of likely voters are consistent with this thesis, and directly contradict one major prediction from the authors' "negativity-bias" model.

  15. Using Kinect for real-time emotion recognition via facial expressions

    Institute of Scientific and Technical Information of China (English)

    Qi-rong MAO; Xin-yu PAN; Yong-zhao ZHAN; Xiang-jun SHEN

    2015-01-01

    Emotion recognition via facial expressions (ERFE) has attracted a great deal of interest with recent advances in artificial intelligence and pattern recognition. Most studies are based on 2D images and are usually computationally expensive. In this paper, we propose a real-time emotion recognition approach based on both 2D and 3D facial expression features captured by Kinect sensors. To capture the deformation of the 3D mesh during facial expression, we combine the features of animation units (AUs) and feature point positions (FPPs) tracked by Kinect. A fusion algorithm based on improved emotional profiles (IEPs) and maximum confidence is proposed to recognize emotions with these real-time facial expression features. Experiments on both an emotion dataset and a real-time video show the superior performance of our method.

  16. Laterality of facial expressions of emotion: Universal and culture-specific influences

    National Research Council Canada - National Science Library

    Mandal, Manas K; Ambady, Nalini

    2004-01-01

    .... In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced...

  17. Impaired Attribution of Emotion to Facial Expressions in Anxiety and Major Depression

    NARCIS (Netherlands)

    Demenescu, Liliana R.; Kortekaas, Rudie; den Boer, Johan A.; Aleman, Andre

    2010-01-01

    Background: Recognition of others' emotions is an important aspect of interpersonal communication. In major depression, a significant emotion recognition impairment has been reported. It remains unclear whether the ability to recognize emotion from facial expressions is also impaired in anxiety

  18. Body Cues, Not Facial Expressions, Discriminate Between Intense Positive and Negative Emotions

    NARCIS (Netherlands)

    Aviezer, H.; Trope, Y.; Todorov, A.T.

    2012-01-01

    The distinction between positive and negative emotions is fundamental in emotion models. Intriguingly, neurobiological work suggests shared mechanisms across positive and negative emotions. We tested whether similar overlap occurs in real-life facial expressions. During peak intensities of emotion,

  19. Revealing variations in perception of mental states from dynamic facial expressions: a cautionary note.

    Directory of Open Access Journals (Sweden)

    Elisa Back

    Full Text Available Although a great deal of research has been conducted on the recognition of basic facial emotions (e.g., anger, happiness, sadness), much less research has been carried out on the more subtle facial expressions of an individual's mental state (e.g., anxiety, disinterest, relief). Of particular concern is that these mental state expressions provide a crucial source of communication in everyday life, but little is known about the accuracy with which natural dynamic facial expressions of mental states are identified and, in particular, the variability in mental state perception that is produced. Here we report the findings of two studies that investigated the accuracy and variability with which dynamic facial expressions of mental states were identified by participants. Both studies used stimuli carefully constructed using procedures adopted in previous research, and free-report (Study 1) and forced-choice (Study 2) measures of response accuracy and variability. The findings of both studies showed levels of response accuracy that were accompanied by substantial variation in the labels assigned by observers to each mental state. Thus, when mental states are identified from facial expressions in experiments, the identities attached to these expressions appear to vary considerably across individuals. This variability raises important issues for understanding the identification of mental states in everyday situations and for the use of responses in facial expression research.

  20. Inferring Emotion from Facial Expressions in Social Contexts: A Role of Self-Construal?

    Directory of Open Access Journals (Sweden)

    Ana Maria Draghici

    2010-01-01

    Full Text Available The study attempted to replicate the findings of Masuda and colleagues (2008), testing a modified hypothesis: when judging people’s emotions from facial expressions, interdependence-primed participants, in contrast to independence-primed participants, incorporate information from the social context, i.e. facial expressions of surrounding people. This was done in order to check whether self-construal could be the main variable influencing the cultural differences in emotion perception documented by Masuda and colleagues. Participants viewed cartoon images depicting a central character with a happy, sad, or neutral facial expression, surrounded by other characters expressing graded congruent or incongruent facial expressions. The hypothesis was only (partially) confirmed for the emotional judgments of a neutral facial expression target. However, a closer look at the individual means indicated both assimilation and contrast effects, without a systematic manner in which the background characters' facial expressions would have been incorporated into the participants' judgments, for either of the priming groups. The results are discussed in terms of priming and priming success, and possible moderators other than self-construal for the effect found by Masuda and colleagues.

  1. Deep Pain: Exploiting Long Short-Term Memory Networks for Facial Expression Classification

    DEFF Research Database (Denmark)

    Rodriguez, Pau; Cucurull, Guillem; Gonzàlez, Jordi

    2017-01-01

    ...in pain assessment, which are based on facial features only, we suggest that the performance can be enhanced by feeding the raw frames to deep learning models, outperforming the latest state-of-the-art results while also directly facing the problem of imbalanced data. As a baseline, our approach first... appearance versus taking into account the whole image: as a result, we outperform current state-of-the-art AUC performance in the UNBC-McMaster Shoulder Pain Expression Archive Database. In addition, to evaluate the generalization properties of our proposed methodology on facial motion recognition, we also report competitive results in the Cohn Kanade+ facial expression database.
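
    A minimal sketch of the general CNN-to-LSTM idea, with per-frame convolutional features fed to a recurrent layer, is given below in PyTorch; the layer sizes and input shapes are illustrative assumptions, not the paper's architecture.

```python
# Toy CNN -> LSTM model for classifying expression from frame sequences.
import torch
import torch.nn as nn

class CnnLstm(nn.Module):
    def __init__(self, n_classes=2, feat_dim=64, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                 # per-frame feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                     # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))     # (batch*time, feat_dim)
        out, _ = self.lstm(feats.view(b, t, -1))  # temporal modelling
        return self.head(out[:, -1])              # classify from the last step

logits = CnnLstm()(torch.randn(4, 8, 3, 64, 64))
print(logits.shape)                               # torch.Size([4, 2])
```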

  2. Synthesis of Facial Image with Expression Based on Muscular Contraction Parameters Using Linear Muscle and Sphincter Muscle

    Science.gov (United States)

    Ahn, Seonju; Ozawa, Shinji

    We aim to synthesize individual facial images with expressions based on muscular contraction parameters. We have previously proposed a method of calculating the muscular contraction parameters from an arbitrary face image without using learning for each individual. As a result, we could generate not only individual facial expressions, but also the facial expressions of various persons. In this paper, we propose a muscle-based facial model in which the facial muscles comprise both linear muscles and a novel sphincter muscle. Additionally, we propose a method of synthesizing an individual facial image with expression based on muscular contraction parameters. First, the individual facial model with expression is generated by fitting to the arbitrary face image. Next, the muscular contraction parameters that correspond to the expression displacement of the input face image are calculated. Finally, the facial expression is synthesized by the vertex displacements of a neutral facial model based on the calculated muscular contraction parameters. Experimental results reveal that the novel sphincter muscle can synthesize facial expressions that correspond to the actual face image, with arbitrary mouth or eye expressions.

  3. Comparative Study of Human Age Estimation with or without Preclassification of Gender and Facial Expression

    OpenAIRE

    Nguyen, Dat Tien; Cho, So Ra; Shin, Kwang Yong; Bang, Jae Won; Park, Kang Ryoung

    2014-01-01

    Age estimation has many useful applications, such as age-based face classification, finding lost children, surveillance monitoring, and face recognition invariant to age progression. Among many factors affecting age estimation accuracy, gender and facial expression can have negative effects. In our research, the effects of gender and facial expression on age estimation using support vector regression (SVR) method are investigated. Our research is novel in the following four ways. First, the a...

  4. Memory for facial expression is influenced by the background music playing during study

    OpenAIRE

    Woloszyn, Michael R.; Ewert, Laura

    2012-01-01

    The effect of the emotional quality of study-phase background music on subsequent recall for happy and sad facial expressions was investigated. Undergraduates (N = 48) viewed a series of line drawings depicting a happy or sad child in a variety of environments that were each accompanied by happy or sad music. Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were rec...

  5. Project PAVE (Personality And Vision Experimentation): role of personal and interpersonal resilience in the perception of emotional facial expression.

    Science.gov (United States)

    Tanzer, Michal; Shahar, Golan; Avidan, Galia

    2014-01-01

    The aim of the proposed theoretical model is to illuminate personal and interpersonal resilience by drawing from the field of emotional face perception. We suggest that perception/recognition of emotional facial expressions serves as a central link between subjective, self-related processes and the social context. Emotional face perception constitutes a salient social cue underlying interpersonal communication and behavior. Because problems in communication and interpersonal behavior underlie most, if not all, forms of psychopathology, it follows that perception/recognition of emotional facial expressions impacts psychopathology. The ability to accurately interpret one's facial expression is crucial in subsequently deciding on an appropriate course of action. However, perception in general, and of emotional facial expressions in particular, is highly influenced by individuals' personality and the self-concept. Herein we briefly outline well-established theories of personal and interpersonal resilience and link them to the neuro-cognitive basis of face perception. We then describe the findings of our ongoing program of research linking two well-established resilience factors, general self-efficacy (GSE) and perceived social support (PSS), with face perception. We conclude by pointing out avenues for future research focusing on possible genetic markers and patterns of brain connectivity associated with the proposed model. Implications of our integrative model to psychotherapy are discussed.

  6. Project PAVE (Personality And Vision Experimentation): Role of personal and interpersonal resilience in the perception of emotional facial expression.

    Directory of Open Access Journals (Sweden)

    Michal Tanzer

    2014-08-01

    Full Text Available The aim of the proposed theoretical model is to illuminate personal and interpersonal resilience by drawing from the field of emotional face perception. We suggest that perception/recognition of emotional facial expressions serves as a central link between subjective, self-related processes and the social context. Emotional face perception constitutes a salient social cue underlying interpersonal communication and behavior. Because problems in communication and interpersonal behavior underlie most, if not all, forms of psychopathology, it follows that perception/recognition of emotional facial expressions impacts psychopathology. The ability to accurately interpret one's facial expression is crucial in subsequently deciding on an appropriate course of action. However, perception in general, and of emotional facial expressions in particular, is highly influenced by individuals’ personality and the self-concept. Herein we briefly outline well-established theories of personal and interpersonal resilience and link them to the neuro-cognitive basis of face perception. We then describe the findings of our ongoing program of research linking two well-established resilience factors, general self-efficacy (GSE) and perceived social support (PSS), with face perception. We conclude by pointing out avenues for future research focusing on possible genetic markers and patterns of brain connectivity associated with the proposed model. Implications of our integrative model to psychotherapy are discussed.

  7. Impaired recognition of musical emotions and facial expressions following anteromedial temporal lobe excision.

    Science.gov (United States)

    Gosselin, Nathalie; Peretz, Isabelle; Hasboun, Dominique; Baulac, Michel; Samson, Séverine

    2011-10-01

    We have shown that an anteromedial temporal lobe resection can impair the recognition of scary music in a prior study (Gosselin et al., 2005). In other studies (Adolphs et al., 2001; Anderson et al., 2000), similar results have been obtained with fearful facial expressions. These findings suggest that scary music and fearful faces may be processed by common cerebral structures. To assess this possibility, we tested patients with unilateral anteromedial temporal excision and normal controls in two emotional tasks. In the task of identifying musical emotion, stimuli evoked either fear, peacefulness, happiness or sadness. Participants were asked to rate to what extent each stimulus expressed these four emotions on 10-point scales. The task of facial emotion included morphed stimuli whose expression varied from faint to more pronounced and evoked fear, happiness, sadness, surprise, anger or disgust. Participants were requested to select the appropriate label. Most patients were found to be impaired in the recognition of both scary music and fearful faces. Furthermore, the results in both tasks were correlated, suggesting a multimodal representation of fear within the amygdala. However, inspection of individual results showed that recognition of fearful faces can be preserved whereas recognition of scary music can be impaired. Such a dissociation found in two cases suggests that fear recognition in faces and in music does not necessarily involve exactly the same cerebral networks and this hypothesis is discussed in light of the current literature.

  8. Facial expression feature extraction based on tensor analysis

    Institute of Scientific and Technical Information of China (English)

    孙波; 刘永娜; 罗继鸿; 张迪; 张树玲; 陈玖冰

    2016-01-01

    The performance of facial expression recognition depends on the effectiveness of the extracted expression features. The features extracted by existing methods are essentially a fusion of individual facial (identity) features and expression features, yet identity differences between individuals are the main source of interference in expression recognition. Ideally, the identity-related facial features would be separated from the identity-independent expression features during expression recognition. To this end, a third-order face tensor is built, and tensor analysis is used to decompose the facial features and the expression features into a person subspace and an expression subspace, respectively, so that the obtained expression parameters are unrelated to facial identity. This removes the interference of individual facial differences in expression recognition. Finally, an evaluation experiment on the JAFFE expression database verifies the validity of the method.
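
    The core idea can be illustrated with a higher-order SVD: stacking one flattened face per (identity, expression) pair into a third-order tensor and taking the SVD of each mode unfolding yields separate identity and expression subspaces. The sketch below uses toy data and plain NumPy; it is a simplification of the paper's tensor analysis, not its implementation.

```python
# HOSVD-style separation of identity and expression factors on toy data.
import numpy as np

rng = np.random.default_rng(3)
n_id, n_expr, n_pix = 10, 7, 32 * 32
T = rng.random((n_id, n_expr, n_pix))     # one flattened face per (id, expr)

def mode_unfold(tensor, mode):
    """Move `mode` to the front and flatten the rest (mode-n unfolding)."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

# Left singular vectors of each unfolding span the factor subspaces (HOSVD).
U_id, _, _ = np.linalg.svd(mode_unfold(T, 0), full_matrices=False)
U_expr, _, _ = np.linalg.svd(mode_unfold(T, 1), full_matrices=False)

print(U_id.shape, U_expr.shape)           # (10, 10) and (7, 7)
# Projecting faces onto U_expr gives expression coefficients estimated
# separately from the identity factor captured by U_id.
```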

  9. Effects of Spatial Frequencies on Recognition of Facial Identity and Facial Expression

    Institute of Scientific and Technical Information of China (English)

    汪亚珉; 王志贤; 黄雅梅; 蒋静; 丁锦红

    2011-01-01

    Previous research on facial identity and expression recognition suggests that high spatial frequency information may be selectively involved in expression recognition, whereas low spatial frequency information may be selectively involved in identity recognition. To test this hypothesis, three experiments measuring the Garner effect were designed by manipulating spatial frequency. Experiment 1 measured the Garner effect between identity and expression recognition under full-frequency conditions; the results showed significant mutual interference. Experiment 2 measured the interference effect under high-frequency conditions and found that the Garner effect on expression recognition was no longer significant while the Garner effect on identity recognition was essentially unchanged, yielding a dissociation. Experiment 3 measured the Garner effect under low-frequency conditions; the results showed that the Garner effects on both expression and identity recognition remained significant, unaffected by high-frequency filtering. Based on the Garner paradigm, a method of jointly examining the separability and difficulty of face recognition is proposed to analyze the experimental results, leading to the conclusion that high spatial frequency information is an effective scale for separating facial identity from expression information. By changing configural or featural/category information, White (2002) revealed that configural changes mainly interfered with facial identity processing while featural alterations largely reduced facial expression processing. With this technique, Goffaux, Hault, Michel, Vuongo, and Rossion (2005) presented that low spatial frequency played a role in configural change detection, whereas featural change detection depended on high spatial frequency. Based on these two studies, we can draw the conclusion that low spatial frequency plays an important role in facial identity recognition while high spatial frequency plays an important role in facial expression recognition. Can this conclusion really be supported by experiments? To test this hypothesis, we conducted three Garner experiments in the current study. In terms of the hypothesis, high spatial frequency enhances facial expression recognition but not facial identity recognition, while low spatial frequency facilitates facial identity recognition but not facial expression recognition. Dissociation could be found in recognition of facial identity and facial expression. Three Garner experiments were performed on 96

  10. Relation between facial affect recognition and configural face processing in antipsychotic-free schizophrenia.

    Science.gov (United States)

    Fakra, Eric; Jouve, Elisabeth; Guillaume, Fabrice; Azorin, Jean-Michel; Blin, Olivier

    2015-03-01

    Deficit in facial affect recognition is a well-documented impairment in schizophrenia, closely connected to social outcome. This deficit could be related to psychopathology, but also to a broader dysfunction in processing facial information. In addition, patients with schizophrenia inadequately use configural information-a type of processing that relies on spatial relationships between facial features. To date, no study has specifically examined the link between symptoms and misuse of configural information in the deficit in facial affect recognition. Unmedicated schizophrenia patients (n = 30) and matched healthy controls (n = 30) performed a facial affect recognition task and a face inversion task, which tests aptitude to rely on configural information. In patients, regressions were carried out between facial affect recognition, symptom dimensions and inversion effect. Patients, compared with controls, showed a deficit in facial affect recognition and a lower inversion effect. Negative symptoms and lower inversion effect could account for 41.2% of the variance in facial affect recognition. This study confirms the presence of a deficit in facial affect recognition, and also of dysfunctional manipulation in configural information in antipsychotic-free patients. Negative symptoms and poor processing of configural information explained a substantial part of the deficient recognition of facial affect. We speculate that this deficit may be caused by several factors, among which independently stand psychopathology and failure in correctly manipulating configural information. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  11. The effects of different attentional demands in the identification of emotional facial expressions in Alzheimer's disease.

    Science.gov (United States)

    García-Rodríguez, Beatriz; Vincent, Clarissa; Casares-Guillén, Carmen; Ellgring, Heiner; Frank, Ana

    2012-11-01

    Alzheimer's disease (AD) is characterized by the progressive impairment of mental and emotional functions, including the processing of emotional facial expression (EFE). Deficits in decoding EFE are relevant in social contexts in which information from 2 or more sources may be processed simultaneously. To assess the role of contextual stimuli on EFE processing in AD, we analyzed the ability of patients with AD and healthy elderly adults to identify EFE when simultaneously performing another task. Each of the 6 basic EFEs was presented to 15 patients with AD and 35 controls in a dual-task paradigm, that is, in parallel with a visuospatial or a semantic task. Results show that the decoding of EFEs was impaired in patients with AD when they were simultaneously processing additional visuospatial information, yet not when decoding was performed in conjunction with a semantic task. These findings suggest that the capacity to interpret emotional states is impaired in AD.

  12. Facial redness, expression, and masculinity influence perceptions of anger and health.

    Science.gov (United States)

    Young, Steven G; Thorstenson, Christopher A; Pazda, Adam D

    2016-12-29

    Past research has found that skin colouration, particularly facial redness, influences the perceived health and emotional state of target individuals. In the current work, we explore several extensions of this past research. In Experiment 1, we manipulated facial redness incrementally on neutral and angry faces and had participants rate each face for anger and health. Different effects of red emerged: perceived anger increased in a linear manner as facial redness increased, whereas health ratings showed a curvilinear trend, with both extreme paleness and extreme redness rated as less healthy than moderate levels of red. Experiment 2 replicated and extended these findings by manipulating the masculinity of both angry and neutral faces that varied in redness. The results showed that the effect of red on perceived anger and health was moderated by masculine face structure. Collectively, these results show that facial redness has context-dependent effects that vary with facial expression and appearance, and differentially impact ratings of emotional states and health.

  13. Electromyographic responses to emotional facial expressions in 6-7-year-olds: a feasibility study.

    Science.gov (United States)

    Deschamps, P K H; Schutte, I; Kenemans, J L; Matthys, W; Schutter, D J L G

    2012-08-01

    Preliminary studies have demonstrated that school-aged children (average age 9-10 years) show mimicry responses to happy and angry facial expressions. The aim of the present study was to assess the feasibility of using facial electromyography (EMG) as a method to study facial mimicry responses of younger children aged 6-7 years to emotional facial expressions of other children. Facial EMG activity in response to the presentation of dynamic emotional faces was recorded from the corrugator, zygomaticus, frontalis and depressor muscles in sixty-one healthy participants aged 6-7 years. Results showed that the presentation of angry faces was associated with corrugator activation and zygomaticus relaxation, happy faces with an increase in zygomaticus and a decrease in corrugator activation, fearful faces with frontalis activation, and sad faces with a combination of corrugator and frontalis activation. This study demonstrates the feasibility of measuring facial EMG responses to emotional facial expressions in 6-7-year-old children. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Differential expression of wound fibrotic factors between facial and trunk dermal fibroblasts.

    Science.gov (United States)

    Kurita, Masakazu; Okazaki, Mutsumi; Kaminishi-Tanikawa, Akiko; Niikura, Mamoru; Takushima, Akihiko; Harii, Kiyonori

    2012-01-01

    Clinically, wounds on the face tend to heal with less scarring than those on the trunk, but the causes of this difference have not been clarified. Fibroblasts obtained from different parts of the body are known to show different properties. To investigate whether the characteristic properties of facial and trunk wound healing are caused by differences in local fibroblasts, we comparatively analyzed the functional properties of superficial and deep dermal fibroblasts obtained from the facial and trunk skin of seven individuals, with an emphasis on tendency for fibrosis. Proliferation kinetics and mRNA and protein expression of 11 fibrosis-associated factors were investigated. The proliferation kinetics of facial and trunk fibroblasts were identical, but the expression and production levels of profibrotic factors, such as extracellular matrix, transforming growth factor-β1, and connective tissue growth factor mRNA, were lower in facial fibroblasts when compared with trunk fibroblasts, while the expression of antifibrotic factors, such as collagenase, basic fibroblast growth factor, and hepatocyte growth factor, showed no clear trends. The differences in functional properties of facial and trunk dermal fibroblasts were consistent with the clinical tendencies of healing of facial and trunk wounds. Thus, the differences between facial and trunk scarring are at least partly related to the intrinsic nature of the local dermal fibroblasts.

  15. Emotional facial expressions evoke faster orienting responses, but weaker emotional responses at neural and behavioural levels compared to scenes: A simultaneous EEG and facial EMG study.

    Science.gov (United States)

    Mavratzakis, Aimee; Herbert, Cornelia; Walla, Peter

    2016-01-01

    In the current study, electroencephalography (EEG) was recorded simultaneously with facial electromyography (fEMG) to determine whether emotional faces and emotional scenes are processed differently at the neural level. In addition, it was investigated whether these differences can be observed at the behavioural level via spontaneous facial muscle activity. Emotional content of the stimuli did not affect early P1 activity. Emotional faces elicited enhanced amplitudes of the face-sensitive N170 component, while its counterpart, the scene-related N100, was not sensitive to the emotional content of scenes. At 220-280 ms, the early posterior negativity (EPN) was enhanced only slightly for fearful as compared to neutral or happy faces. However, its amplitudes were significantly enhanced during processing of scenes with positive content, particularly over the right hemisphere. Scenes of positive content also elicited enhanced spontaneous zygomatic activity from 500-750 ms onwards, while happy faces elicited no such changes. Contrastingly, both fearful faces and negative scenes elicited enhanced spontaneous corrugator activity at 500-750 ms after stimulus onset. However, relative to baseline, EMG changes occurred earlier for faces (250 ms) than for scenes (500 ms), whereas for scenes the activity changes were more pronounced over the whole viewing period. Taking all effects into account, the data suggest that emotional facial expressions evoke faster attentional orienting, but weaker affective neural activity and emotional behavioural responses, compared to emotional scenes. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  16. Development and validation of an Argentine set of facial expressions of emotion.

    Science.gov (United States)

    Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro

    2017-02-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.

  17. Image coding based on maximum entropy partitioning for identifying improbable intensities related to facial expressions

    Indian Academy of Sciences (India)

    SEBA SUSAN; NANDINI AGGARWAL; SHEFALI CHAND; AYUSH GUPTA

    2016-12-01

    In this paper we investigate information-theoretic image coding techniques that assign longer codes to improbable, imprecise and non-distinct intensities in the image. The variable length coding techniques, when applied to cropped facial images of subjects with different facial expressions, highlight the set of low probability intensities that characterize the facial expression, such as the creases in the forehead, the widening of the eyes and the opening and closing of the mouth. A new coding scheme based on maximum entropy partitioning is proposed in our work, particularly to identify the improbable intensities related to different emotions. The improbable intensities, when used as a mask, decode the facial expression correctly, providing an effective platform for future emotion categorization experiments.
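
    As a rough illustration of the idea, the snippet below assigns each grey level a code length of -log2(p) from the image histogram and masks pixels whose intensity code is longer than the image entropy; thresholding at the entropy is a simplifying assumption standing in for the paper's maximum entropy partitioning.

```python
# Flag "improbable" intensities, i.e. those assigned long variable-length codes.
import numpy as np

def improbable_intensity_mask(gray_image):
    """Return a boolean mask of pixels whose intensity gets a long code."""
    hist = np.bincount(gray_image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    with np.errstate(divide="ignore"):
        code_len = -np.log2(p)               # long codes = rare intensities
    occurring = p > 0
    # Threshold at the image entropy: keep intensities costlier than average.
    entropy = -(p[occurring] * np.log2(p[occurring])).sum()
    improbable = occurring & (code_len > entropy)
    return improbable[gray_image]

rng = np.random.default_rng(4)
face = rng.integers(0, 256, size=(48, 48), dtype=np.uint8)  # stand-in face crop
print(improbable_intensity_mask(face).mean())  # fraction of flagged pixels
```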

  18. A Facial Expression Classification System Integrating Canny, Principal Component Analysis and Artificial Neural Network

    CERN Document Server

    Thai, Le Hoang; Hai, Tran Son

    2011-01-01

    Facial Expression Classification is an interesting research problem in recent years. There are a lot of methods to solve this problem. In this research, we propose a novel approach using Canny, Principal Component Analysis (PCA) and Artificial Neural Network. Firstly, in the preprocessing phase, we use Canny for local region detection of facial images. Then each local region's features are represented based on Principal Component Analysis (PCA). Finally, an Artificial Neural Network (ANN) is applied for Facial Expression Classification. We apply our proposed method (Canny_PCA_ANN) for recognition of six basic facial expressions on the JAFFE database consisting of 213 images posed by 10 Japanese female models. The experimental result shows the feasibility of our proposed method.
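
    A compact stand-in for the Canny_PCA_ANN pipeline using scikit-image and scikit-learn is sketched below; toy random data replaces JAFFE, and whole-image edge maps replace the paper's local region detection.

```python
# Canny edges -> PCA features -> neural network classifier, on toy data.
import numpy as np
from skimage.feature import canny
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
faces = rng.random((40, 48, 48))              # stand-in grayscale faces
y = rng.integers(0, 6, size=40)               # six basic expressions

# Edge maps as the region representation, flattened per image.
edges = np.array([canny(f, sigma=2.0).ravel() for f in faces], dtype=float)

model = make_pipeline(PCA(n_components=20),
                      MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))
model.fit(edges, y)
print(model.score(edges, y))                  # training accuracy only
```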

  19. Dimensional Information-Theoretic Measurement of Facial Emotion Expressions in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Jihun Hamm

    2014-01-01

    Full Text Available Altered facial expressions of emotions are characteristic impairments in schizophrenia. Ratings of affect have traditionally been limited to clinical rating scales and facial muscle movement analysis, which require extensive training and have limitations based on methodology and ecological validity. To improve reliable assessment of dynamic facial expression changes, we have developed automated measurements of facial emotion expressions based on information-theoretic measures of expressivity: the ambiguity and distinctiveness of facial expressions. These measures were examined in matched groups of persons with schizophrenia (n=28) and healthy controls (n=26) who underwent video acquisition to assess expressivity of basic emotions (happiness, sadness, anger, fear, and disgust) in evoked conditions. Persons with schizophrenia scored higher on ambiguity, the measure of conditional entropy within the expression of a single emotion, and they scored lower on distinctiveness, the measure of mutual information across expressions of different emotions. The automated measures compared favorably with observer-based ratings. This method can be applied for delineating dynamic emotional expressivity in healthy and clinical populations.
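
    The two measures can be illustrated on a toy joint table of intended versus displayed expressions: ambiguity as the conditional entropy H(displayed | intended) and distinctiveness as the mutual information I(intended; displayed). The table and names below are assumptions for illustration, not the study's video-derived data.

```python
# Entropy-based ambiguity and distinctiveness from a joint probability table.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def ambiguity_and_distinctiveness(joint):
    joint = joint / joint.sum()              # rows: intended, cols: displayed
    intended, displayed = joint.sum(1), joint.sum(0)
    ambiguity = entropy(joint.ravel()) - entropy(intended)        # H(D|I)
    distinctiveness = (entropy(intended) + entropy(displayed)
                       - entropy(joint.ravel()))                  # I(I;D)
    return ambiguity, distinctiveness

# Mostly-diagonal table: displayed expressions track the intended emotion well.
counts = np.array([[8, 1, 1],
                   [1, 8, 1],
                   [1, 1, 8]], dtype=float)
print(ambiguity_and_distinctiveness(counts))
```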

  20. Updating schematic emotional facial expressions in working memory: Response bias and sensitivity.

    Science.gov (United States)

    Tamm, Gerly; Kreegipuu, Kairi; Harro, Jaanus; Cowan, Nelson

    2017-01-01

    It is unclear whether positive, negative, or neutral emotional expressions have an advantage in short-term recognition. Moreover, it is unclear from previous studies of working memory for emotional faces whether the effects of emotions comprise response bias or sensitivity. The aim of this study was to compare how schematic emotional expressions (sad, angry, scheming, happy, and neutral) are discriminated and recognized in an updating task (2-back recognition) in a representative birth cohort sample of young adults. Schematic facial expressions allow control of identity processing, which is separate from expression processing, and have been used extensively in attention research but, until now, not much in working memory research. We found that expressions with a U-curved mouth (i.e., upwardly curved), namely happy and scheming expressions, favoured a bias towards recognition (i.e., towards indicating that the probe and the stimulus in working memory are the same). Other effects of emotional expression were considerably smaller (1-2% of the variance explained) compared to the large proportion of variance that was explained by the physical similarity of the items being compared. We suggest that the nature of the stimuli plays a role in this. The present application of signal detection methodology with emotional, schematic faces in a working memory procedure requiring fast comparisons helps to resolve important contradictions that have emerged in the emotional perception literature. Copyright © 2016 Elsevier B.V. All rights reserved.
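
    The signal detection quantities referred to above can be computed from hit and false-alarm counts as follows; the log-linear correction for extreme proportions is a common convention assumed here, not necessarily the one used in the study.

```python
# Sensitivity d' and response bias (criterion c) from raw response counts.
from scipy.stats import norm

def dprime_and_bias(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    zh, zf = norm.ppf(h), norm.ppf(f)
    return zh - zf, -0.5 * (zh + zf)   # d', criterion c

# Example: negative c indicates a liberal bias towards answering "same".
print(dprime_and_bias(45, 5, 20, 30))
```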

  1. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults.

    Science.gov (United States)

    Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar

    2016-03-01

    Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices, and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincidence rather than a synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial

  2. Interpreting Text Messages with Graphic Facial Expression by Deaf and Hearing People

    Directory of Open Access Journals (Sweden)

    Chihiro Saegusa

    2015-04-01

    Full Text Available In interpreting verbal messages, humans use not only verbal information but also non-verbal signals such as facial expression. For example, when a person says yes with a troubled face, what he or she really means appears ambiguous. In the present study, we examined how deaf and hearing people differ in perceiving real meanings in texts accompanied by representations of facial expression. Deaf and hearing participants were asked to imagine that the face presented on the computer monitor was asked a question by another person (e.g., do you like her?). They observed either a realistic or a schematic face with a different magnitude of positive or negative expression on a computer monitor. A balloon that contained either a positive or negative text response to the question appeared at the same time as the face. Then, participants rated how much the individual on the monitor really meant it (i.e., perceived earnestness), using a 7-point scale. Results showed that the facial expression significantly modulated the perceived earnestness. The influence of positive expression on negative text responses was relatively weaker than that of negative expression on positive responses (i.e., no tended to mean no irrespective of facial expression) for both participant groups. However, this asymmetrical effect was stronger in the hearing group. These results suggest that the contribution of facial expression in perceiving real meanings from text messages is qualitatively similar but quantitatively different between deaf and hearing people.

  3. An optimized ERP brain-computer interface based on facial expression changes

    Science.gov (United States)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon is commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presented images) adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced by the facial expression change pattern.
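
    The information transfer rate used in such comparisons is conventionally the Wolpaw ITR, which converts the number of classes, classification accuracy, and selection rate into bits per minute. A minimal sketch (the example numbers are hypothetical):

    import math

    def wolpaw_itr(n_classes, accuracy, selections_per_min):
        # Bits per selection under the standard Wolpaw assumptions.
        p, n = accuracy, n_classes
        if p <= 1.0 / n:
            return 0.0  # at or below chance, treat as no information transferred
        bits = math.log2(n) + p * math.log2(p)
        if p < 1.0:
            bits += (1 - p) * math.log2((1 - p) / (n - 1))
        return bits * selections_per_min

    # e.g. a hypothetical 36-class speller at 90% accuracy and 4 selections/min
    print(wolpaw_itr(n_classes=36, accuracy=0.9, selections_per_min=4))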

  4. Differential gene expression in the axotomized facial motor nucleus of presymptomatic SOD1 mice.

    Science.gov (United States)

    Mesnard, Nichole A; Sanders, Virginia M; Jones, Kathryn J

    2011-12-01

    Previously, we compared molecular profiles of one population of wild-type (WT) mouse facial motoneurons (FMNs) surviving with FMNs undergoing significant cell death after axotomy. Regardless of their ultimate fate, injured FMNs respond with a vigorous pro-survival/regenerative molecular response. In contrast, the neuropil surrounding the two different injured FMN populations contained distinct molecular differences that support a causative role for glial and/or immune-derived molecules in directing contrasting responses of the same cell types to the same injury. In the current investigation, we utilized the facial nerve axotomy model and a presymptomatic amyotrophic lateral sclerosis (ALS) mouse (SOD1) model to experimentally mimic the axonal die-back process observed in ALS pathogenesis without the confounding variable of disease onset. Presymptomatic SOD1 mice had a significant decrease in FMN survival compared with WT, which suggests an increased susceptibility to axotomy. Laser microdissection was used to accurately collect uninjured and axotomized facial motor nuclei of WT and presymptomatic SOD1 mice for mRNA expression pattern analyses of pro-survival/pro-regeneration genes, neuropil-specific genes, and genes involved in or responsive to the interaction of FMNs and non-neuronal cells. Axotomized presymptomatic SOD1 FMNs displayed a dynamic pro-survival/regenerative response to axotomy, similar to WT, despite increased cell death. However, significant differences were revealed when the axotomy-induced gene expression response of presymptomatic SOD1 neuropil was compared with WT. We propose that the increased susceptibility of presymptomatic SOD1 FMNs to axotomy-induced cell death and, by extrapolation, disease progression, is not intrinsic to the motoneuron, but rather involves a dysregulated response by non-neuronal cells in the surrounding neuropil.

  5. Are you looking at me? The influence of facial orientation and cultural focus salience on the perception of emotion expressions

    Directory of Open Access Journals (Sweden)

    Konstantinos Kafetsios

    2015-12-01

    Full Text Available We examined the influence of cultural orientation salience on the emotion perception process in a contextualized emotion recognition task. We primed individual and collective focus in participants who later rated the emotion expressions of a central character (target showing a happy, sad, angry, or neutral facial expression in a group setting. Facial orientation of a group of four other persons towards the target person was manipulated so that they faced either “inwards,” towards the central character, or “outwards,” towards the observer. Priming a collectivistic mind-set resulted in the perception of more intense emotions in the “inwards” facial orientation condition when the target showed angry, happy, or neutral expressions. Individualist focus influenced emotion perception in the “outwards” facial orientation condition in few cases. The findings highlight the significance of perceivers’ cultural orientation and social elements of the situation for emotion perception in line with the “culture as situated cognition” model.

  6. Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism?

    Science.gov (United States)

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J; Kilner, James

    2017-01-01

    Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants in order to explore the relationship between these two abilities and any differences between the two groups of participants. Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. For emotion recognition, PD patients scored lower than HC on the Ekman total score and on the sub-scores for happiness, fear, anger and sadness. In the emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger. There was a significant positive correlation between emotion facial recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). PD patients showed difficulties in recognizing emotional facial

  7. Facial Emotion Recognition and Expression in Parkinson’s Disease: An Emotional Mirror Mechanism?

    Science.gov (United States)

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J.; Kilner, James

    2017-01-01

    Background and aim Parkinson’s disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants in order to explore the relationship between these two abilities and any differences between the two groups of participants. Methods Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (Emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (Emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. Results For emotion recognition, PD patients scored lower than HC on the Ekman total score and on the sub-scores for happiness, fear, anger and sadness. In the emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger. There was a significant positive correlation between emotion facial recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). Conclusions PD patients

  8. Speed, amplitude, and asymmetry of lip movement in voluntary puckering and blowing expressions: implications for facial assessment.

    Science.gov (United States)

    Schmidt, Karen L; VanSwearingen, Jessie M; Levenstein, Rachel M

    2005-07-01

    The context of voluntary movement during facial assessment has significant effects on the activity of facial muscles. Using automated facial analysis, we found that healthy subjects instructed to blow produced lip movements that were longer in duration and larger in amplitude than when subjects were instructed to pucker. We also determined that lip movement for puckering expressions was more asymmetric than lip movement in blowing. Differences in characteristics of lip movement were noted using facial movement analysis and were associated with the context of the movement. The impact of the instructions given for voluntary movement on the characteristics of facial movement might have important implications for assessing the capabilities and deficits of movement control in individuals with facial movement disorders. If results generalize to the clinical context, assessment of generally focused voluntary facial expressions might inadequately demonstrate the full range of facial movement capability of an individual patient.

  9. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance of facial feature points exists irrespective of similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, shape-based recognition is performed by using the ratios between the facial feature points based on the facial action coding system. Second, an SVM, which is trained to recognize same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous studies and other fusion methods.
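
    The score-level fusion step described above can be sketched as follows: an SVM is trained on pairs of shape-based and appearance-based matching scores to make the same/different expression decision. The synthetic scores, class separation, and kernel choice here are stand-ins for illustration, not the paper's setup:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Each row holds [shape_score, appearance_score]; label 1 = same expression
    scores = rng.normal(size=(200, 2)) + np.repeat([[0.0], [1.0]], 100, axis=0)
    labels = np.repeat([0, 1], 100)

    fusion_svm = SVC(kernel="rbf").fit(scores, labels)
    print(fusion_svm.predict([[0.8, 1.1]]))  # fused same/different decision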

  10. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions.

    Science.gov (United States)

    Kujala, Miiamaaria V; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people's perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans, but not dogs, higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expressions in a similar manner, and that the perception of both species is influenced by psychological factors of the evaluators. Empathy, especially, affects both the speed and intensity of rating dogs' emotional facial expressions.

  11. Does Facial Expressivity Count? How Typically Developing Children Respond Initially to Children with Autism

    Science.gov (United States)

    Stagg, Steven D.; Slavny, Rachel; Hand, Charlotte; Cardoso, Alice; Smith, Pamela

    2014-01-01

    Research investigating expressivity in children with autism spectrum disorder has reported flat affect or bizarre facial expressivity within this population; however, the impact expressivity may have on first impression formation has received little research input. We examined how videos of children with autism spectrum disorder were rated for…

  12. Interpretive bias of ambiguous facial expressions in older adults with depressive symptoms.

    Science.gov (United States)

    Dai, Bibing; Li, Juan; Chen, Tingji; Li, Qi

    2015-03-01

    Cognitive theories of emotional disorders indicate that biases in cognitive processes, such as attention, memory, and interpretation, are common factors that indicate vulnerability to these disorders, although their form varies according to the type of disorder. However, most studies have focused on adolescence and adulthood. It is still uncertain whether cognitive biases are risk factors for late-life depression. The present study sought to explore the role of interpretive bias in older adults with depressive symptoms and whether this effect is independent of basic cognitive abilities. Therefore, 18 older adults with depressive symptoms and 21 healthy controls were compared using an ambiguous facial expression identification task, the Mini-Mental State Examination, the Trail Making Test A and B, and a Word Fluency Test. Findings revealed that the depressive group was more likely than the healthy controls to identify ambiguous happy-sad facial expressions as indicative of sadness, but the two groups showed no significant differences in the cognitive test scores. These results suggest that interpretive bias indicates vulnerability to late-life depression, but basic cognitive abilities may have no influence in this context.

  13. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions

    Directory of Open Access Journals (Sweden)

    Sabrina Boll

    2014-07-01

    Full Text Available Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflects individual differences in attending to diagnostic features of facial expressions. For this purpose, visual exploration of emotional facial expressions was compared between a low (n=39) and a high (n=40) 5-HTT expressing group of healthy human volunteers in an eye tracking paradigm. Emotional faces were presented while manipulating the initial fixation such that saccadic changes towards the eyes and towards the mouth could be identified. We found that the low versus the high 5-HTT group demonstrated greater accuracy with regard to emotion classifications, particularly when faces were presented for a longer duration. No group differences in gaze orientation towards diagnostic facial features could be observed. However, participants in the low 5-HTT group exhibited more and faster fixation changes for certain emotions when faces were presented for a longer duration, and overall face fixation times were reduced for this genotype group. These results suggest that the 5-HTT gene influences social perception by modulating the general vigilance to social cues rather than selectively affecting the pre-attentive detection of diagnostic facial features.

  14. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Vidakis, Nikolaos; Kavallakis, George; Triantafyllidis, Georgios

    2017-01-01

    This paper presents a scheme for creating an emotion index of cover song music video clips by recognizing and classifying the facial expressions of the artist in the video. More specifically, it fuses effective and robust expression recognition algorithms with a neural network system that uses features extracted by the SIFT algorithm. We also argue for this fusion of different expression recognition algorithms because of the way emotions are linked to facial expressions in music video clips.

  15. 3D facial expression recognition based on histograms of surface differential quantities

    KAUST Repository

    Li, Huibin

    2011-01-01

    3D face models accurately capture facial surfaces, making precise description of facial activities possible. In this paper, we present a novel mesh-based method for 3D facial expression recognition using two local shape descriptors. To characterize the shape information of the local neighborhood of facial landmarks, we calculate the weighted statistical distributions of surface differential quantities, including the histogram of mesh gradient (HoG) and the histogram of shape index (HoS). A curvature estimation method based on normal cycle theory is employed on the 3D face models, along with the common cubic-fitting curvature estimation method for comparison. Based on the fact that different expressions involve different local shape deformations, an SVM classifier with both linear and RBF kernels outperforms state-of-the-art results on a subset of the BU-3DFE database with the same experimental setting. © 2011 Springer-Verlag.
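
    To illustrate the HoS descriptor idea, the sketch below bins per-vertex shape index values, derived from principal curvatures, into a weighted, normalized histogram. The [0, 1] normalization of the shape index and the bin count are common choices assumed here; the curvature estimation itself (normal cycle theory or cubic fitting, as in the paper) is taken as given:

    import numpy as np

    def shape_index(k1, k2, eps=1e-12):
        # One common normalization of the shape index into [0, 1]; requires k1 >= k2.
        return 0.5 + np.arctan((k1 + k2) / (k1 - k2 + eps)) / np.pi

    def hos_descriptor(k1, k2, weights, n_bins=16):
        si = shape_index(k1, k2)
        hist, _ = np.histogram(si, bins=n_bins, range=(0.0, 1.0), weights=weights)
        return hist / max(hist.sum(), 1e-12)  # normalized local descriptor

    # Synthetic curvatures for vertices around one facial landmark
    rng = np.random.default_rng(1)
    k2 = rng.normal(size=100)
    k1 = k2 + np.abs(rng.normal(size=100))
    print(hos_descriptor(k1, k2, weights=np.ones(100)))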

  16. Psychopathic traits in adolescents and recognition of emotion in facial expressions

    Directory of Open Access Journals (Sweden)

    Silvio José Lemos Vasconcellos

    2014-12-01

    Full Text Available Recent studies have investigated the ability of adult psychopaths and children with psychopathic traits to identify specific facial expressions of emotion. Conclusive results have not yet been found regarding whether psychopathic traits are associated with a specific deficit in identifying negative emotions such as fear and sadness. This study compared 20 adolescents with psychopathic traits and 21 adolescents without these traits in terms of their ability to recognize facial expressions of emotion, using facial stimuli presented for 200 ms, 500 ms, and 1 s. Analyses indicated significant differences between the two groups' performance only for fear, and only at the 200 ms exposure. This finding is consistent with findings from other studies in the field and suggests that controlling the duration of exposure to affective stimuli in future studies may help to clarify the mechanisms underlying the facial affect recognition deficits of individuals with psychopathic traits.

  17. Analysis and evaluation of facial expression and perceived age for designing automotive frontal views

    Science.gov (United States)

    Fujiwara, Takayuki; Kawasumi, Mikiko; Koshimizu, Hiroyasu

    2007-01-01

    We propose a method for quantifying the design of automotive frontal views based on research on the human visual impression of facial expressions. We evaluated automotive frontal faces by using facial-impression words and perceived age. We then verified experimentally how effectively line drawing images and coche-PICASSO images could serve as image stimuli. As a result, some of the facial words were strongly correlated with both the facial expressions and the perceived age in the line drawing images. It was also found that the perceived age of the coche-PICASSO images was always younger than that of the line drawing images.

  18. Interactive effects between gaze direction and facial expression on attentional resources deployment: the task instruction and context matter.

    Science.gov (United States)

    Ricciardelli, Paola; Lugli, Luisa; Pellicano, Antonello; Iani, Cristina; Nicoletti, Roberto

    2016-02-22

    In three experiments, we tested whether the amount of attentional resources needed to process a face displaying a neutral, angry, or fearful facial expression with direct or averted gaze depends on task instructions and face presentation context. To this end, we used a Rapid Serial Visual Presentation paradigm in which participants in Experiment 1 were first explicitly asked to discriminate whether the expression of a target face (T1) with direct or averted gaze was angry or neutral, and then to judge the orientation of a landscape (T2). Experiment 2 was identical to Experiment 1 except that participants had to discriminate the gender of the T1 face, and fearful faces were also presented randomly inter-mixed within each block of trials. Experiment 3 differed from Experiment 2 only in that angry and fearful faces were never presented within the same block. The findings indicated that the presence of the attentional blink (AB) for face stimuli depends on specific combinations of gaze direction and emotional facial expression, and crucially revealed that contextual factors (e.g., explicit instruction to process the facial expression and the presence of other emotional faces) can modify and even reverse the AB, suggesting a flexible and more contextualized deployment of attentional resources in face processing.

  19. Laterality of Facial Expressions of Emotion: Universal and Culture-Specific Influences

    Directory of Open Access Journals (Sweden)

    Manas K. Mandal

    2004-01-01

    Full Text Available Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is more uninhibited, and displays culture-specific emotional norms. The right side of the face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals.

  20. Laterality of facial expressions of emotion: Universal and culture-specific influences.

    Science.gov (United States)

    Mandal, Manas K; Ambady, Nalini

    2004-01-01

    Recent research indicates that (a) the perception and expression of facial emotion are lateralized to a great extent in the right hemisphere, and (b) whereas facial expressions of emotion embody universal signals, culture-specific learning moderates the expression and interpretation of these emotions. In the present article, we review the literature on laterality and universality, and propose that, although some components of facial expressions of emotion are governed biologically, others are culturally influenced. We suggest that the left side of the face is more expressive of emotions, is more uninhibited, and displays culture-specific emotional norms. The right side of the face, on the other hand, is less susceptible to cultural display norms and exhibits more universal emotional signals. Copyright 2004 IOS Press

  1. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions

    OpenAIRE

    2014-01-01

    Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflec...

  2. Facial expressions, their communicatory functions and neuro-cognitive substrates.

    Science.gov (United States)

    Blair, R J R

    2003-03-29

    Human emotional expressions serve a crucial communicatory role allowing the rapid transmission of valence information from one individual to another. This paper will review the literature on the neural mechanisms necessary for this communication: both the mechanisms involved in the production of emotional expressions and those involved in the interpretation of the emotional expressions of others. Finally, reference to the neuro-psychiatric disorders of autism, psychopathy and acquired sociopathy will be made. In these conditions, the appropriate processing of emotional expressions is impaired. In autism, it is argued that the basic response to emotional expressions remains intact but that there is impaired ability to represent the referent of the individual displaying the emotion. In psychopathy, the response to fearful and sad expressions is attenuated and this interferes with socialization resulting in an individual who fails to learn to avoid actions that result in harm to others. In acquired sociopathy, the response to angry expressions in particular is attenuated resulting in reduced regulation of social behaviour.

  3. Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns

    Directory of Open Access Journals (Sweden)

    Faisal Ahmed

    2013-01-01

    Full Text Available Recognition of human expression from facial images is an interesting research area, which has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that can perform consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (GLTP), a discriminative local texture feature for representing facial expression. The proposed GLTP operator encodes the local texture of an image by computing the gradient magnitudes of the local neighborhood and quantizing those values into three discrimination levels. The location and occurrence information of the resulting micropatterns is then used as the face feature descriptor. The performance of the proposed method has been evaluated for the person-independent face expression recognition task. Experiments with prototypic expression images from the Cohn-Kanade (CK) face expression database validate that the GLTP feature descriptor can effectively encode facial texture and thus achieves better recognition performance than some well-known appearance-based facial features.
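
    A minimal sketch of the general gradient local ternary pattern idea follows: gradient magnitudes are computed with a Sobel operator, and each pixel's 8-neighbourhood is quantized against the centre value into three levels using a tolerance t. The neighbourhood layout and threshold are illustrative assumptions rather than the paper's exact operator:

    import numpy as np
    from scipy import ndimage

    def gltp_codes(image, t=10.0):
        gx = ndimage.sobel(image.astype(float), axis=1)
        gy = ndimage.sobel(image.astype(float), axis=0)
        mag = np.hypot(gx, gy)  # gradient magnitude map
        h, w = mag.shape
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        codes = np.zeros((h - 2, w - 2, 8), dtype=np.int8)
        centre = mag[1:h-1, 1:w-1]
        for i, (dy, dx) in enumerate(offsets):
            neigh = mag[1+dy:h-1+dy, 1+dx:w-1+dx]
            codes[..., i] = np.where(neigh > centre + t, 1,
                                     np.where(neigh < centre - t, -1, 0))
        return codes  # histograms of these micropatterns form the descriptor

    print(gltp_codes(np.random.default_rng(2).integers(0, 255, (32, 32))).shape)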

  4. Internal representations reveal cultural diversity in expectations of facial expressions of emotion.

    Science.gov (United States)

    Jack, Rachael E; Caldara, Roberto; Schyns, Philippe G

    2012-02-01

    Facial expressions have long been considered the "universal language of emotion." Yet consistent cultural differences in the recognition of facial expressions contradict such notions (e.g., R. E. Jack, C. Blais, C. Scheepers, P. G. Schyns, & R. Caldara, 2009). Rather, culture, as an intricate system of social concepts and beliefs, could generate different expectations (i.e., internal representations) of facial expression signals. To investigate, we used a powerful psychophysical technique (reverse correlation) to estimate the observer-specific internal representations of the 6 basic facial expressions of emotion (i.e., happy, surprise, fear, disgust, anger, and sad) in two culturally distinct groups (i.e., Western Caucasian [WC] and East Asian [EA]). Using complementary statistical image analyses, cultural specificity was directly revealed in these representations. Specifically, whereas WC internal representations predominantly featured the eyebrows and mouth, EA internal representations showed a preference for expressive information in the eye region. Closer inspection of the EA observer preference revealed a surprising feature: changes of gaze direction, shown primarily among the EA group. For the first time, it is revealed directly that culture can finely shape the internal representations of common facial expressions of emotion, challenging notions of a biologically hardwired "universal language of emotion."

  5. P2-31: In-Group Advantage in Negative Facial Expressions

    Directory of Open Access Journals (Sweden)

    Li-Chuan Hsu

    2012-10-01

    Full Text Available The perception of facial expressions is suggested to be universal. However, studies have shown an in-group advantage (IGA) in the recognition of facial expressions (e.g., Matsumoto, 1989, 1992), whereby people understand emotions more accurately when these emotions are expressed by members of their own cultural group. A balanced design was used to investigate whether this IGA appears in Western people as well as in Asian (Taiwanese) people. An emotion identification task asked participants to identify positive (happy) and negative (sadness, fear, and anger) faces among Eastern and Western faces. We used Eastern faces from the Taiwanese Facial Expression Image Database (Chen, 2007) and Western faces from Ekman & Friesen (1979). Both reaction times and accuracy of performance were measured. Results showed that although all participants identified positive and negative faces accurately, Asian participants responded significantly faster to negative Eastern faces than to negative Western faces. A similar IGA effect was also shown in Western participants. However, no such culture difference was found for positive faces. The results reveal that the in-group advantage in the perception of facial expressions is specific to negative emotions and question the universality of perceiving facial expressions.

  6. Social anxiety and trustworthiness judgments of dynamic facial expressions of emotion.

    Science.gov (United States)

    Gutiérrez-García, Aida; Calvo, Manuel G

    2016-09-01

    Perception of trustworthiness in other people is essential for successful social interaction. Facial expressions, as conveyers of feelings and intentions, are an important source of this information. We investigated how social anxiety is related to biases in the judgment of faces towards un/trustworthiness depending on the type of emotional expression and its intensity. Undergraduates with clinical levels of social anxiety and low-anxiety controls were presented with 1-s video-clips displaying facial happiness, anger, fear, sadness, disgust, surprise, or neutrality, at various levels of emotional intensity. Participants judged how trustworthy the expressers looked. Social anxiety was associated with enhanced distrust towards angry and disgusted expressions, and this occurred at lower intensity thresholds, relative to non-anxious controls. There was no effect for other negative expressions (sadness and fear), basically ambiguous expressions (surprise and neutral), or happy faces. The social anxiety and the control groups consisted of more females than males, although this gender disproportion was the same in both groups. Also, the expressive speed rate differed across the intensity conditions, although such differences were equated for all the expressions and for both groups. Individuals with high social anxiety overestimate perceived social danger even from subtle facial cues, thus exhibiting a threat-related interpretative bias in the form of untrustworthiness judgments. Such a bias is, nevertheless, limited to facial expressions conveying direct threat such as hostility and rejection. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Project PAVE (Personality And Vision Experimentation): role of personal and interpersonal resilience in the perception of emotional facial expression

    OpenAIRE

    Michal Tanzer; Golan Shahar; Galia Avidan

    2014-01-01

    The aim of the proposed theoretical model is to illuminate personal and interpersonal resilience by drawing from the field of emotional face perception. We suggest that perception/recognition of emotional facial expressions serves as a central link between subjective, self-related processes and the social context. Emotional face perception constitutes a salient social cue underlying interpersonal communication and behavior. Because problems in communication and interpersonal behavior underlie...

  8. Experience-based human perception of facial expressions in Barbary macaques (Macaca sylvanus)

    Directory of Open Access Journals (Sweden)

    Laëtitia Maréchal

    2017-06-01

    Full Text Available Background Facial expressions convey key cues of human emotions, and may also be important for interspecies interactions. The universality hypothesis suggests that six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) should be expressed by similar facial expressions in close phylogenetic species such as humans and nonhuman primates. However, some facial expressions have been shown to differ in meaning between humans and nonhuman primates like macaques. This ambiguity in signalling emotion can lead to an increased risk of aggression and injuries for both humans and animals. This raises serious concerns for activities such as wildlife tourism where humans closely interact with wild animals. Understanding what factors (i.e., experience and type of emotion) affect the ability to recognise the emotional state of nonhuman primates, based on their facial expressions, can enable us to test the validity of the universality hypothesis, as well as reduce the risk of aggression and potential injuries in wildlife tourism. Methods The present study investigated whether different levels of experience of Barbary macaques, Macaca sylvanus, affect the ability to correctly assess different facial expressions related to aggressive, distressed, friendly or neutral states, using an online questionnaire. Participants’ level of experience was defined as either: (1) naïve: never worked with nonhuman primates and never or rarely encountered live Barbary macaques; (2) exposed: shown pictures of the different Barbary macaques’ facial expressions along with the description and the corresponding emotion prior to undertaking the questionnaire; (3) expert: worked with Barbary macaques for at least two months. Results Experience with Barbary macaques was associated with better performance in judging their emotional state. Simple exposure to pictures of macaques’ facial expressions improved the ability of inexperienced participants to better discriminate neutral

  9. Experience-based human perception of facial expressions in Barbary macaques (Macaca sylvanus).

    Science.gov (United States)

    Maréchal, Laëtitia; Levy, Xandria; Meints, Kerstin; Majolo, Bonaventura

    2017-01-01

    Facial expressions convey key cues of human emotions, and may also be important for interspecies interactions. The universality hypothesis suggests that six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) should be expressed by similar facial expressions in close phylogenetic species such as humans and nonhuman primates. However, some facial expressions have been shown to differ in meaning between humans and nonhuman primates like macaques. This ambiguity in signalling emotion can lead to an increased risk of aggression and injuries for both humans and animals. This raises serious concerns for activities such as wildlife tourism where humans closely interact with wild animals. Understanding what factors (i.e., experience and type of emotion) affect ability to recognise emotional state of nonhuman primates, based on their facial expressions, can enable us to test the validity of the universality hypothesis, as well as reduce the risk of aggression and potential injuries in wildlife tourism. The present study investigated whether different levels of experience of Barbary macaques, Macaca sylvanus, affect the ability to correctly assess different facial expressions related to aggressive, distressed, friendly or neutral states, using an online questionnaire. Participants' level of experience was defined as either: (1) naïve: never worked with nonhuman primates and never or rarely encountered live Barbary macaques; (2) exposed: shown pictures of the different Barbary macaques' facial expressions along with the description and the corresponding emotion prior to undertaking the questionnaire; (3) expert: worked with Barbary macaques for at least two months. Experience with Barbary macaques was associated with better performance in judging their emotional state. Simple exposure to pictures of macaques' facial expressions improved the ability of inexperienced participants to better discriminate neutral and distressed faces, and a trend was found for

  10. Feature Extraction for Facial Expression Recognition based on Hybrid Face Regions

    Directory of Open Access Journals (Sweden)

    LAJEVARDI, S.M.

    2009-10-01

    Full Text Available Facial expression recognition has numerous applications, including psychological research, improved human-computer interaction, and sign language translation. A novel facial expression recognition system based on hybrid face regions (HFR) is investigated. The expression recognition system is fully automatic, and consists of the following modules: face detection, face region detection, feature extraction, optimal feature selection, and classification. The features are extracted from both the whole face image and face regions (eyes and mouth) using log-Gabor filters. Then, the most discriminative features are selected based on a mutual information criterion. The system can automatically recognize six expressions: anger, disgust, fear, happiness, sadness and surprise. The selected features are classified using a Naive Bayesian (NB) classifier. The proposed method has been extensively assessed using the Cohn-Kanade database and the JAFFE database. The experiments have highlighted the efficiency of the proposed HFR method in enhancing the classification rate.
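
    The selection and classification stages of such a pipeline can be sketched as below, assuming the log-Gabor magnitude features have already been computed; the feature dimensionality, the value of k, and the synthetic data are illustrative assumptions:

    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.naive_bayes import GaussianNB
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(3)
    X = rng.normal(size=(120, 500))   # 120 faces x 500 log-Gabor features
    y = rng.integers(0, 6, size=120)  # six expression classes

    model = make_pipeline(
        SelectKBest(mutual_info_classif, k=50),  # keep the most informative features
        GaussianNB(),
    )
    model.fit(X, y)
    print(model.predict(X[:3]))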

  11. In Your Face: Startle to Emotional Facial Expressions Depends on Face Direction

    Science.gov (United States)

    Michalsen, Henriette; Øvervoll, Morten

    2017-01-01

    Although faces are often included in the broad category of emotional visual stimuli, the affective impact of different facial expressions is not well documented. The present experiment investigated startle electromyographic responses to pictures of neutral, happy, angry, and fearful facial expressions, with a frontal face direction (directed) and at a 45° angle to the left (averted). Results showed that emotional facial expressions interact with face direction to produce startle potentiation: Greater responses were found for angry expressions, compared with fear and neutrality, with directed faces. When faces were averted, fear and neutrality produced larger responses compared with anger and happiness. These results are in line with the notion that startle is potentiated to stimuli signaling threat. That is, a forward directed angry face may signal a threat toward the observer, and a fearful face directed to the side may signal a possible threat in the environment.

  12. A Simultaneous Facial Motion Tracking and Expression Recognition Algorithm

    Institute of Scientific and Technical Information of China (English)

    於俊; 汪增福; 李睿

    2015-01-01

    In view of facial expression recognition from monocular video with a dynamic background, a real-time system was proposed based on an algorithm in which facial motion is tracked and facial expression is recognized simultaneously. First, an online appearance model and a cylindrical head model are combined to track 3D facial motion from video within a particle filtering framework; second, static knowledge of facial expression is extracted on the basis of facial expression anatomy; third, dynamic knowledge of facial expression is extracted through manifold learning; finally, facial expression is recognized by fusing the static and dynamic knowledge during the facial motion tracking process. The experimental results confirmed the advantages of this system in facial expression recognition, even in the presence of large head poses and rich expression variations.

  13. Perception of stereoscopic direct gaze: The effects of interaxial distance and emotional facial expressions.

    Science.gov (United States)

    Hakala, Jussi; Kätsyri, Jari; Takala, Tapio; Häkkinen, Jukka

    2016-07-01

    Gaze perception has received considerable research attention due to its importance in social interaction. The majority of recent studies have utilized monoscopic pictorial gaze stimuli. However, a monoscopic direct gaze differs from a live or stereoscopic gaze. In the monoscopic condition, both eyes of the observer receive a direct gaze, whereas in live and stereoscopic conditions, only one eye receives a direct gaze. In the present study, we examined the implications of the difference between monoscopic and stereoscopic direct gaze. Moreover, because research has shown that stereoscopy affects the emotions elicited by facial expressions, and facial expressions affect the range of directions where an observer perceives mutual gaze (the cone of gaze), we studied the interaction effect of stereoscopy and facial expressions on gaze perception. Forty observers viewed stereoscopic images wherein one eye of the observer received a direct gaze while the other eye received a horizontally averted gaze at five different angles corresponding to five interaxial distances between the cameras in stimulus acquisition. In addition to monoscopic and stereoscopic conditions, the stimuli included neutral, angry, and happy facial expressions. The observers judged the gaze direction and mutual gaze of four lookers. Our results show that the mean of the directions received by the left and right eyes approximated the perceived gaze direction in the stereoscopic semidirect gaze condition. The probability of perceiving mutual gaze in the stereoscopic condition was substantially lower compared with monoscopic direct gaze. Furthermore, stereoscopic semidirect gaze significantly widened the cone of gaze for happy facial expressions.

  14. Facial EMG responses to emotional expressions are related to emotion perception ability.

    Science.gov (United States)

    Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver

    2014-01-01

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding the emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a "reactivation" of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses, recorded with electromyography (EMG), in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.

  15. Facial EMG responses to emotional expressions are related to emotion perception ability.

    Directory of Open Access Journals (Sweden)

    Janina Künecke

    Full Text Available Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding the emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a "reactivation" of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses, recorded with electromyography (EMG), in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.

  16. "You Should Have Seen the Look on Your Face…": Self-awareness of Facial Expressions.

    Science.gov (United States)

    Qu, Fangbing; Yan, Wen-Jing; Chen, Yu-Hsin; Li, Kaiyun; Zhang, Hui; Fu, Xiaolan

    2017-01-01

    The awareness of facial expressions allows one to better understand, predict, and regulate his/her states to adapt to different social situations. The present research investigated individuals' awareness of their own facial expressions and the influence of the duration and intensity of expressions in two self-reference modalities, a real-time condition and a video-review condition. The participants were instructed to respond as soon as they became aware of any facial movements. The results revealed that awareness rates were 57.79% in the real-time condition and 75.92% in the video-review condition. The awareness rate was influenced by the intensity and (or) the duration. The intensity thresholds for individuals to become aware of their own facial expressions were calculated using logistic regression models. The results of Generalized Estimating Equations (GEE) revealed that video-review awareness was a significant predictor of real-time awareness. These findings extend understandings of human facial expression self-awareness in two modalities.
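
    The logistic-regression threshold estimation mentioned above can be sketched as follows: fit a logistic model of awareness against expression intensity and read off the intensity at which the predicted probability crosses 0.5. The data and parameters here are synthetic stand-ins, not the study's materials:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(4)
    intensity = rng.uniform(0, 1, size=300).reshape(-1, 1)
    # Hypothetical generating process: awareness probability rises with intensity
    p_aware = 1 / (1 + np.exp(-(10 * intensity[:, 0] - 5)))
    aware = rng.binomial(1, p_aware)

    fit = LogisticRegression().fit(intensity, aware)
    b0, b1 = fit.intercept_[0], fit.coef_[0, 0]
    threshold = -b0 / b1  # intensity where P(aware) = 0.5
    print(f"estimated awareness threshold: {threshold:.2f}")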

  17. Acute pathological changes of the facial nucleus and expression of postsynaptic density protein-95 following facial nerve injury of varying severity: a semi-quantitative analysis

    Institute of Scientific and Technical Information of China (English)

    Jingjing Li; Wenlong Luo

    2008-01-01

    BACKGROUND: Previous studies have demonstrated that postsynaptic density protein-95 (PSD-95) is widely distributed in the central nervous system (CNS) and is related to CNS development and sensory signal transmission, as well as to acute or chronic nerve cell death following ischemic brain injury. OBJECTIVE: To semi-quantitatively determine the pathological changes of apoptotic facial neurons and the expression of PSD-95 in the facial nucleus following facial nerve injury of varying severity, using immunohistochemical staining. DESIGN, TIME AND SETTING: Randomized, controlled animal experiments were performed at the Ultrasonic Institute of the Second Affiliated Hospital of Chongqing University of Medical Sciences from September to December 2007. MATERIALS: Sixty-five healthy adult Sprague-Dawley (SD) rats, both male and female, were used for this study. Rabbit anti-rat PSD-95 polyclonal antibody was purchased from Beijing Biosynthesis Biotechnology Co., Ltd. METHODS: The SD rats were randomly assigned to a control group of five rats and three injury groups of 20 rats each. The bilateral facial nerve trunks were exposed, clamped, or cut, respectively, in the rats of the three injury groups, and no injury was inflicted on the rats of the control group. MAIN OUTCOME MEASURES: The brainstems of all rats were excised on days 1, 3, 7, and 14 post injury, and the facial nuclei were stained with hematoxylin-eosin to observe pathological changes due to apoptosis in facial neurons. PSD-95 expression in the facial nuclei was detected by immunohistochemistry, and the number of PSD-95-positive cells was counted under a light microscope. RESULTS: The expression of PSD-95 in the facial nucleus and the morphology of the facial neurons in the exposure group showed no obvious changes at the time points tested (P > 0.05). However, the expressions of PSD-95 in