WorldWideScience

Sample records for facial identity recognition

  1. Facial Expression at Retrieval Affects Recognition of Facial Identity

    Directory of Open Access Journals (Sweden)

    Chen, Wenfeng

    2015-06-01

    It is well known that memory can be modulated by emotional stimuli at the time of encoding and consolidation. For example, happy faces create better identity recognition than faces with certain other expressions. However, the influence of facial expression at the time of retrieval remains unknown in the literature. To separate the potential influence of expression at retrieval from its effects at earlier stages, we had participants learn neutral faces but manipulated facial expression at the time of memory retrieval in a standard old/new recognition task. The results showed a clear effect of facial expression, where happy test faces were identified more successfully than angry test faces. This effect is unlikely due to greater image similarity between the neutral learning face and the happy test face, because image analysis showed that the happy test faces are in fact less similar to the neutral learning faces relative to the angry test faces. In the second experiment, we investigated whether this emotional effect is influenced by the expression at the time of learning. We employed angry or happy faces as learning stimuli, and angry, happy, and neutral faces as test stimuli. The results showed that the emotional effect at retrieval is robust across different encoding conditions with happy or angry expressions. These findings indicate that emotional expressions affect the retrieval process in identity recognition, and identity recognition does not rely on emotional association between learning and test faces.
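
    The image-analysis control mentioned above lends itself to a short sketch. The following compares a neutral study face with expressive test versions using plain pixel correlation; the metric, image sizes, and stand-in arrays are all assumptions, since the abstract does not specify the study's actual similarity measure.

```python
# Hedged sketch of a pixel-level image-similarity check between a neutral
# study face and expressive test faces. Pearson correlation over aligned
# grayscale images is one common metric and is used here purely for
# illustration; the abstract does not specify the study's actual metric.
import numpy as np

def image_similarity(face_a, face_b):
    """Pearson correlation of pixel intensities for two aligned,
    equally sized grayscale face images (ranges from -1 to 1)."""
    a = face_a.astype(float).ravel()
    b = face_b.astype(float).ravel()
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

# Random stand-ins for real aligned face photographs (hypothetical data).
rng = np.random.default_rng(0)
neutral = rng.random((128, 128))
happy = neutral + 0.1 * rng.standard_normal((128, 128))   # mild change
angry = neutral + 0.3 * rng.standard_normal((128, 128))   # larger change
print(image_similarity(neutral, happy))   # higher similarity
print(image_similarity(neutral, angry))   # lower similarity
```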

  2. [Prosopagnosia and facial expression recognition].

    Science.gov (United States)

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  3. Facial expression influences face identity recognition during the attentional blink.

    Science.gov (United States)

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppressed memory access for competing objects, but only angry facial expressions enjoyed privileged memory access. This could imply that these 2 processes are relatively independent of one another.

  4. Visual Scan Paths and Recognition of Facial Identity in Autism Spectrum Disorder and Typical Development

    Science.gov (United States)

    Wilson, C. Ellie; Palermo, Romina; Brock, Jon

    2012-01-01

    Background: Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: i) better facial identity recognition is associated with increased gaze time on the Eye region; ii) better facial identity recognition is associated with increased eye-movements around the face. Methodology and Principal Findings: Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two-alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the ‘Dynamic Scanning Index’ – which was incremented each time the participant saccaded into and out of one of the core-feature interest areas – was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. Conclusions and Significance: In support of the second hypothesis, results suggested that increased saccading between core features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined. PMID:22666378
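
    The 'Dynamic Scanning Index' admits a compact reconstruction, assuming fixations are already coded by area of interest (AOI). The AOI names and the exact counting rule below are our assumptions, not the authors' published scoring code.

```python
# Minimal sketch of a 'Dynamic Scanning Index': a counter incremented each
# time gaze saccades into or out of a core-feature interest area.
CORE_AOIS = {"left_eye", "right_eye", "nose", "mouth"}   # assumed core features

def dynamic_scanning_index(fixation_aois):
    """Count saccades that enter or leave a core-feature interest area.

    A transition between two different core AOIs also counts, since it
    leaves one core area and enters another (our assumption; the paper's
    exact counting rule is not spelled out in the abstract).
    """
    dsi = 0
    for prev, curr in zip(fixation_aois, fixation_aois[1:]):
        if prev != curr and (prev in CORE_AOIS or curr in CORE_AOIS):
            dsi += 1
    return dsi

# One hypothetical scanpath, coded as the AOI of each successive fixation.
scanpath = ["left_eye", "nose", "cheek", "mouth", "mouth", "forehead"]
print(dynamic_scanning_index(scanpath))   # -> 4
```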

  5. Reduced Reliance on Optimal Facial Information for Identity Recognition in Autism Spectrum Disorder

    Science.gov (United States)

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2013-01-01

    Previous research into face processing in autism spectrum disorder (ASD) has revealed atypical biases toward particular facial information during identity recognition. Specifically, a focus on features (or high spatial frequencies [HSFs]) has been reported for both face and nonface processing in ASD. The current study investigated the development…

  6. Attention to Social Stimuli and Facial Identity Recognition Skills in Autism Spectrum Disorder

    Science.gov (United States)

    Wilson, C. E.; Brock, J.; Palermo, R.

    2010-01-01

    Background: Previous research suggests that individuals with autism spectrum disorder (ASD) have a reduced preference for viewing social stimuli in the environment and impaired facial identity recognition. Methods: Here, we directly tested a link between these two phenomena in 13 ASD children and 13 age-matched typically developing (TD) controls.…

  7. Association of impaired facial affect recognition with basic facial and visual processing deficits in schizophrenia.

    Science.gov (United States)

    Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue

    2009-06-15

    Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.

  8. [Neural representations of facial identity and its associative meaning].

    Science.gov (United States)

    Eifuku, Satoshi

    2012-07-01

    Since the discovery of "face cells" in the early 1980s, single-cell recording experiments in non-human primates have made significant contributions toward the elucidation of neural mechanisms underlying face perception and recognition. In this paper, we review recent progress in face cell studies, including the recent remarkable findings of face patches scattered around the anterior temporal cortical areas of monkeys. In particular, we focus on the neural representations of facial identity within these areas. The identification of faces requires both discrimination of facial identities and generalization across facial views. Several laboratories have indicated that the population of face cells found in the anterior ventral inferior temporal cortex of monkeys represents facial identity in a facial view-invariant manner. These findings suggest a relatively distributed representation that operates for facial identification. It has also been shown that certain individual neurons in the medial temporal lobe of humans represent view-invariant facial identity. This finding suggests a relatively sparse representation that may be employed for memory formation. Finally, we summarize our recent study showing that the population of face cells in the anterior ventral inferior temporal cortex of monkeys that represents view-invariant facial identity can also represent learned paired associations between an abstract picture and a particular facial identity, extending our understanding of the function of the anterior ventral inferior temporal cortex in the recognition of associative meanings of faces.

  9. Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia.

    Science.gov (United States)

    Palermo, Romina; Willis, Megan L; Rivolta, Davide; McKone, Elinor; Wilson, C Ellie; Calder, Andrew J

    2011-04-01

    We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and 'social'). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. Identity modulates short-term memory for facial emotion.

    Science.gov (United States)

    Galster, Murray; Kahana, Michael J; Wilson, Hugh R; Sekuler, Robert

    2009-12-01

    For some time, the relationship between processing of facial expression and facial identity has been in dispute. Using realistic synthetic faces, we reexamined this relationship for both perception and short-term memory. In Experiment 1, subjects tried to identify whether the emotional expression on a probe stimulus face matched the emotional expression on either of two remembered faces that they had just seen. The results showed that identity strongly influenced recognition short-term memory for emotional expression. In Experiment 2, subjects' similarity/dissimilarity judgments were transformed by multidimensional scaling (MDS) into a 2-D description of the faces' perceptual representations. Distances among stimuli in the MDS representation, which showed a strong linkage of emotional expression and facial identity, were good predictors of correct and false recognitions obtained previously in Experiment 1. The convergence of the results from Experiments 1 and 2 suggests that the overall structure and configuration of faces' perceptual representations may parallel their representation in short-term memory and that facial identity modulates the representation of facial emotion, both in perception and in memory. The stimuli from this study may be downloaded from http://cabn.psychonomic-journals.org/content/supplemental.
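
    The MDS step in Experiment 2 can be sketched in a few lines: given a matrix of pairwise dissimilarity judgments, metric MDS recovers a 2-D configuration whose inter-point distances can then be correlated with recognition errors. The toy matrix below is fabricated for illustration.

```python
# Hedged sketch of the MDS step: embed a matrix of pairwise dissimilarity
# judgments into 2-D so that inter-point distances approximate the rated
# dissimilarities. The 4x4 toy matrix (2 identities x 2 expressions) is
# fabricated; the study's ratings and MDS variant may differ.
import numpy as np
from sklearn.manifold import MDS

D = np.array([
    [0.0, 0.3, 0.7, 0.8],
    [0.3, 0.0, 0.8, 0.7],
    [0.7, 0.8, 0.0, 0.3],
    [0.8, 0.7, 0.3, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)   # one 2-D point per face stimulus
print(coords)
# Distances among rows of `coords` are the quantities the study used to
# predict correct and false recognitions from Experiment 1.
```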

  11. Facial identity recognition in the broader autism phenotype.

    Directory of Open Access Journals (Sweden)

    C Ellie Wilson

    BACKGROUND: The 'broader autism phenotype' (BAP) refers to the mild expression of autistic-like traits in the relatives of individuals with autism spectrum disorder (ASD). Establishing the presence of ASD traits provides insight into which traits are heritable in ASD. Here, the ability to recognise facial identity was tested in 33 parents of ASD children. METHODOLOGY AND RESULTS: In experiment 1, parents of ASD children completed the Cambridge Face Memory Test (CFMT), and a questionnaire assessing the presence of autistic personality traits. The parents, particularly the fathers, were impaired on the CFMT, but there were no associations between face recognition ability and autistic personality traits. In experiment 2, parents and probands completed equivalent versions of a simple test of face matching. On this task, the parents were not impaired relative to typically developing controls; however, the proband group was impaired. Crucially, the mothers' face matching scores correlated with the probands', even when performance on an equivalent test of matching non-face stimuli was controlled for. CONCLUSIONS AND SIGNIFICANCE: Components of face recognition ability are impaired in some relatives of ASD individuals. Results suggest that face recognition skills are heritable in ASD, and genetic and environmental factors accounting for the pattern of heritability are discussed. In general, results demonstrate the importance of assessing the skill level in the proband when investigating particular characteristics of the BAP.

  12. Action recognition is sensitive to the identity of the actor.

    Science.gov (United States)

    Ferstl, Ylva; Bülthoff, Heinrich; de la Rosa, Stephan

    2017-09-01

    Recognizing who is carrying out an action is essential for successful human interaction. The cognitive mechanisms underlying this ability are little understood and have been the subject of discussion in embodied approaches to action recognition. Here we examine one solution: that visual action recognition processes are at least partly sensitive to the actor's identity. We investigated the dependency between identity information and action-related processes by testing the sensitivity of neural action recognition processes to clothing and facial identity information with a behavioral adaptation paradigm. Our results show that action adaptation effects are in fact modulated by both clothing information and the actor's facial identity. The finding demonstrates that neural processes underlying action recognition are sensitive to identity information (including facial identity) and are thereby not exclusively tuned to actions. We suggest that such response properties help humans know who carried out an action. Copyright © 2017 The Author(s). Published by Elsevier B.V. All rights reserved.

  13. Recognition of Face and Emotional Facial Expressions in Autism

    Directory of Open Access Journals (Sweden)

    Muhammed Tayyib Kadak

    2013-03-01

    Autism is a genetically transmitted neurodevelopmental disorder characterized by severe and persistent deficits in many areas of interpersonal relations, such as communication, social interaction, and emotional responsiveness. Patients with autism show deficits in face recognition, eye contact, and the recognition of emotional expressions. Both face recognition and the recognition of emotional expressions depend on face processing. Structural and functional impairments in the fusiform gyrus, amygdala, superior temporal sulcus, and other brain regions lead to deficits in recognizing faces and facial emotion. Studies therefore suggest that face-processing deficits underlie the problems with social interaction and emotion seen in autism. Studies have revealed that children with autism have problems recognizing facial expressions and rely on the mouth region more than the eye region. It has also been shown that autistic patients interpret ambiguous expressions as negative emotions. Deficits at various stages of face processing, including gaze detection, facial identity recognition, and recognition of emotional expressions, have been identified in autism. Social interaction impairments in autism spectrum disorders may thus originate from face-processing deficits during infancy, childhood, and adolescence. Face recognition and the recognition of emotional expressions could be shaped either automatically, by orienting towards faces after birth, or by "learning" processes across development, such as identity and emotion processing. This article reviews the neurobiological basis of face processing and the recognition of emotional facial expressions during normal development and in autism.

  14. Heartbeat Signal from Facial Video for Biometric Recognition

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    Different biometric traits such as face appearance and heartbeat signals from the Electrocardiogram (ECG)/Phonocardiogram (PCG) are widely used in human identity recognition. Recent advances in facial-video-based measurement of cardio-physiological parameters such as heartbeat rate, respiratory rate, and blood volume pressure make it possible to extract the heartbeat signal from facial video instead of using obtrusive ECG or PCG sensors on the body. This paper proposes the Heartbeat Signal from Facial Video (HSFV) as a new biometric trait for human identity recognition, for the first time to the best of our knowledge. Feature extraction from the HSFV is accomplished by applying a Radon transform to a waterfall model of the replicated HSFV. The pairwise Minkowski distances obtained from the Radon image serve as the features. Authentication is accomplished by a decision-tree-based supervised classifier.
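
    The described pipeline (waterfall replication, Radon transform, pairwise Minkowski distances, decision-tree classifier) can be sketched as below. The replication count, projection angles, and the Minkowski order p are not given in the abstract and are assumptions here.

```python
# Hedged sketch of the described HSFV pipeline: replicate the heartbeat
# trace into a 'waterfall' image, take its Radon transform, and use
# pairwise Minkowski distances over the Radon image as features for a
# decision tree. Parameters below are assumptions, not the paper's.
import numpy as np
from skimage.transform import radon
from scipy.spatial.distance import pdist
from sklearn.tree import DecisionTreeClassifier

def hsfv_features(heartbeat, n_rows=32, p=2):
    waterfall = np.tile(heartbeat, (n_rows, 1))        # replicated signal
    sinogram = radon(waterfall, theta=np.arange(0, 180, 10), circle=False)
    # Pairwise Minkowski distances between projection profiles as features.
    return pdist(sinogram.T, metric="minkowski", p=p)

rng = np.random.default_rng(1)
X = np.array([hsfv_features(rng.standard_normal(128)) for _ in range(10)])
y = np.repeat([0, 1], 5)                    # two hypothetical identities
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict(X[:2]))
```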

  15. Memory deficits for facial identity in patients with amnestic mild cognitive impairment (MCI).

    Science.gov (United States)

    Savaskan, Egemen; Summermatter, Daniel; Schroeder, Clemens; Schächinger, Hartmut

    2018-01-01

    Faces are among the most socially relevant stimuli, revealing the identity and current emotional state of the person encountered. Deficits in facial recognition may be an early sign of cognitive decline leading to social deficits. The main objective of the present study was to investigate whether individuals with amnestic mild cognitive impairment show recognition deficits for facial identity. Thirty-seven individuals with amnestic mild cognitive impairment, multiple-domain (15 female; age: 75±8 yrs.) and forty-one healthy volunteers (24 female; age 71±6 yrs.) participated. All participants completed a human portrait memory test presenting unfamiliar faces with happy and angry emotional expressions. Five and thirty minutes later, old and new neutral faces were presented, and discrimination sensitivity (d') and response bias (C) were assessed as signal detection parameters of cued facial identity recognition. Memory performance was lower in amnestic mild cognitive impairment than in control subjects, mainly because of a response bias shifted towards an increased false alarm rate (favoring false OLD ascription of NEW items). In both groups, memory performance declined between the early and later testing sessions, and was always better for faces acquired with happy rather than angry expressions. Facial identity memory is impaired in patients with amnestic mild cognitive impairment. Liberalization of the response bias may reflect a socially motivated compensatory mechanism maintaining an almost identical recognition hit rate for OLD faces in individuals with amnestic mild cognitive impairment.
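
    The signal detection quantities named above have standard forms: d' = z(H) - z(FA) and C = -(z(H) + z(FA)) / 2, where H and FA are the hit and false-alarm rates. A minimal sketch with made-up rates:

```python
# Hedged sketch: discrimination sensitivity (d') and response bias (C)
# from hit and false-alarm rates, the standard signal detection formulas
# referenced in the abstract. The example rates are fabricated and do not
# reproduce the study's data.
from scipy.stats import norm

def dprime_and_bias(hit_rate, fa_rate):
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa              # sensitivity
    c = -0.5 * (z_hit + z_fa)           # bias; negative values = liberal
    return d_prime, c

# Illustrative pattern only: similar hit rates but more false alarms give
# a lower d' and a more liberal (negative) bias, as described for aMCI.
print(dprime_and_bias(0.80, 0.20))      # hypothetical control-like rates
print(dprime_and_bias(0.78, 0.45))      # hypothetical aMCI-like rates
```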

  16. Contributions of feature shapes and surface cues to the recognition and neural representation of facial identity.

    Science.gov (United States)

    Andrews, Timothy J; Baseler, Heidi; Jenkins, Rob; Burton, A Mike; Young, Andrew W

    2016-10-01

    A full understanding of face recognition will involve identifying the visual information that is used to discriminate different identities and how this is represented in the brain. The aim of this study was to explore the importance of shape and surface properties in the recognition and neural representation of familiar faces. We used image morphing techniques to generate hybrid faces that mixed shape properties (more specifically, second order spatial configural information as defined by feature positions in the 2D-image) from one identity and surface properties from a different identity. Behavioural responses showed that recognition and matching of these hybrid faces was primarily based on their surface properties. These behavioural findings contrasted with neural responses recorded using a block design fMRI adaptation paradigm to test the sensitivity of Haxby et al.'s (2000) core face-selective regions in the human brain to the shape or surface properties of the face. The fusiform face area (FFA) and occipital face area (OFA) showed a lower response (adaptation) to repeated images of the same face (same shape, same surface) compared to different faces (different shapes, different surfaces). From the behavioural data indicating the critical contribution of surface properties to the recognition of identity, we predicted that brain regions responsible for familiar face recognition should continue to adapt to faces that vary in shape but not surface properties, but show a release from adaptation to faces that vary in surface properties but not shape. However, we found that the FFA and OFA showed an equivalent release from adaptation to changes in both shape and surface properties. The dissociation between the neural and perceptual responses suggests that, although they may play a role in the process, these core face regions are not solely responsible for the recognition of facial identity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Face identity recognition in autism spectrum disorders: a review of behavioral studies.

    Science.gov (United States)

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2012-03-01

    Face recognition--the ability to recognize a person from their facial appearance--is essential for normal social interaction. Face recognition deficits have been implicated in the most common disorder of social interaction: autism. Here we ask: is face identity recognition in fact impaired in people with autism? Reviewing behavioral studies we find no strong evidence for a qualitative difference in how facial identity is processed between those with and without autism: markers of typical face identity recognition, such as the face inversion effect, seem to be present in people with autism. However, quantitatively--i.e., how well facial identity is remembered or discriminated--people with autism perform worse than typical individuals. This impairment is particularly clear in face memory and in face perception tasks in which a delay intervenes between sample and test, and less so in tasks with no memory demand. Although some evidence suggests that this deficit may be specific to faces, further evidence on this question is necessary. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Facial Recognition in Uncontrolled Conditions for Information Security

    Science.gov (United States)

    Xiao, Qinghan; Yang, Xue-Dong

    2010-12-01

    With the increasing use of computers nowadays, information security is becoming an important issue for private companies and government organizations. Various security technologies have been developed, such as authentication, authorization, and auditing. However, once a user logs on, it is assumed that the system would be controlled by the same person. To address this flaw, we developed a demonstration system that uses facial recognition technology to periodically verify the identity of the user. If the authenticated user's face disappears, the system automatically performs a log-off or screen-lock operation. This paper presents our further efforts in developing image preprocessing algorithms and dealing with angled facial images. The objective is to improve the accuracy of facial recognition under uncontrolled conditions. To compare the results with others, the frontal pose subset of the Face Recognition Technology (FERET) database was used for the test. The experiments showed that the proposed algorithms provided promising results.
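
    A minimal sketch of the periodic-verification loop described above, assuming OpenCV's stock Haar cascade and a webcam at index 0: if no face is found for several consecutive checks, the session is locked. The check interval, the miss tolerance, and the lock action are placeholders; the actual system also verifies whose face is present, not just that one is present.

```python
# Hedged sketch: periodically check the webcam for a face and lock the
# session when the face disappears. Interval, miss tolerance, and the
# lock command are placeholder assumptions.
import time
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)

def face_present():
    ok, frame = cap.read()
    if not ok:
        return False
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(cascade.detectMultiScale(gray, 1.1, 5)) > 0

misses = 0
while True:
    misses = 0 if face_present() else misses + 1
    if misses >= 3:                  # tolerate brief occlusions
        print("Authenticated face gone: lock screen here (placeholder).")
        break
    time.sleep(5)                    # re-verify every 5 seconds (assumed)
cap.release()
```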

  19. The activation of visual memory for facial identity is task-dependent: evidence from human electrophysiology.

    Science.gov (United States)

    Zimmermann, Friederike G S; Eimer, Martin

    2014-05-01

    The question whether the recognition of individual faces is mandatory or task-dependent is still controversial. We employed the N250r component of the event-related potential as a marker of the activation of representations of facial identity in visual memory, in order to find out whether identity-related information from faces is encoded and maintained even when facial identity is task-irrelevant. Pairs of faces appeared in rapid succession, and the N250r was measured in response to repetitions of the same individual face, as compared to presentations of two different faces. In Experiment 1, an N250r was present in an identity matching task where identity information was relevant, but not when participants had to detect infrequent targets (inverted faces), and facial identity was task-irrelevant. This was the case not only for unfamiliar faces, but also for famous faces, suggesting that even famous face recognition is not as automatic as is often assumed. In Experiment 2, an N250r was triggered by repetitions of non-famous faces in a task where participants had to match the view of each face pair, and facial identity had to be ignored. This shows that when facial features have to be maintained in visual memory for a subsequent comparison, identity-related information is retained as well, even when it is irrelevant. Our results suggest that individual face recognition is neither fully mandatory nor completely task-dependent. Facial identity is encoded and maintained in tasks that involve visual memory for individual faces, regardless of the to-be-remembered feature. In tasks without this memory component, irrelevant visual identity information can be completely ignored. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Facial recognition software success rates for the identification of 3D surface reconstructed facial images: implications for patient privacy and security.

    Science.gov (United States)

    Mazura, Jan C; Juluru, Krishna; Chen, Joseph J; Morgan, Tara A; John, Majnu; Siegel, Eliot L

    2012-06-01

    Image de-identification has focused on the removal of textual protected health information (PHI). Surface reconstructions of the face have the potential to reveal a subject's identity even when textual PHI is absent. This study assessed the ability of a computer application to match research subjects' 3D facial reconstructions with conventional photographs of their face. In a prospective study, 29 subjects underwent CT scans of the head and had frontal digital photographs of their face taken. Facial reconstructions of each CT dataset were generated on a 3D workstation. In phase 1, photographs of the 29 subjects undergoing CT scans were added to a digital directory and tested for recognition using facial recognition software. In phases 2-4, additional photographs were added in groups of 50 to increase the pool of possible matches and the test for recognition was repeated. As an internal control, photographs of all subjects were tested for recognition against an identical photograph. Of 3D reconstructions, 27.5% were matched correctly to corresponding photographs (95% upper CL, 40.1%). All study subject photographs were matched correctly to identical photographs (95% lower CL, 88.6%). Of 3D reconstructions, 96.6% were recognized simply as a face by the software (95% lower CL, 83.5%). Facial recognition software has the potential to recognize features on 3D CT surface reconstructions and match these with photographs, with implications for PHI.
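
    The reported one-sided 95% confidence limits can be reproduced in outline with exact (Clopper-Pearson) binomial limits. The count of 8/29 below is an illustrative reconstruction of the reported 27.5% match rate; the paper's exact counts and interval method are not stated in the abstract, so small discrepancies from the reported 40.1% and 88.6% are expected.

```python
# Hedged sketch: one-sided exact (Clopper-Pearson) binomial confidence
# limits of the kind reported above. The 8/29 count is an illustrative
# reconstruction, not the paper's raw data.
from scipy.stats import beta

def upper_cl(successes, n, level=0.95):
    """One-sided upper Clopper-Pearson limit for a binomial proportion."""
    return 1.0 if successes == n else beta.ppf(level, successes + 1, n - successes)

def lower_cl(successes, n, level=0.95):
    """One-sided lower Clopper-Pearson limit."""
    return 0.0 if successes == 0 else beta.ppf(1 - level, successes, n - successes + 1)

print(upper_cl(8, 29))    # ~0.44 for an assumed 8/29 match rate
print(lower_cl(29, 29))   # ~0.90 when all 29 control matches succeed
```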

  2. Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder.

    Science.gov (United States)

    Sachse, Michael; Schlitt, Sabine; Hainz, Daniela; Ciaramidaro, Angela; Walter, Henrik; Poustka, Fritz; Bölte, Sven; Freitag, Christine M

    2014-11-01

    Schizophrenia (SZ) and autism spectrum disorder (ASD) share deficits in emotion processing. In order to identify convergent and divergent mechanisms, we investigated facial emotion recognition in SZ, high-functioning ASD (HFASD), and typically developed controls (TD). Different degrees of task difficulty and emotion complexity (face, eyes; basic emotions, complex emotions) were used. Two Benton tests were administered to assess potentially confounding visuo-perceptual functioning and facial processing. Nineteen participants with paranoid SZ, 22 with HFASD and 20 TD were included, aged between 14 and 33 years. Individuals with SZ were comparable to TD in all obtained emotion recognition measures, but showed reduced basic visuo-perceptual abilities. The HFASD group was impaired in the recognition of basic and complex emotions compared to both SZ and TD. When facial identity recognition was adjusted for, group differences remained for the recognition of complex emotions only. Our results suggest that there is a SZ subgroup with predominantly paranoid symptoms that does not show problems in face processing and emotion recognition, but visuo-perceptual impairments. They also confirm the notion of a general facial and emotion recognition deficit in HFASD. No shared emotion recognition deficit was found for paranoid SZ and HFASD, emphasizing the differential cognitive underpinnings of both disorders. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Role of temporal processing stages by inferior temporal neurons in facial recognition

    Directory of Open Access Journals (Sweden)

    Sugase-Miyamoto, Yasuko

    2011-06-01

    In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of

  4. Facial Expression Recognition

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial

  5. Emotional facial expressions differentially influence predictions and performance for face recognition.

    Science.gov (United States)

    Nomi, Jason S; Rhodes, Matthew G; Cleary, Anne M

    2013-01-01

    This study examined how participants' predictions of future memory performance are influenced by emotional facial expressions. Participants made judgements of learning (JOLs) predicting the likelihood that they would correctly identify a face displaying a happy, angry, or neutral emotional expression in a future two-alternative forced-choice recognition test of identity (i.e., recognition that a person's face was seen before). JOLs were higher for studied faces with happy and angry emotional expressions than for neutral faces. However, neutral test faces with studied neutral expressions had significantly higher identity recognition rates than neutral test faces studied with happy or angry expressions. Thus, these data are the first to demonstrate that people believe happy and angry emotional expressions will lead to better identity recognition in the future relative to neutral expressions. This occurred despite the fact that neutral expressions elicited better identity recognition than happy and angry expressions. These findings contribute to the growing literature examining the interaction of cognition and emotion.

  6. Facial recognition in education system

    Science.gov (United States)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

    Human beings rely heavily on emotions to convey messages and to interpret them. Emotion detection and face recognition can provide an interface between individuals and technology, and face recognition has been among the most successful applications of recognition analysis. Many different techniques have been used to recognize facial expressions and to detect emotion under varying poses. In this paper, we present an efficient method for recognizing facial expressions by tracking facial points and the distances between them. The method automatically identifies an observed person's face movements and facial expression in an image, capturing different aspects of emotion and facial expression.

  7. [Measuring impairment of facial affects recognition in schizophrenia. Preliminary study of the facial emotions recognition task (TREF)].

    Science.gov (United States)

    Gaudelus, B; Virgile, J; Peyroux, E; Leleu, A; Baudouin, J-Y; Franck, N

    2015-06-01

    The impairment of social cognition, including facial affect recognition, is a well-established trait in schizophrenia, and specific cognitive remediation programs focusing on facial affect recognition have been developed by different teams worldwide. However, even though social cognitive impairments have been confirmed, previous studies have also shown heterogeneity of results between subjects. Individual abilities should therefore be assessed before proposing such programs. Most research teams use facial affect recognition tasks based on the stimuli of Ekman et al. or Gur et al.; however, these tasks are not easily applied in clinical practice. Here, we present the Facial Emotions Recognition Test (TREF), which is designed to identify facial affect recognition impairments in clinical practice. The test is composed of 54 photos and evaluates abilities in the recognition of six universal emotions (joy, anger, sadness, fear, disgust and contempt). Each of these emotions is represented with colored photos of 4 different models (two men and two women) at nine intensity levels from 20 to 100%. Each photo is presented for 10 seconds, with no time limit for responding. The present study compared TREF scores in a sample of healthy controls (64 subjects) and people with stabilized schizophrenia (45 subjects) according to the DSM IV-TR criteria. We analysed global scores for all emotions, as well as subscores for each emotion, between these two groups, taking gender differences into account. Our results were consistent with previous findings. Applying the TREF, we confirmed an impairment in facial affect recognition in schizophrenia by showing significant differences between the two groups in their global results (76.45% for healthy controls versus 61.28% for people with schizophrenia), as well as in subscores for each emotion except joy. Scores for women were significantly higher than for men in the population

  8. [Neurological disease and facial recognition].

    Science.gov (United States)

    Kawamura, Mitsuru; Sugimoto, Azusa; Kobayakawa, Mutsutaka; Tsuruya, Natsuko

    2012-07-01

    To discuss the neurological basis of facial recognition, we present our case reports of impaired recognition and a review of previous literature. First, we present a case of infarction and discuss prosopagnosia, which has had a large impact on face recognition research. From a study of patient symptoms, we assume that prosopagnosia may be caused by unilateral right occipitotemporal lesion and right cerebral dominance of facial recognition. Further, circumscribed lesion and degenerative disease may also cause progressive prosopagnosia. Apperceptive prosopagnosia is observed in patients with posterior cortical atrophy (PCA), pathologically considered as Alzheimer's disease, and associative prosopagnosia in frontotemporal lobar degeneration (FTLD). Second, we discuss face recognition as part of communication. Patients with Parkinson disease show social cognitive impairments, such as difficulty in facial expression recognition and deficits in theory of mind as detected by the reading the mind in the eyes test. Pathological and functional imaging studies indicate that social cognitive impairment in Parkinson disease is possibly related to damages in the amygdalae and surrounding limbic system. The social cognitive deficits can be observed in the early stages of Parkinson disease, and even in the prodromal stage, for example, patients with rapid eye movement (REM) sleep behavior disorder (RBD) show impairment in facial expression recognition. Further, patients with myotonic dystrophy type 1 (DM 1), which is a multisystem disease that mainly affects the muscles, show social cognitive impairment similar to that of Parkinson disease. Our previous study showed that facial expression recognition impairment of DM 1 patients is associated with lesion in the amygdalae and insulae. Our study results indicate that behaviors and personality traits in DM 1 patients, which are revealed by social cognitive impairment, are attributable to dysfunction of the limbic system.

  9. Fusing Facial Features for Face Recognition

    Directory of Open Access Journals (Sweden)

    Jamal Ahmad Dargham

    2012-06-01

    Face recognition is an important biometric method because of its potential applications in many fields, such as access control, surveillance, and human-computer interaction. In this paper, a face recognition system that fuses the outputs of three face recognition systems based on Gabor jets is presented. The first system uses the magnitude, the second uses the phase, and the third uses the phase-weighted magnitude of the jets. The jets are generated from facial landmarks selected using three selection methods. It was found that fusing the facial features gives a better recognition rate than any single facial feature used individually, regardless of the landmark selection method.
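
    A Gabor jet, the structure underlying all three fused systems, is simply the set of complex Gabor filter responses at one landmark across several frequencies and orientations; magnitude and phase fall out of each complex response. The sketch below uses illustrative filter parameters and one simple phase-weighted variant, not the paper's exact settings.

```python
# Hedged sketch: sample a Gabor jet (magnitudes and phases of complex
# Gabor responses) at a single facial landmark. Parameters and the
# phase-weighting rule are illustrative assumptions.
import numpy as np
from skimage.filters import gabor

def gabor_jet(image, point, frequencies=(0.1, 0.2), orientations=4):
    """Return (magnitudes, phases) of a Gabor jet at one landmark."""
    y, x = point
    mags, phases = [], []
    for f in frequencies:
        for k in range(orientations):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / orientations)
            mags.append(np.hypot(real[y, x], imag[y, x]))
            phases.append(np.arctan2(imag[y, x], real[y, x]))
    return np.array(mags), np.array(phases)

rng = np.random.default_rng(2)
face = rng.random((64, 64))                  # stand-in for a face image
mag, pha = gabor_jet(face, (32, 32))
phase_weighted = mag * np.cos(pha)           # one simple phase-weighted variant
print(mag.shape, pha.shape, phase_weighted.shape)
```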

  10. Facial Expression Recognition Through Machine Learning

    Directory of Open Access Journals (Sweden)

    Nazia Perveen

    2015-08-01

    Facial expressions communicate non-verbal cues that play an important role in interpersonal relations. Automatic recognition of facial expressions can be an important element of natural human-machine interfaces; it may likewise be used in behavioral science and in clinical practice. Although people perceive facial expressions virtually instantly, robust expression recognition by machine remains a challenge. From the point of view of automatic recognition, a facial expression can be considered to comprise deformations of facial components and their spatial relations, or changes in the pigmentation of the face. Research into automatic recognition of facial expressions addresses the representation and classification of the static or dynamic properties of these deformations and of face pigmentation. We obtained our results using CVIPtools. The dataset consists of six facial expressions from three persons, with 90 border-mask samples for training and 30 border-mask samples for testing; we use RST-invariant features and texture features for feature analysis, and classify them with the k-Nearest Neighbor algorithm. The maximum accuracy achieved is 90%.
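
    The classification stage maps onto a few lines of scikit-learn. The sketch below stands in for the CVIPtools workflow, using random placeholder features of arbitrary dimension but the reported 90/30 train/test split and six classes; the value of k is an assumption.

```python
# Hedged sketch of the k-Nearest Neighbor classification stage over
# precomputed feature vectors (e.g., RST-invariant and texture features).
# Features here are random placeholders; extraction was done in CVIPtools.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
X_train = rng.random((90, 12))               # 90 training samples (as reported)
y_train = rng.integers(0, 6, size=90)        # six expression classes
X_test = rng.random((30, 12))                # 30 test samples (as reported)
y_test = rng.integers(0, 6, size=30)

knn = KNeighborsClassifier(n_neighbors=3)    # k is an assumption
knn.fit(X_train, y_train)
print(accuracy_score(y_test, knn.predict(X_test)))
```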

  11. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    Science.gov (United States)

    Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240

  12. A motivational determinant of facial emotion recognition: regulatory focus affects recognition of emotions in faces.

    Science.gov (United States)

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced, and better facial emotion recognition was observed under a promotion focus than under a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed, and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had an impact neither on perceptual processing nor on facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.

  13. Own- and Other-Race Face Identity Recognition in Children: The Effects of Pose and Feature Composition

    Science.gov (United States)

    Anzures, Gizelle; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; de Viviés, Xavier; Lee, Kang

    2014-01-01

    We used a matching-to-sample task and manipulated facial pose and feature composition to examine the other-race effect (ORE) in face identity recognition between 5 and 10 years of age. Overall, the present findings provide a genuine measure of own- and other-race face identity recognition in children that is independent of photographic and image…

  14. Impaired recognition of happy facial expressions in bipolar disorder.

    Science.gov (United States)

    Lawlor-Savage, Linette; Sponheim, Scott R; Goghari, Vina M

    2014-08-01

    The ability to accurately judge facial expressions is important in social interactions. Individuals with bipolar disorder have been found to be impaired in emotion recognition; however, the specifics of the impairment are unclear. This study investigated whether facial emotion recognition difficulties in bipolar disorder reflect general cognitive, or emotion-specific, impairments. Impairment in the recognition of particular emotions and the role of processing speed in facial emotion recognition were also investigated. Clinically stable bipolar patients (n = 17) and healthy controls (n = 50) judged five facial expressions in two presentation types, time-limited and self-paced. An age recognition condition was used as an experimental control. Bipolar patients' overall facial recognition ability was unimpaired. However, patients' specific ability to judge happy expressions under time constraints was impaired. Findings suggest a deficit in happy emotion recognition impacted by processing speed. Given the limited sample size, further investigation with a larger patient sample is warranted.

  15. Dynamic facial expression recognition based on geometric and texture features

    Science.gov (United States)

    Li, Ming; Wang, Zengfu

    2018-04-01

    Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method by using geometric and texture features. In our system, the facial landmark movements and texture variations upon pairwise images are used to perform the dynamic facial expression recognition tasks. For one facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integration of both geometric and texture features further enhances the representation of the facial expressions. Finally, Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method can achieve a competitive performance with other methods.
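
    A rough sketch of the pairwise-image idea: geometric features as landmark displacements between the first and a later frame, texture features as a local-binary-pattern histogram of the later frame, concatenated and classified with an SVM. The dimensions, LBP settings, and kernel are illustrative assumptions rather than the paper's exact design.

```python
# Hedged sketch: geometric (landmark displacement) plus texture (LBP
# histogram) features from an image pair, classified with an SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def pair_features(first_pts, later_pts, later_img):
    geometric = (later_pts - first_pts).ravel()          # landmark movement
    lbp = local_binary_pattern(later_img, P=8, R=1, method="uniform")
    texture, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([geometric, texture])

rng = np.random.default_rng(4)
X = np.array([
    pair_features(rng.random((68, 2)), rng.random((68, 2)),
                  (rng.random((48, 48)) * 255).astype(np.uint8))
    for _ in range(20)
])
y = rng.integers(0, 7, size=20)                          # 7 expression labels
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```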

  16. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the expressions can be performed using a
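
    The point-mapping step admits a compact sketch: learn a regression from non-frontal facial point configurations to their frontal-view counterparts, so that a frontal-view expression classifier can then be applied. Plain linear regression on synthetic points stands in for the paper's regression scheme.

```python
# Hedged sketch: regress frontal-view facial point coordinates from
# non-frontal ones. Synthetic data and plain linear regression are
# placeholders for the paper's actual mapping.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n_points = 20                                   # e.g., mouth/eye corner points
X_nonfrontal = rng.random((100, n_points * 2))  # flattened (x, y) coordinates
true_map = rng.random((n_points * 2, n_points * 2))
Y_frontal = X_nonfrontal @ true_map + 0.01 * rng.standard_normal((100, n_points * 2))

reg = LinearRegression().fit(X_nonfrontal, Y_frontal)
frontalized = reg.predict(X_nonfrontal[:1])     # predicted frontal-view points
print(frontalized.shape)
```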

  17. Facial Action Units Recognition: A Comparative Study

    NARCIS (Netherlands)

    Popa, M.C.; Rothkrantz, L.J.M.; Wiggers, P.; Braspenning, R.A.C.; Shan, C.

    2011-01-01

    Many approaches to facial expression recognition focus on assessing the six basic emotions (anger, disgust, happiness, fear, sadness, and surprise). Real-life situations proved to produce many more subtle facial expressions. A reliable way of analyzing the facial behavior is the Facial Action Coding

  18. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    Science.gov (United States)

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Tensor Rank Preserving Discriminant Analysis for Facial Recognition.

    Science.gov (United States)

    Tao, Dapeng; Guo, Yanan; Li, Yaotang; Gao, Xinbo

    2017-10-12

    Facial recognition, one of the basic topics in computer vision and pattern recognition, has received substantial attention in recent years. However, in traditional facial recognition algorithms, the facial images are reshaped into a long vector, thereby losing part of the original spatial constraints among pixels. In this paper, a new tensor-based feature extraction algorithm termed tensor rank preserving discriminant analysis (TRPDA) for facial image recognition is proposed. The method involves two stages: in the first stage, the low-dimensional tensor subspace of the original input tensor samples is obtained; in the second stage, discriminative locality alignment is utilized to obtain the ultimate vector feature representation for subsequent facial recognition. On the one hand, the proposed TRPDA algorithm fully utilizes the natural structure of the input samples, and it applies an optimization criterion that can directly handle the tensor spectral analysis problem, thereby decreasing the computation cost compared with traditional tensor-based feature selection algorithms. On the other hand, the proposed TRPDA algorithm extracts features by finding a tensor subspace that preserves most of the rank order information of the intra-class input samples. Experiments on three facial databases are performed to determine the effectiveness of the proposed TRPDA algorithm.

  2. Facial Expression Recognition Based on TensorFlow Platform

    Directory of Open Access Journals (Sweden)

    Xia Xiao-Ling

    2017-01-01

    Full Text Available Facial expression recognition has a wide range of applications in human-machine interaction, pattern recognition, image understanding, machine vision and other fields, and in recent years it has gradually become a hot research topic. However, different people express their emotions in different ways, and under the influence of brightness, background and other factors, facial expression recognition remains difficult. In this paper, based on the Inception-v3 model of the TensorFlow platform, we use transfer learning techniques to retrain on a facial expression dataset (the Extended Cohn-Kanade dataset), which preserves recognition accuracy while greatly reducing training time.
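
    A minimal transfer-learning sketch in the spirit of this record, written with tf.keras rather than the original TensorFlow retrain script; the dataset directory, image size, and seven-class head are assumptions.

        # Sketch: retrain Inception-v3 on an expression dataset via transfer learning.
        import tensorflow as tf

        base = tf.keras.applications.InceptionV3(
            weights="imagenet", include_top=False, pooling="avg",
            input_shape=(299, 299, 3))
        base.trainable = False  # keep the ImageNet features; train only the new head

        model = tf.keras.Sequential([
            base,
            tf.keras.layers.Dense(7, activation="softmax"),  # expression classes
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])

        train_ds = tf.keras.utils.image_dataset_from_directory(
            "data/ck_plus", image_size=(299, 299), batch_size=32)  # placeholder path
        train_ds = train_ds.map(lambda x, y:
            (tf.keras.applications.inception_v3.preprocess_input(x), y))
        model.fit(train_ds, epochs=5)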

  3. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
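
    A minimal sketch of the Haar-cascade detection step described above, using OpenCV's bundled cascades to localize face and eye regions on a live webcam stream; the cascade choices and parameters are illustrative assumptions.

        # Sketch: Haar-cascade face and eye detection on a live video stream.
        import cv2

        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_eye.xml")

        cap = cv2.VideoCapture(0)  # default webcam
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
                cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)
                roi = gray[y:y + h, x:x + w]  # search for eyes inside the face
                for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
                    cv2.rectangle(frame, (x + ex, y + ey),
                                  (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
            cv2.imshow("faces", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()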

  4. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2011-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.

  5. Static facial expression recognition with convolution neural networks

    Science.gov (United States)

    Zhang, Feng; Chen, Zhong; Ouyang, Chao; Zhang, Yifei

    2018-03-01

    Facial expression recognition is currently an active research topic in the fields of computer vision, pattern recognition and artificial intelligence. In this paper, we develop a convolutional neural network (CNN) for classifying human emotions from static facial expressions into one of seven facial emotion categories. We pre-train our CNN model on the combined FER2013 dataset formed by its training, validation and test sets, and fine-tune on the extended Cohn-Kanade database. In order to reduce overfitting of the models, we utilize different techniques including dropout and batch normalization in addition to data augmentation. According to the experimental results, our CNN model has excellent classification performance and robustness for facial expression recognition.
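
    A compact tf.keras sketch of such a CNN with the regularization techniques the record mentions (dropout, batch normalization, data augmentation); the 48x48 grayscale input follows FER2013, but the layer sizes are illustrative, not the authors' architecture.

        # Sketch: small CNN for 7-class facial expression classification.
        import tensorflow as tf
        from tensorflow.keras import layers

        model = tf.keras.Sequential([
            layers.Input(shape=(48, 48, 1)),
            layers.RandomFlip("horizontal"),        # data augmentation
            layers.RandomRotation(0.1),
            layers.Conv2D(32, 3, activation="relu"),
            layers.BatchNormalization(),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.BatchNormalization(),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.5),                    # dropout against overfitting
            layers.Dense(7, activation="softmax"),  # 7 emotion categories
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])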

  6. Facial expression recognition based on improved deep belief networks

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. This method uses LBP to extract features and then uses the improved deep belief networks as the detector and classifier of those LBP features. The combination of LBP and improved deep belief networks is thus realized for facial expression recognition. Experiments on the JAFFE (Japanese Female Facial Expression) database show that the recognition rate improves significantly.
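
    The record combines LBP features with an improved deep belief network; scikit-learn offers only a single Bernoulli RBM layer, so the sketch below approximates the pipeline with LBP histograms feeding an RBM-plus-logistic-regression stack. The parameters and the stand-in classifier are assumptions, not the paper's network.

        # Sketch: LBP histogram features into an RBM-based classifier.
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.neural_network import BernoulliRBM
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import Pipeline

        def lbp_features(gray, P=8, R=1):
            lbp = local_binary_pattern(gray, P, R, method="uniform")
            hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        clf = Pipeline([
            ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05)),
            ("logreg", LogisticRegression(max_iter=1000)),
        ])
        # X = np.array([lbp_features(face) for face in faces]); clf.fit(X, labels)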

  7. Neuroticism and facial emotion recognition in healthy adults

    NARCIS (Netherlands)

    Andric, Sanja; Maric, Nadja P.; Knezevic, Goran; Mihaljevic, Marina; Mirjanic, Tijana; Velthorst, Eva; van Os, Jim

    2016-01-01

    The aim of the present study was to examine whether healthy individuals with higher levels of neuroticism, a robust independent predictor of psychopathology, exhibit altered facial emotion recognition performance. Facial emotion recognition accuracy was investigated in 104 healthy adults using the

  8. Facial Emotions Recognition using Gabor Transform and Facial Animation Parameters with Neural Networks

    Science.gov (United States)

    Harit, Aditya; Joshi, J. C., Col; Gupta, K. K.

    2018-03-01

    The paper proposes an automatic facial emotion recognition algorithm which comprises two main components: feature extraction and expression recognition. The algorithm uses a Gabor filter bank on fiducial points to find the facial expression features. The resulting magnitudes of the Gabor transforms, along with 14 chosen FAPs (Facial Animation Parameters), compose the feature space. There are two stages: a training phase and a recognition phase. In the training stage, the system classifies all training expressions into 6 classes, one for each of the 6 emotions considered. In the recognition phase, it recognizes the emotion by applying the Gabor bank to a face image, finding the fiducial points, and feeding the features to the trained neural architecture.
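
    A minimal sketch of the Gabor feature-extraction step: filter responses sampled at fiducial points. The filter parameters and example points are illustrative, and the FAP part of the feature space is omitted.

        # Sketch: Gabor-filter magnitudes sampled at fiducial points.
        import cv2
        import numpy as np

        def gabor_features(gray, points, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
            feats = []
            for theta in thetas:
                # ksize, sigma, theta, lambda, gamma (values are illustrative)
                kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5)
                response = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
                # Magnitude of the filter response at each fiducial point.
                feats.extend(abs(response[y, x]) for (x, y) in points)
            return np.array(feats)

        # Example: gabor_features(face_gray, [(30, 40), (70, 40), (50, 80)])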

  9. The Facial Expression Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing and facial expression recognition

    Directory of Open Access Journals (Sweden)

    Beatrice eDe Gelder

    2015-10-01

    Full Text Available There are many ways to assess face perception skills. In this study, we describe a novel task battery, FEAST (Facial Expression Action Stimulus Test), developed to test recognition of identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and object identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data from a healthy sample of controls in two age groups for future users of the FEAST.

  10. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease

    Science.gov (United States)

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

    According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such. PMID:27467393

  11. The review and results of different methods for facial recognition

    Science.gov (United States)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide range of potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can be operated without the cooperation of the people under detection. Hence, facial recognition can be applied in defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method is proposed which has a more accurate facial localization effect on specific databases; (2) a statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed based multi-view method, and a learning-based method via regressing local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance under various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  12. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least-squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
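
    A small sketch of the NNLS coding step with classification by minimum class-wise reconstruction residual; the residual-based decision rule follows common sparse-representation classifiers and is our reading of the method, not the authors' code.

        # Sketch: NNLS coding over a training dictionary, then classify by the
        # class whose coefficients best reconstruct the test sample.
        import numpy as np
        from scipy.optimize import nnls

        def nnls_classify(D, labels, x):
            # D: (d, n) dictionary whose columns are training feature vectors;
            # labels: length-n class labels; x: (d,) test feature vector.
            coef, _ = nnls(D, x)              # non-negative sparse-like code
            classes = np.unique(labels)
            residuals = []
            for c in classes:
                part = np.where(labels == c, coef, 0.0)  # keep class-c coefficients
                residuals.append(np.linalg.norm(x - D @ part))
            return classes[int(np.argmin(residuals))]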

  13. Influences on Facial Emotion Recognition in Deaf Children

    Science.gov (United States)

    Sidera, Francesc; Amadó, Anna; Martínez, Laura

    2017-01-01

    This exploratory research is aimed at studying facial emotion recognition abilities in deaf children and how they relate to linguistic skills and the characteristics of deafness. A total of 166 participants (75 deaf) aged 3-8 years were administered the following tasks: facial emotion recognition, naming vocabulary and cognitive ability. The…

  14. Frame-Based Facial Expression Recognition Using Geometrical Features

    Directory of Open Access Journals (Sweden)

    Anwar Saeed

    2014-01-01

    Full Text Available To improve human-computer interaction (HCI) to be as good as human-human interaction, an efficient approach for human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; such knowledge is gained through human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we found that using eight facial points we can achieve the state-of-the-art recognition rate. However, the previous state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
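
    A small illustration of neutral-frame-free geometric features of the kind described above: inter-point distances normalized by inter-ocular distance, so no person-specific neutral expression is needed; the particular point set is an assumption.

        # Sketch: ratio-type geometric features from a handful of facial points.
        import numpy as np

        def geometric_features(pts):
            # pts: dict mapping point names to (x, y) image coordinates.
            iod = np.linalg.norm(np.subtract(pts["eye_r"], pts["eye_l"]))
            def d(a, b):
                return np.linalg.norm(np.subtract(pts[a], pts[b])) / iod
            return np.array([
                d("mouth_l", "mouth_r"),      # mouth width
                d("mouth_top", "mouth_bot"),  # mouth opening
                d("brow_l", "eye_l"),         # left brow-to-eye distance
                d("brow_r", "eye_r"),         # right brow-to-eye distance
            ])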

  15. Comparison of emotion recognition from facial expression and music.

    Science.gov (United States)

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined, as their presentation is usually very short (micro expressions), and the recognition itself does not have to be a conscious process. We assumed that recognition from facial expressions is selected over recognition of emotions communicated through music. In order to compare the success rates in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey which included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music works with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized if presented on human faces than in music, possibly because understanding facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition was probably selected because of the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive skills like attention, memory and motivation. Music pieces are probably processed differently in the brain than facial expressions and are consequently evaluated differently as relevant emotional clues.

  16. Interest and attention in facial recognition.

    Science.gov (United States)

    Burgess, Melinda C R; Weaver, George E

    2003-04-01

    When applied to facial recognition, the levels of processing paradigm has yielded consistent results: faces processed in deep conditions are recognized better than faces processed under shallow conditions. However, there are multiple explanations for this occurrence. The own-race advantage in facial recognition, the tendency to recognize faces from one's own race better than faces from another race, is also consistently shown but not clearly explained. This study was designed to test the hypothesis that the levels of processing findings in facial recognition are a result of interest and attention, not differences in processing. This hypothesis was tested for both own and other faces with 105 Caucasian general psychology students. Levels of processing was manipulated as a between-subjects variable; students were asked to answer one of four types of study questions, e.g., "deep" or "shallow" processing questions, while viewing the study faces. Students' recognition of a subset of previously presented Caucasian and African-American faces from a test-set with an equal number of distractor faces was tested. They indicated their interest in and attention to the task. The typical levels of processing effect was observed with better recognition performance in the deep conditions than in the shallow conditions for both own- and other-race faces. The typical own-race advantage was also observed regardless of level of processing condition. For both own- and other-race faces, level of processing explained a significant portion of the recognition variance above and beyond what was explained by interest in and attention to the task.

  17. Facial Emotion Recognition in Schizophrenia: The Impact of Gender

    OpenAIRE

    Erol, Almıla; Putgul, Gulperi; Kosger, Ferdi; Ersoy, Bilal

    2013-01-01

    Objective Previous studies reported gender differences for facial emotion recognition in healthy people, with women performing better than men. Few studies that examined gender differences for facial emotion recognition in schizophrenia brought out inconsistent findings. The aim of this study is to investigate gender differences for facial emotion identification and discrimination abilities in patients with schizophrenia. Methods 35 female and 35 male patients with schizophrenia, along with 3...

  18. Neurobiological mechanisms associated with facial affect recognition deficits after traumatic brain injury.

    Science.gov (United States)

    Neumann, Dawn; McDonald, Brenna C; West, John; Keiski, Michelle A; Wang, Yang

    2016-06-01

    The neurobiological mechanisms that underlie facial affect recognition deficits after traumatic brain injury (TBI) have not yet been identified. Using functional magnetic resonance imaging (fMRI), study aims were to 1) determine if there are differences in brain activation during facial affect processing in people with TBI who have facial affect recognition impairments (TBI-I) relative to people with TBI and healthy controls who do not have facial affect recognition impairments (TBI-N and HC, respectively); and 2) identify relationships between neural activity and facial affect recognition performance. A facial affect recognition screening task performed outside the scanner was used to determine group classification; TBI patients who performed greater than one standard deviation below normal performance scores were classified as TBI-I, while TBI patients with normal scores were classified as TBI-N. An fMRI facial recognition paradigm was then performed within the 3T environment. Results from 35 participants are reported (TBI-I = 11, TBI-N = 12, and HC = 12). For the fMRI task, TBI-I and TBI-N groups scored significantly lower than the HC group. Blood oxygenation level-dependent (BOLD) signals for facial affect recognition compared to a baseline condition of viewing a scrambled face, revealed lower neural activation in the right fusiform gyrus (FG) in the TBI-I group than the HC group. Right fusiform gyrus activity correlated with accuracy on the facial affect recognition tasks (both within and outside the scanner). Decreased FG activity suggests facial affect recognition deficits after TBI may be the result of impaired holistic face processing. Future directions and clinical implications are discussed.

  19. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Full Text Available Emotions play a crucial role in person-to-person interaction. In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications, especially by observing facial expressions. This paper explores a way of human-computer interaction that enables the computer to be more aware of the user’s emotional expressions. We present an approach for emotion recognition from facial expression, hand and body posture. Our model uses a multimodal emotion recognition system in which two different models are used for facial expression recognition and for hand and body posture recognition; the results of both classifiers are then combined using a third classifier which gives the resulting emotion. The multimodal system gives more accurate results than a single or bimodal system.

  20. Plastic surgery and the biometric e-passport: implications for facial recognition.

    Science.gov (United States)

    Ologunde, Rele

    2015-04-01

    This correspondence comments on the challenges that plastic, reconstructive and aesthetic surgery poses to the facial recognition algorithms employed by biometric e-passports. The limitations of facial recognition technology in patients who have undergone facial plastic surgery are also discussed. Finally, the advice of the UK HM Passport Office to people who undergo facial surgery is reported.

  1. Facial expression recognition in the wild based on multimodal texture features

    Science.gov (United States)

    Sun, Bo; Li, Liandong; Zhou, Guoyan; He, Jun

    2016-11-01

    Facial expression recognition in the wild is a very challenging task. We describe our work on static and continuous facial expression recognition in the wild. We evaluate the recognition results of gray deep features and color deep features, and explore the fusion of multimodal texture features. For continuous facial expression recognition, we design two temporal-spatial dense scale-invariant feature transform (SIFT) features and combine multimodal features to recognize expressions from image sequences. For static facial expression recognition based on video frames, we extract dense SIFT and several deep convolutional neural network (CNN) features, including those from our proposed CNN architecture. We train linear support vector machine and partial least squares classifiers for these kinds of features on the static facial expression in the wild (SFEW) and acted facial expression in the wild (AFEW) datasets, and we propose a fusion network to combine all the extracted features at the decision level. The final results are 56.32% on the SFEW test set and 50.67% on the AFEW validation set, which are much better than the baseline recognition rates of 35.96% and 36.08%.
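
    The record fuses features at the decision level with a learned fusion network; the sketch below shows only the simpler baseline of averaging per-classifier class probabilities, with placeholder feature sets.

        # Sketch: decision-level fusion by averaging classifier probabilities.
        import numpy as np
        from sklearn.svm import SVC

        def fuse_predict(classifiers, feature_sets):
            # classifiers[i] was trained on feature_sets[i]; same class order.
            probs = [clf.predict_proba(X) for clf, X in zip(classifiers, feature_sets)]
            return np.mean(probs, axis=0).argmax(axis=1)

        # clfs = [SVC(probability=True).fit(X_sift, y),
        #         SVC(probability=True).fit(X_cnn, y)]
        # y_pred = fuse_predict(clfs, [X_sift_test, X_cnn_test])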

  2. Facial Affect Recognition and Social Anxiety in Preschool Children

    Science.gov (United States)

    Ale, Chelsea M.; Chorney, Daniel B.; Brice, Chad S.; Morris, Tracy L.

    2010-01-01

    Research relating anxiety and facial affect recognition has focused mostly on school-aged children and adults and has yielded mixed results. The current study sought to demonstrate an association among behavioural inhibition and parent-reported social anxiety, shyness, social withdrawal and facial affect recognition performance in 30 children,…

  3. Cognitive penetrability and emotion recognition in human facial expressions

    Directory of Open Access Journals (Sweden)

    Francesco eMarchi

    2015-06-01

    Full Text Available Do our background beliefs, desires, and mental images influence our perceptual experience of the emotions of others? In this paper, we will address the possibility of cognitive penetration of perceptual experience in the domain of social cognition. In particular, we focus on emotion recognition based on the visual experience of facial expressions. After introducing the current debate on cognitive penetration, we review examples of perceptual adaptation for facial expressions of emotion. This evidence supports the idea that facial expressions are perceptually processed as wholes. That is, the perceptual system integrates lower-level facial features, such as eyebrow orientation, mouth angle etc., into facial compounds. We then present additional experimental evidence showing that in some cases, emotion recognition on the basis of facial expression is sensitive to and modified by the background knowledge of the subject. We argue that such sensitivity is best explained as a difference in the visual experience of the facial expression, not just as a modification of the judgment based on this experience. The difference in experience is characterized as the result of the interference of background knowledge with the perceptual integration process for faces. Thus, according to the best explanation, we have to accept cognitive penetration in some cases of emotion recognition. Finally, we highlight a recent model of social vision in order to propose a mechanism for cognitive penetration used in the face-based recognition of emotion.

  4. Relation between facial affect recognition and configural face processing in antipsychotic-free schizophrenia.

    Science.gov (United States)

    Fakra, Eric; Jouve, Elisabeth; Guillaume, Fabrice; Azorin, Jean-Michel; Blin, Olivier

    2015-03-01

    Deficit in facial affect recognition is a well-documented impairment in schizophrenia, closely connected to social outcome. This deficit could be related to psychopathology, but also to a broader dysfunction in processing facial information. In addition, patients with schizophrenia inadequately use configural information-a type of processing that relies on spatial relationships between facial features. To date, no study has specifically examined the link between symptoms and misuse of configural information in the deficit in facial affect recognition. Unmedicated schizophrenia patients (n = 30) and matched healthy controls (n = 30) performed a facial affect recognition task and a face inversion task, which tests aptitude to rely on configural information. In patients, regressions were carried out between facial affect recognition, symptom dimensions and inversion effect. Patients, compared with controls, showed a deficit in facial affect recognition and a lower inversion effect. Negative symptoms and lower inversion effect could account for 41.2% of the variance in facial affect recognition. This study confirms the presence of a deficit in facial affect recognition, and also of dysfunctional manipulation in configural information in antipsychotic-free patients. Negative symptoms and poor processing of configural information explained a substantial part of the deficient recognition of facial affect. We speculate that this deficit may be caused by several factors, among which independently stand psychopathology and failure in correctly manipulating configural information. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  5. [Recognition of facial emotions and theory of mind in schizophrenia: could the theory of mind deficit be due to the non-recognition of facial emotions?].

    Science.gov (United States)

    Besche-Richard, C; Bourrin-Tisseron, A; Olivier, M; Cuervo-Lombard, C-V; Limosin, F

    2012-06-01

    The deficits in recognition of facial emotions and attribution of mental states are now well documented in schizophrenic patients. However, the link between these two complex cognitive functions is not clearly understood, especially in schizophrenia. In this study, we attempted to test the link between the recognition of facial emotions and the capacity for mentalization, notably the attribution of beliefs, in healthy and schizophrenic participants. We hypothesized that the level of performance in recognition of facial emotions, compared to working memory and executive functioning, was the best predictor of the capacity to attribute a belief. Twenty schizophrenic participants according to DSM-IV-TR (mean age: 35.9 years, S.D. 9.07; mean education level: 11.15 years, S.D. 2.58), clinically stabilized and receiving neuroleptic or antipsychotic medication, participated in the study. They were matched on age (mean age: 36.3 years, S.D. 10.9) and educational level (mean educational level: 12.10, S.D. 2.25) with 30 healthy participants. All the participants were evaluated with a pool of tasks testing the recognition of facial emotions (the faces of Baron-Cohen), the attribution of beliefs (two first-order and two second-order stories), working memory (the digit span of the WAIS-III and the Corsi test) and executive functioning (Trail Making Test A and B, Wisconsin Card Sorting Test brief version). Comparing schizophrenic and healthy participants, our results confirmed a difference between the performances on the recognition of facial emotions and those on the attribution of beliefs. The result of the simple linear regression showed that the recognition of facial emotions, compared to the performances in working memory and executive functioning, was the best predictor of the performances on the theory of mind stories. Our results confirmed, in a sample of schizophrenic patients, the deficits in the recognition of facial emotions and in the

  6. Changing facial affect recognition in schizophrenia: Effects of training on brain dynamics

    Directory of Open Access Journals (Sweden)

    Petia Popova

    2014-01-01

    Full Text Available Deficits in social cognition including facial affect recognition and their detrimental effects on functional outcome are well established in schizophrenia. Structured training can have substantial effects on social cognitive measures including facial affect recognition. Elucidating training effects on cortical mechanisms involved in facial affect recognition may identify causes of dysfunctional facial affect recognition in schizophrenia and foster remediation strategies. In the present study, 57 schizophrenia patients were randomly assigned to (a) computer-based facial affect training that focused on affect discrimination and working memory in 20 daily 1-hour sessions, (b) similarly intense, targeted cognitive training on auditory-verbal discrimination and working memory, or (c) treatment as usual. Neuromagnetic activity was measured before and after training during a dynamic facial affect recognition task (5 s videos showing human faces gradually changing from neutral to fear or to happy expressions). Effects on 10–13 Hz (alpha) power during the transition from neutral to emotional expressions were assessed via MEG, based on previous findings that alpha power increase is related to facial affect recognition and is smaller in schizophrenia than in healthy subjects. Targeted affect training improved overt performance on the training tasks. Moreover, alpha power increase during the dynamic facial affect recognition task was larger after affect training than after treatment-as-usual, though similar to that after targeted perceptual–cognitive training, indicating somewhat nonspecific benefits. Alpha power modulation was unrelated to general neuropsychological test performance, which improved in all groups. Results suggest that specific neural processes supporting facial affect recognition, evident in oscillatory phenomena, are modifiable. This should be considered when developing remediation strategies targeting social cognition in schizophrenia.

  7. A study on facial expressions recognition

    Science.gov (United States)

    Xu, Jingjing

    2017-09-01

    In terms of communication, postures and facial expressions of feelings like happiness, anger and sadness play important roles in conveying information. With the development of technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization and pose estimation have recently been put forward. However, there are still many challenges and problems that need to be addressed. In this paper, a few techniques are summarized and analyzed, all relating to facial expression recognition and pose handling: a pose-indexed based multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning the input domain for classification, and robust statistical face frontalization.

  8. Facial emotion recognition in patients with focal and diffuse axonal injury.

    Science.gov (United States)

    Yassin, Walid; Callahan, Brandy L; Ubukata, Shiho; Sugihara, Genichi; Murai, Toshiya; Ueda, Keita

    2017-01-01

    Facial emotion recognition impairment has been well documented in patients with traumatic brain injury. Studies exploring the neural substrates involved in such deficits have implicated specific grey matter structures (e.g. orbitofrontal regions), as well as diffuse white matter damage. Our study aims to clarify whether different types of injuries (i.e. focal vs. diffuse) will lead to different types of impairments on facial emotion recognition tasks, as no study has directly compared these patients. The present study examined performance and response patterns on a facial emotion recognition task in 14 participants with diffuse axonal injury (DAI), 14 with focal injury (FI) and 22 healthy controls. We found that, overall, participants with FI and DAI performed more poorly than controls on the facial emotion recognition task. Further, we observed comparable emotion recognition performance in participants with FI and DAI, despite differences in the nature and distribution of their lesions. However, the rating response pattern between the patient groups was different. This is the first study to show that pure DAI, without gross focal lesions, can independently lead to facial emotion recognition deficits and that rating patterns differ depending on the type and location of trauma.

  9. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image as the class whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous studies and other fusion methods.
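
    A toy illustration of the score-level fusion idea: an SVM trained on (shape score, appearance score) pairs to make the same/different expression decision; the score values here are placeholders, not data from the paper.

        # Sketch: SVM fusing shape- and appearance-based matching scores.
        import numpy as np
        from sklearn.svm import SVC

        # Each row: [shape_score, appearance_score]; label 1 = same expression.
        scores = np.array([[0.9, 0.8], [0.2, 0.4], [0.7, 0.9], [0.3, 0.1]])
        same = np.array([1, 0, 1, 0])

        fusion_svm = SVC(kernel="rbf").fit(scores, same)
        print(fusion_svm.predict([[0.85, 0.75]]))  # fused decision for new scores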

  10. Cross-cultural differences and similarities underlying other-race effects for facial identity and expression.

    Science.gov (United States)

    Yan, Xiaoqian; Andrews, Timothy J; Jenkins, Rob; Young, Andrew W

    2016-01-01

    Perceptual advantages for own-race compared to other-race faces have been demonstrated for the recognition of facial identity and expression. However, these effects have not been investigated in the same study with measures that can determine the extent of cross-cultural agreement as well as differences. To address this issue, we used a photo sorting task in which Chinese and Caucasian participants were asked to sort photographs of Chinese or Caucasian faces by identity or by expression. This paradigm matched the task demands of identity and expression recognition and avoided constrained forced-choice or verbal labelling requirements. Other-race effects of comparable magnitude were found across the identity and expression tasks. Caucasian participants made more confusion errors for the identities and expressions of Chinese than Caucasian faces, while Chinese participants made more confusion errors for the identities and expressions of Caucasian than Chinese faces. However, analyses of the patterns of responses across groups of participants revealed a considerable amount of underlying cross-cultural agreement. These findings suggest that widely repeated claims that members of other cultures "all look the same" overstate the cultural differences.

  11. Meta-Analysis of the First Facial Expression Recognition Challenge

    NARCIS (Netherlands)

    Valstar, M.F.; Mehu, M.; Jiang, Bihan; Pantic, Maja; Scherer, K.

    Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability

  12. More Pronounced Deficits in Facial Emotion Recognition for Schizophrenia than Bipolar Disorder

    Science.gov (United States)

    Goghari, Vina M; Sponheim, Scott R

    2012-01-01

    Schizophrenia and bipolar disorder are typically separated in diagnostic systems. Behavioural, cognitive, and brain abnormalities associated with each disorder nonetheless overlap. We evaluated the diagnostic specificity of facial emotion recognition deficits in schizophrenia and bipolar disorder to determine whether select aspects of emotion recognition differed for the two disorders. The investigation used an experimental task that included the same facial images in an emotion recognition condition and an age recognition condition (to control for processes associated with general face recognition) in 27 schizophrenia patients, 16 bipolar I patients, and 30 controls. Schizophrenia and bipolar patients exhibited both shared and distinct aspects of facial emotion recognition deficits. Schizophrenia patients had deficits in recognizing angry facial expressions compared to healthy controls and bipolar patients. Compared to control participants, both schizophrenia and bipolar patients were more likely to mislabel facial expressions of anger as fear. Given that schizophrenia patients exhibited a deficit in emotion recognition for angry faces, which did not appear due to generalized perceptual and cognitive dysfunction, improving recognition of threat-related expression may be an important intervention target to improve social functioning in schizophrenia. PMID:23218816

  13. Non-Cooperative Facial Recognition Video Dataset Collection Plan

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Marcia L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Erikson, Rebecca L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lombardo, Nicholas J. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2013-08-31

    The Pacific Northwest National Laboratory (PNNL) will produce a non-cooperative (i.e. not posing for the camera) facial recognition video data set for research purposes to evaluate and enhance facial recognition systems technology. The aggregate data set consists of 1) videos capturing PNNL role players and public volunteers in three key operational settings, 2) photographs of the role players for enrolling in an evaluation database, and 3) ground truth data that documents when the role player is within various camera fields of view. PNNL will deliver the aggregate data set to DHS, which may then choose to make it available to other government agencies interested in evaluating and enhancing facial recognition systems. The three operational settings that will be the focus of the video collection effort include: 1) unidirectional crowd flow, 2) bi-directional crowd flow, and 3) linear and/or serpentine queues.

  14. The recognition of facial emotion expressions in Parkinson's disease.

    Science.gov (United States)

    Assogna, Francesca; Pontieri, Francesco E; Caltagirone, Carlo; Spalletta, Gianfranco

    2008-11-01

    A limited number of studies in Parkinson's Disease (PD) suggest a disturbance of recognition of facial emotion expressions. In particular, disgust recognition impairment has been reported in unmedicated and medicated PD patients. However, the results are rather inconclusive in the definition of the degree and the selectivity of emotion recognition impairment, and an associated impairment of almost all basic facial emotions in PD is also described. Few studies have investigated the relationship with neuropsychiatric and neuropsychological symptoms with mainly negative results. This inconsistency may be due to many different problems, such as emotion assessment, perception deficit, cognitive impairment, behavioral symptoms, illness severity and antiparkinsonian therapy. Here we review the clinical characteristics and neural structures involved in the recognition of specific facial emotion expressions, and the plausible role of dopamine transmission and dopamine replacement therapy in these processes. It is clear that future studies should be directed to clarify all these issues.

  15. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    Science.gov (United States)

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in-depth younger and older adults’ ability to label a virtual agent’s facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a

  16. Facial Expression Recognition of Various Internal States via Manifold Learning

    Institute of Scientific and Technical Information of China (English)

    Young-Suk Shin

    2009-01-01

    Emotions are becoming increasingly important in human-centered interaction architectures. Recognition of facial expressions, which are central to human-computer interactions, seems natural and desirable. However, facial expressions include mixed emotions that are continuous rather than discrete and vary from moment to moment. This paper presents a novel method of recognizing facial expressions of various internal states via manifold learning, to serve the aim of human-centered interaction studies. A critical review of widely used emotion models is given; then, facial expression features of various internal states are extracted via locally linear embedding (LLE). The recognition of facial expressions is created with the pleasure-displeasure and arousal-sleep dimensions in a two-dimensional model of emotion. The recognition results for various internal state expressions mapped to the embedding space via the LLE algorithm can effectively represent the structural nature of the two-dimensional model of emotion. Therefore, our research establishes that the relationship between facial expressions of various internal states can be elaborated in the two-dimensional model of emotion via the locally linear embedding algorithm.
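
    A minimal sketch of the LLE step: embedding vectorized face features into a two-dimensional space analogous to the pleasure-arousal plane; the random input array is a placeholder for real face features.

        # Sketch: locally linear embedding of face features into 2-D.
        import numpy as np
        from sklearn.manifold import LocallyLinearEmbedding

        X = np.random.rand(200, 400)   # stand-in for vectorized face images
        lle = LocallyLinearEmbedding(n_components=2, n_neighbors=10)
        coords = lle.fit_transform(X)  # one 2-D point per face
        print(coords.shape)            # (200, 2)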

  17. Multi-Layer Sparse Representation for Weighted LBP-Patches Based Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Qi Jia

    2015-03-01

    Full Text Available In this paper, a novel facial expression recognition method based on sparse representation is proposed. Most contemporary facial expression recognition systems suffer from a limited ability to handle image nuisances such as low resolution and noise. Especially for low-intensity expressions, most of the existing training methods have quite low recognition rates. Motivated by sparse representation, the problem can be solved by finding the sparse coefficients of the test image over the whole training set. Deriving an effective facial representation from original face images is a vital step for successful facial expression recognition. We evaluate a facial representation based on weighted local binary patterns, and the Fisher separation criterion is used to calculate the weights of patches. A multi-layer sparse representation framework is proposed for multi-intensity facial expression recognition, especially for low-intensity expressions and noisy expressions in reality, which is a critical problem but seldom addressed in the existing works. To this end, several experiments based on low-resolution and multi-intensity expressions are carried out. Promising results on publicly available databases demonstrate the potential of the proposed approach.

  18. Facial Expression Recognition for Traumatic Brain Injured Patients

    DEFF Research Database (Denmark)

    Ilyas, Chaudhary Muhammad Aqdus; Nasrollahi, Kamal; Moeslund, Thomas B.

    2018-01-01

    In this paper, we investigate the issues associated with facial expression recognition of Traumatic Brain Injured (TBI) patients in a realistic scenario. These patients have restricted or limited muscle movements with reduced facial expressions along with non-cooperative behavior, impaired reason

  19. Face Processing in Children with Autism Spectrum Disorder: Independent or Interactive Processing of Facial Identity and Facial Expression?

    Science.gov (United States)

    Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-01-01

    The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…

  20. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    Science.gov (United States)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions have an important role in interpersonal communications and estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and became one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower face occlusions that may be caused by beards, mustaches, scarves, etc. and to lower face motion during speech production. Preliminary experiments on benchmark datasets produced promising results outperforming previous facial expression recognition studies using partial face features, and comparable results to studies using whole face information, only slightly lower by ~2.5% compared to the best whole-face facial recognition system while using only ~1/3 of the facial region.
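
    A minimal sketch of the selection-plus-classification pipeline: sequential forward selection over geometric features followed by an SVM; the feature matrix and class labels are random placeholders.

        # Sketch: sequential forward selection of geometric features, then SVM.
        import numpy as np
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.svm import SVC

        X = np.random.rand(300, 40)        # stand-in geometric feature vectors
        y = np.random.randint(0, 5, 300)   # five expression classes

        svm = SVC(kernel="rbf")
        sfs = SequentialFeatureSelector(svm, n_features_to_select=10,
                                        direction="forward")
        X_sel = sfs.fit_transform(X, y)    # keep the 10 most useful features
        svm.fit(X_sel, y)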

  1. Novel dynamic Bayesian networks for facial action element recognition and understanding

    Science.gov (United States)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.

  2. Dissociating Face Identity and Facial Expression Processing Via Visual Adaptation

    Directory of Open Access Journals (Sweden)

    Hong Xu

    2012-10-01

    Full Text Available Face identity and facial expression are processed in two distinct neural pathways. However, most of the existing face adaptation literature studies them separately, despite the fact that they are two aspects of the same face. The current study conducted a systematic comparison between these two aspects by face adaptation, investigating how top- and bottom-half face parts contribute to the processing of face identity and facial expression. A real face (sad, “Adam”) and its two size-equivalent face parts (top- and bottom-half) were used as the adaptor in separate conditions. For face identity adaptation, the test stimuli were generated by morphing Adam's sad face with another person's sad face (“Sam”). For facial expression adaptation, the test stimuli were created by morphing Adam's sad face with his neutral face and morphing the neutral face with his happy face. In each trial, after exposure to the adaptor, observers indicated the perceived face identity or facial expression of the following test face via a key press. They were also tested in a baseline condition without adaptation. Results show that the top- and bottom-half face each generated a significant face identity aftereffect. However, the aftereffect from top-half face adaptation is much larger than that from the bottom-half face. On the contrary, only the bottom-half face generated a significant facial expression aftereffect. This dissociation of top- and bottom-half face adaptation suggests that face parts play different roles in face identity and facial expression. It thus provides further evidence for the distributed systems of face perception.

  3. Intelligent Facial Recognition Systems: Technology advancements for security applications

    Energy Technology Data Exchange (ETDEWEB)

    Beer, C.L.

    1993-07-01

    Insider problems such as theft and sabotage can occur within the security and surveillance realm of operations when unauthorized people obtain access to sensitive areas. A possible solution to these problems is a means to identify individuals (not just credentials or badges) in a given sensitive area and provide full time personnel accountability. One approach desirable at Department of Energy facilities for access control and/or personnel identification is an Intelligent Facial Recognition System (IFRS) that is non-invasive to personnel. Automatic facial recognition does not require the active participation of the enrolled subjects, unlike most other biological measurement (biometric) systems (e.g., fingerprint, hand geometry, or eye retinal scan systems). It is this feature that makes an IFRS attractive for applications other than access control such as emergency evacuation verification, screening, and personnel tracking. This paper discusses current technology that shows promising results for DOE and other security applications. A survey of research and development in facial recognition identified several companies and universities that were interested and/or involved in the area. A few advanced prototype systems were also identified. Sandia National Laboratories is currently evaluating facial recognition systems that are in the advanced prototype stage. The initial application for the evaluation is access control in a controlled environment with a constant background and with cooperative subjects. Further evaluations will be conducted in a less controlled environment, which may include a cluttered background and subjects that are not looking towards the camera. The outcome of the evaluations will help identify areas of facial recognition systems that need further development and will help to determine the effectiveness of the current systems for security applications.

  4. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    Science.gov (United States)

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  5. Facial Expression Recognition Teaching to Preschoolers with Autism

    DEFF Research Database (Denmark)

    Christinaki, Eirini; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2013-01-01

    The recognition of facial expressions is important for the perception of emotions. Understanding emotions is essential in human communication and social interaction. Children with autism have been reported to exhibit deficits in the recognition of affective expressions. Their difficulties...

  6. Mapping correspondence between facial mimicry and emotion recognition in healthy subjects.

    Science.gov (United States)

    Ponari, Marta; Conson, Massimiliano; D'Amico, Nunzia Pina; Grossi, Dario; Trojano, Luigi

    2012-12-01

    We aimed at verifying the hypothesis that facial mimicry is causally and selectively involved in emotion recognition. For this purpose, in Experiment 1, we explored the effect of tonic contraction of muscles in the upper or lower half of participants' faces on their ability to recognize emotional facial expressions. We found that the "lower" manipulation specifically impaired recognition of happiness and disgust, the "upper" manipulation impaired recognition of anger, while both manipulations affected recognition of fear; recognition of surprise and sadness was not affected by either blocking manipulation. In Experiment 2, we verified whether emotion recognition is hampered by stimuli in which an upper or lower half-face showing an emotional expression is combined with a neutral half-face. We found that the neutral lower half-face interfered with recognition of happiness and disgust, whereas the neutral upper half impaired recognition of anger; recognition of fear and sadness was impaired by both manipulations, whereas recognition of surprise was not affected by either manipulation. Taken together, the present findings support simulation models of emotion recognition and provide insight into the role of mimicry in comprehension of others' emotional facial expressions. PsycINFO Database Record (c) 2012 APA, all rights reserved.

  7. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to enhance the robustness of facial expression recognition, we propose a method of facial expression recognition based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method extracts features with the improved LTP operator and then uses an improved deep belief network as the detector and classifier of the extracted LTP features; the combination of LTP and the improved deep network is thus realized for facial expression recognition. The recognition rate on the CK+ database improved significantly.
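
    For reference, the sketch below implements the standard LTP coding that such methods build on: each of the eight neighbors is ternary-coded against the center pixel with a tolerance t, and the ternary code is split into "upper" and "lower" binary patterns whose histograms form the texture descriptor. The record's specific improvements to LTP are not reproduced.

```python
# Basic local ternary pattern (LTP): ternary-code each neighbor against the
# center with tolerance t, then split into upper/lower binary histograms.
import numpy as np

def ltp_histograms(img, t=5):
    """Return (upper, lower) 256-bin LTP histograms for a 2-D uint8 image."""
    img = img.astype(np.int32)
    h, w = img.shape
    c = img[1:-1, 1:-1]                                # center pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]       # 8 neighbors, clockwise
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbor view
        upper |= (n > c + t).astype(np.int32) << bit   # +1 codes
        lower |= (n < c - t).astype(np.int32) << bit   # -1 codes
    return (np.bincount(upper.ravel(), minlength=256),
            np.bincount(lower.ravel(), minlength=256))

rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
hu, hl = ltp_histograms(face)
feature = np.concatenate([hu, hl])   # 512-D descriptor fed to the classifier
print(feature.shape)                 # (512,)
```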

  8. Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition

    Science.gov (United States)

    Murphy, Jillian M.; Ridley, Nicole J.; Vercammen, Ans

    2015-01-01

    The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS on responses to the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction. PMID:25971602

  9. Facial emotional recognition in schizophrenia: preliminary results of the virtual reality program for facial emotional recognition

    Directory of Open Access Journals (Sweden)

    Teresa Souto

    2013-01-01

    Full Text Available BACKGROUND: Significant deficits in emotional recognition and social perception characterize patients with schizophrenia and have a direct negative impact both on interpersonal relationships and on social functioning. Virtual reality, as a methodological resource, may have high potential for assessing and training skills in people suffering from mental illness. OBJECTIVES: To present preliminary results of a facial emotional recognition assessment designed for patients with schizophrenia, using 3D avatars and virtual reality. METHODS: Presentation of 3D avatars that reproduce images developed with the FaceGen® software and integrated in a three-dimensional virtual environment. Each avatar was presented to a group of 12 patients with schizophrenia and a reference group of 12 subjects without psychiatric pathology. RESULTS: The results show that the facial emotions of happiness and anger are better recognized by both groups and that the major difficulties arise in fear and disgust recognition. Frontal alpha electroencephalography variations were found during the presentation of anger and disgust stimuli among patients with schizophrenia. DISCUSSION: The program's evaluation module can be of added value for both patient and therapist, allowing task execution in a non-anxiogenic environment that is nevertheless similar to the actual experience.

  10. Impaired recognition of facial emotions from low-spatial frequencies in Asperger syndrome.

    Science.gov (United States)

    Kätsyri, Jari; Saalasti, Satu; Tiippana, Kaisa; von Wendt, Lennart; Sams, Mikko

    2008-01-01

    The theory of 'weak central coherence' [Happe, F., & Frith, U. (2006). The weak coherence account: Detail-focused cognitive style in autism spectrum disorders. Journal of Autism and Developmental Disorders, 36(1), 5-25] implies that persons with autism spectrum disorders (ASDs) have a perceptual bias for local but not for global stimulus features. The recognition of emotional facial expressions representing different levels of detail has not been studied previously in ASDs. We analyzed the recognition of four basic emotional facial expressions (anger, disgust, fear and happiness) from low-spatial frequencies (overall global shapes without local features) in adults with an ASD. A group of 20 participants with Asperger syndrome (AS) was compared to a group of non-autistic age- and sex-matched controls. Emotion recognition was tested from static and dynamic facial expressions whose spatial frequency contents had been manipulated by low-pass filtering at two levels. The two groups recognized emotions similarly from non-filtered faces and from dynamic vs. static facial expressions. In contrast, the participants with AS were less accurate than controls in recognizing facial emotions from very low-spatial frequencies. The results suggest intact recognition of basic facial emotions and dynamic facial information, but impaired visual processing of global features in ASDs.

  11. Relative preservation of the recognition of positive facial expression "happiness" in Alzheimer disease.

    Science.gov (United States)

    Maki, Yohko; Yoshida, Hiroshi; Yamaguchi, Tomoharu; Yamaguchi, Haruyasu

    2013-01-01

    Positivity recognition bias has been reported for facial expression as well as memory and visual stimuli in aged individuals, whereas emotional facial recognition in Alzheimer disease (AD) patients is controversial, with possible involvement of confounding factors such as deficits in spatial processing of non-emotional facial features and in verbal processing to express emotions. Thus, we examined whether recognition of positive facial expressions was preserved in AD patients, by adopting a new method that eliminated the influences of these confounding factors. Sensitivity to six basic facial expressions (happiness, sadness, surprise, anger, disgust, and fear) was evaluated in 12 outpatients with mild AD, 17 aged normal controls (ANC), and 25 young normal controls (YNC). To eliminate the factors related to non-emotional facial features, averaged faces were prepared as stimuli. To eliminate the factors related to verbal processing, the participants were required to match the stimulus and answer images, avoiding the use of verbal labels. In recognition of happiness, there was no difference in sensitivity between YNC and ANC, or between ANC and AD patients. AD patients were less sensitive than ANC in recognition of sadness, surprise, and anger. ANC were less sensitive than YNC in recognition of surprise, anger, and disgust. Within the AD patient group, sensitivity to happiness was significantly higher than to the other five expressions. In AD patients, recognition of happiness was thus relatively preserved: it was the most sensitively recognized expression and resisted the influences of age and disease.

  12. Development of Facial Emotion Recognition in Childhood : Age-related Differences in a Shortened Version of the Facial Expressions of Emotion - Stimuli and Tests

    NARCIS (Netherlands)

    Coenen, Maraike; Aarnoudse, Ceciel; Huitema, Rients; Braams, Olga; Veenstra, Wencke S.

    2013-01-01

    Introduction Facial emotion recognition is essential for social interaction. The development of emotion recognition abilities is not yet entirely understood (Tonks et al. 2007). Facial emotion recognition emerges gradually, with happiness recognized earliest (Herba & Phillips, 2004). The recognition

  13. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  14. Temporal lobe structures and facial emotion recognition in schizophrenia patients and nonpsychotic relatives.

    Science.gov (United States)

    Goghari, Vina M; Macdonald, Angus W; Sponheim, Scott R

    2011-11-01

    Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions.

  15. Sex differences in facial emotion recognition across varying expression intensity levels from videos.

    Science.gov (United States)

    Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark

    2018-01-01

    There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or 'extreme' examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1sec) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations.
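
    The accuracy measure used here, Wagner's (1993) unbiased hit rate (Hu), corrects raw hits for how often a response category is used: for each emotion, Hu = hits² / (number of stimuli of that emotion × number of times that response was given). A worked sketch with a hypothetical confusion matrix:

```python
# Unbiased hit rate Hu per emotion from a stimulus x response confusion matrix.
import numpy as np

# Hypothetical counts: rows = presented emotion, columns = response given.
conf = np.array([[18, 1, 1],    # anger trials
                 [3, 15, 2],    # fear trials
                 [0, 2, 18]])   # happiness trials

hits = np.diag(conf).astype(float)
stim_totals = conf.sum(axis=1)   # how often each emotion was shown
resp_totals = conf.sum(axis=0)   # how often each response was used
hu = hits ** 2 / (stim_totals * resp_totals)
for label, h in zip(["anger", "fear", "happiness"], hu):
    print(f"{label}: Hu = {h:.3f}")  # 1.0 only with perfect, unbiased responding
```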

  16. Sex differences in facial emotion recognition across varying expression intensity levels from videos.

    Directory of Open Access Journals (Sweden)

    Tanja S H Wingenbach

    Full Text Available There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or 'extreme' examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1sec) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations.

  17. Recognition of computerized facial approximations by familiar assessors.

    Science.gov (United States)

    Richard, Adam H; Monson, Keith L

    2017-11-01

    Studies testing the effectiveness of facial approximations typically involve groups of participants who are unfamiliar with the approximated individual(s). This limitation requires the use of photograph arrays including a picture of the subject for comparison to the facial approximation. While this practice is often necessary due to the difficulty in obtaining a group of assessors who are familiar with the approximated subject, it may not accurately simulate the thought process of the target audience (friends and family members) in comparing a mental image of the approximated subject to the facial approximation. As part of a larger process to evaluate the effectiveness and best implementation of the ReFace facial approximation software program, the rare opportunity arose to conduct a recognition study using assessors who were personally acquainted with the subjects of the approximations. ReFace facial approximations were generated based on preexisting medical scans, and co-workers of the scan donors were tested on whether they could accurately pick out the approximation of their colleague from arrays of facial approximations. Results from the study demonstrated an overall poor recognition performance (i.e., where a single choice within a pool is not enforced) for individuals who were familiar with the approximated subjects. Out of 220 recognition tests only 10.5% resulted in the assessor selecting the correct approximation (or correctly choosing not to make a selection when the array consisted only of foils), an outcome that was not significantly different from the 9% random chance rate. When allowed to select multiple approximations the assessors felt resembled the target individual, the overall sensitivity for ReFace approximations was 16.0% and the overall specificity was 81.8%. These results differ markedly from the results of a previous study using assessors who were unfamiliar with the approximated subjects. Some possible explanations for this disparity in

  18. Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.

    Science.gov (United States)

    Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál

    2014-02-01

    Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding and thus lead to neurocognitive dysfunctions, such as deficits in facial affect recognition. To gain an insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event Related Spectral Perturbation (ERSP) and the Inter Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were found to be significantly weaker in patients compared with healthy controls, which is in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate a less effective functioning in the recognition process of facial features, which may contribute to a less effective social cognition in schizophrenia. PsycINFO Database Record (c) 2014 APA, all rights reserved.
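
    The phase-locking quantity analyzed here has a compact definition: ITC at a given time and frequency is the magnitude of the across-trials average of unit phase vectors, so values near 1 indicate strong phase alignment. A minimal sketch of a theta-band ITC computation, with an assumed sampling rate and placeholder data:

```python
# Theta-band inter-trial coherence (ITC): band-pass each trial, extract the
# analytic phase, and average unit phasors across trials.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                  # sampling rate in Hz (assumed)
b, a = butter(4, [4 / (fs / 2), 7 / (fs / 2)], btype="band")  # theta 4-7 Hz

rng = np.random.default_rng(0)
trials = rng.standard_normal((60, 500))     # 60 trials x 1 s of EEG (placeholder)

phases = np.angle(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
itc = np.abs(np.exp(1j * phases).mean(axis=0))   # ITC(t): 0 = random, 1 = locked

win = slice(int(0.140 * fs), int(0.200 * fs))    # the 140-200 ms window analyzed
print(f"mean theta ITC in 140-200 ms: {itc[win].mean():.3f}")
```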

  19. Violent Media Consumption and the Recognition of Dynamic Facial Expressions

    Science.gov (United States)

    Kirsh, Steven J.; Mounts, Jeffrey R. W.; Olczak, Paul V.

    2006-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph.…

  1. The Differential Effects of Thalamus and Basal Ganglia on Facial Emotion Recognition

    Science.gov (United States)

    Cheung, Crystal C. Y.; Lee, Tatia M. C.; Yip, James T. H.; King, Kristin E.; Li, Leonard S. W.

    2006-01-01

    This study examined if subcortical stroke was associated with impaired facial emotion recognition. Furthermore, the lateralization of the impairment and the differential profiles of facial emotion recognition deficits with localized thalamic or basal ganglia damage were also studied. Thirty-eight patients with subcortical strokes and 19 matched…

  2. Age, gender and puberty influence the development of facial emotion recognition

    Directory of Open Access Journals (Sweden)

    Kate eLawrence

    2015-06-01

    Full Text Available Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children’s ability to recognise simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6-16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modelled these cross-sectional data in terms of competence in accurate recognition of the six emotions studied, when the positive correlation between emotion recognition and IQ was controlled. Significant linear trends were seen in children’s ability to recognise facial expressions of happiness, surprise, fear and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there is little or no change in accuracy over the age range 6-16 years; near-adult levels of competence are established by middle-childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers.

  3. Age, gender, and puberty influence the development of facial emotion recognition.

    Science.gov (United States)

    Lawrence, Kate; Campbell, Ruth; Skuse, David

    2015-01-01

    Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children's ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6-16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modeled these cross-sectional data in terms of competence in accurate recognition of the six emotions studied, when the positive correlation between emotion recognition and IQ was controlled. Significant linear trends were seen in children's ability to recognize facial expressions of happiness, surprise, fear, and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there is little or no change in accuracy over the age range 6-16 years; near-adult levels of competence are established by middle-childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers.

  4. Altered Kinematics of Facial Emotion Expression and Emotion Recognition Deficits Are Unrelated in Parkinson's Disease.

    Science.gov (United States)

    Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo

    2016-01-01

    Altered emotional processing, including reduced emotion facial expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques. It is not known whether altered facial expression and recognition in PD are related. To investigate possible deficits in facial emotion expression and emotion recognition and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). The results of this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.
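
    The correlation step described here is straightforward to reproduce in outline. A minimal sketch using Spearman's rho, with placeholder values standing in for a kinematic measure and the Ekman test scores:

```python
# Spearman correlation between a facial-kinematic measure and Ekman scores.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
peak_velocity = rng.normal(10, 2, size=18)   # one value per patient (hypothetical)
ekman_score = rng.integers(30, 60, size=18)  # recognition scores (hypothetical)

rho, p = spearmanr(peak_velocity, ekman_score)
print(f"rho = {rho:.2f}, p = {p:.3f}")  # the study itself found no relationship
```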

  5. Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.

    Science.gov (United States)

    Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S

    2007-01-01

    People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated assessed muscles, whereas the chew manipulation activated muscles only intermittently. Further, expressing happiness generated most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.

  6. Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Hameed Siddiqi

    2013-12-01

    Full Text Available Over the last decade, human facial expressions recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish these expressions with a high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expressions recognition (HL-FER) system to tackle these problems. Unlike the previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a HL-FER to overcome the problem of high similarity among different expressions. Unlike most of the previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation rule based on datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. Weighted average recognition accuracy of 98.7% across three different datasets, using three classifiers, indicates the success of employing the HL-FER for human FER.

  7. Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

    Science.gov (United States)

    Siddiqi, Muhammad Hameed; Lee, Sungyoung; Lee, Young-Koo; Khan, Adil Mehmood; Truc, Phan Tran Ho

    2013-01-01

    Over the last decade, human facial expressions recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish these expressions with a high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expressions recognition (HL-FER) system to tackle these problems. Unlike the previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a HL-FER to overcome the problem of high similarity among different expressions. Unlike most of the previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation rule based on datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. Weighted average recognition accuracy of 98.7% across three different datasets, using three classifiers, indicates the success of employing the HL-FER for human FER. PMID:24316568
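
    A hierarchical discriminant scheme of this general kind can be sketched as a two-level classifier: a first LDA assigns a coarse expression group, and a per-group LDA then selects the specific expression. The grouping and data below are assumptions for illustration, not the HL-FER's actual design:

```python
# Two-level (hierarchical) LDA: coarse expression group, then expression.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

GROUPS = {0: ["happy", "surprise"], 1: ["anger", "disgust"], 2: ["sad", "fear"]}
LABEL_TO_GROUP = {lab: g for g, labs in GROUPS.items() for lab in labs}

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 50))                        # placeholder features
y = rng.choice([l for labs in GROUPS.values() for l in labs], size=600)
g = np.array([LABEL_TO_GROUP[label] for label in y])  # coarse group labels

top = LDA().fit(X, g)                                 # level 1: expression group
experts = {k: LDA().fit(X[g == k], y[g == k]) for k in GROUPS}  # level 2

def predict(x):
    grp = top.predict(x.reshape(1, -1))[0]
    return experts[grp].predict(x.reshape(1, -1))[0]

print(predict(X[0]))
```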

  8. [Impact of facial emotional recognition alterations in Dementia of the Alzheimer type].

    Science.gov (United States)

    Rubinstein, Wanda; Cossini, Florencia; Politis, Daniel

    2016-07-01

    Face recognition of basic emotions is independent of other deficits in dementia of the Alzheimer type. Among these deficits, there is disagreement about which emotions are more difficult to recognize. Our aim was to study the presence of alterations in the process of facial recognition of basic emotions, and to investigate whether there were differences in the recognition of each type of emotion in Alzheimer's disease. With three tests of recognition of basic facial emotions, we evaluated 29 patients who had been diagnosed with dementia of the Alzheimer type and 18 control subjects. Significant differences between groups were obtained on the tests of recognition of basic facial emotions, as well as between the individual emotions. Since the amygdala, one of the brain structures responsible for emotional reactions, is affected in the early stages of this disease, our findings become relevant to understanding how this alteration of the emotional recognition process contributes to the difficulties these patients have both in interpersonal relations and in behavioral disorders.

  9. Facial Emotion and Identity Processing Development in 5- to 15-Year-Old Children

    Directory of Open Access Journals (Sweden)

    Patrick eJohnston

    2011-02-01

    Full Text Available Most developmental studies of emotional face processing to date have focussed on infants and very young children. Additionally, studies that examine emotional face processing in older children do not distinguish development in emotion and identity face processing from more generic age-related cognitive improvement. In this study, we developed a paradigm that measures processing of facial expression in comparison to facial identity and complex visual stimuli. Three matching tasks were developed (i.e., facial emotion matching, facial identity matching and butterfly wing matching) to include stimuli of a similar level of discriminability and to be equated for task difficulty in earlier samples of young adults. Ninety-two children aged 5 to 15 years and a new group of 24 young adults completed these three matching tasks. Young children were highly adept at the butterfly wing task relative to their performance on both face-related tasks. More importantly, in older children, development of facial emotion discrimination ability lagged behind that of facial identity discrimination.

  10. Impaired Facial Expression Recognition in Children with Temporal Lobe Epilepsy: Impact of Early Seizure Onset on Fear Recognition

    Science.gov (United States)

    Golouboff, Nathalie; Fiori, Nicole; Delalande, Olivier; Fohlen, Martine; Dellatolas, Georges; Jambaque, Isabelle

    2008-01-01

    The amygdala has been implicated in the recognition of facial emotions, especially fearful expressions, in adults with early-onset right temporal lobe epilepsy (TLE). The present study investigates the recognition of facial emotions in children and adolescents, 8-16 years old, with epilepsy. Twenty-nine subjects had TLE (13 right, 16 left) and…

  11. Facial Expression Recognition Deficits and Faulty Learning: Implications for Theoretical Models and Clinical Applications

    Science.gov (United States)

    Sheaffer, Beverly L.; Golden, Jeannie A.; Averett, Paige

    2009-01-01

    The ability to recognize facial expressions of emotion is integral in social interaction. Although the importance of facial expression recognition is reflected in increased research interest as well as in popular culture, clinicians may know little about this topic. The purpose of this article is to discuss facial expression recognition literature…

  12. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    Science.gov (United States)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.
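
    A common way to combine such modalities is late fusion: each subsystem outputs class probabilities, and the fused posterior is a weighted sum (or product) of the two. A minimal sketch with made-up probability vectors and weights, not the system's actual combination rule:

```python
# Late fusion of visual-facial and keyboard-stroke emotion classifiers.
import numpy as np

EMOTIONS = ["happiness", "sadness", "surprise", "anger", "disgust", "neutral"]

p_face = np.array([0.40, 0.10, 0.15, 0.15, 0.10, 0.10])  # visual-facial posteriors
p_keys = np.array([0.25, 0.05, 0.10, 0.35, 0.10, 0.15])  # keyboard-stroke posteriors

w_face, w_keys = 0.6, 0.4                   # modality weights (assumed)
fused = w_face * p_face + w_keys * p_keys   # weighted-sum fusion
fused /= fused.sum()                        # renormalize to a distribution

print(EMOTIONS[int(np.argmax(fused))], fused.round(3))
```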

  13. Expression intensity, gender and facial emotion recognition: Women recognize only subtle facial emotions better than men.

    Science.gov (United States)

    Hoffmann, Holger; Kessler, Henrik; Eppel, Tobias; Rukavina, Stefanie; Traue, Harald C

    2010-11-01

    Two experiments were conducted in order to investigate the effect of expression intensity on gender differences in the recognition of facial emotions. The first experiment compared recognition accuracy between female and male participants when emotional faces were shown with full-blown (100% emotional content) or subtle expressiveness (50%). In a second experiment more finely grained analyses were applied in order to measure recognition accuracy as a function of expression intensity (40%-100%). The results show that although women were more accurate than men in recognizing subtle facial displays of emotion, there was no difference between male and female participants when recognizing highly expressive stimuli. Copyright © 2010 Elsevier B.V. All rights reserved.

  14. Facial Expression Recognition By Using Fisherface Methode With Backpropagation Neural Network

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin

    2011-01-01

    Full Text Available In daily life, especially in interpersonal communication, the face is often used for expression. Facial expressions give information about the emotional state of the person. A facial expression is one of the behavioral characteristics. The components of a basic facial expression analysis system are face detection, face data extraction, and facial expression recognition. The Fisherface method with a backpropagation artificial neural network approach can be used for facial expression recognition. This method consists of a two-stage process, namely PCA and LDA: PCA is used to reduce the dimension, while LDA is used to extract the features of facial expressions. The system was tested with two databases, namely the JAFFE database and the MUG database. The system correctly classified expressions with an accuracy of 86.85% (25 false positives) for image type I of JAFFE, 89.20% (15 false positives) for image type II, and 87.79% (16 false positives) for image type III; on the MUG images, accuracy was 98.09% (5 false positives). Keywords: facial expression, fisherface method, PCA, LDA, backpropagation neural network.
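
    The pipeline described above (PCA for dimensionality reduction, LDA for discriminative expression features, then a backpropagation network as classifier) chains together directly in scikit-learn. A minimal sketch on placeholder data; image size, component counts, and layer width are assumptions, not the paper's settings:

```python
# Fisherface-style pipeline: PCA -> LDA -> backpropagation (MLP) classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.random((210, 64 * 64))      # flattened face images (placeholder)
y = rng.integers(0, 7, size=210)    # 7 expression classes

fisher_mlp = make_pipeline(
    PCA(n_components=60),                         # reduce dimension first
    LinearDiscriminantAnalysis(n_components=6),   # at most (classes - 1) axes
    MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000, random_state=0),
)
fisher_mlp.fit(X, y)
print(fisher_mlp.score(X, y))       # training accuracy on placeholder data
```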

  15. Facial expressions recognition with an emotion expressive robotic head

    Science.gov (United States)

    Doroftei, I.; Adascalitei, F.; Lefeber, D.; Vanderborght, B.; Doroftei, I. A.

    2016-08-01

    The purpose of this study is to present the preliminary steps in facial expression recognition with a new version of an expressive social robotic head. In a first phase, our main goal was to reach a minimum level of emotional expressiveness, in order to obtain nonverbal communication between the robot and humans, by building six basic facial expressions. To evaluate the facial expressions, the robot was used in some preliminary user studies among children and adults.

  16. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    Science.gov (United States)

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as face appearance is an important step toward the realization of perceptual user interfaces with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
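
    As a point of reference for the convolutional approach, below is a minimal expression-classification CNN in PyTorch. The architecture, input size, and two-class (smile vs. non-smile) setup are illustrative assumptions; the paper's rule-based modules and saliency-based voting are not reproduced:

```python
# Minimal convolutional network for facial expression classification.
import torch
import torch.nn as nn

class ExpressionCNN(nn.Module):
    def __init__(self, n_classes=2):             # e.g., smile vs. non-smile
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                         # x: (batch, 1, 64, 64) faces
        return self.classifier(self.features(x).flatten(1))

model = ExpressionCNN()
logits = model(torch.randn(8, 1, 64, 64))         # batch of 8 placeholder faces
print(logits.shape)                               # torch.Size([8, 2])
```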

  17. Deficits in recognition, identification, and discrimination of facial emotions in patients with bipolar disorder.

    Science.gov (United States)

    Benito, Adolfo; Lahera, Guillermo; Herrera, Sara; Muncharaz, Ramón; Benito, Guillermo; Fernández-Liria, Alberto; Montes, José Manuel

    2013-01-01

    To analyze the recognition, identification, and discrimination of facial emotions in a sample of outpatients with bipolar disorder (BD). Forty-four outpatients with diagnosis of BD and 48 matched control subjects were selected. Both groups were assessed with tests for recognition (Emotion Recognition-40 - ER40), identification (Facial Emotion Identification Test - FEIT), and discrimination (Facial Emotion Discrimination Test - FEDT) of facial emotions, as well as a theory of mind (ToM) verbal test (Hinting Task). Differences between groups were analyzed, controlling the influence of mild depressive and manic symptoms. Patients with BD scored significantly lower than controls on recognition (ER40), identification (FEIT), and discrimination (FEDT) of emotions. Regarding the verbal measure of ToM, a lower score was also observed in patients compared to controls. Patients with mild syndromal depressive symptoms obtained outcomes similar to patients in euthymia. A significant correlation between FEDT scores and global functioning (measured by the Functioning Assessment Short Test, FAST) was found. These results suggest that, even in euthymia, patients with BD experience deficits in recognition, identification, and discrimination of facial emotions, with potential functional implications.

  18. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    Science.gov (United States)

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random effects meta-analysis was conducted to estimate effect sizes at population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. Robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
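
    The aggregation behind such pooled estimates is standard: transform each study's r to Fisher's z, weight by inverse variance, estimate between-study variance (here via DerSimonian-Laird), and back-transform. A sketch with placeholder effect sizes, not the paper's data:

```python
# Random-effects pooling of correlation effect sizes via Fisher's z.
import numpy as np

r = np.array([0.45, 0.30, 0.55, 0.20, 0.40])  # per-study r (hypothetical)
n = np.array([80, 120, 60, 150, 90])          # per-study sample sizes (hypothetical)

z = np.arctanh(r)        # Fisher z transform
v = 1.0 / (n - 3)        # within-study variance of z
w = 1.0 / v              # fixed-effect weights
z_fixed = np.sum(w * z) / np.sum(w)

q = np.sum(w * (z - z_fixed) ** 2)            # heterogeneity statistic Q
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (len(r) - 1)) / c)       # DerSimonian-Laird tau^2

w_re = 1.0 / (v + tau2)                       # random-effects weights
z_re = np.sum(w_re * z) / np.sum(w_re)
print(f"pooled r (random effects) = {np.tanh(z_re):.3f}")
```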

  1. Facial emotion recognition in Chinese with schizophrenia at early and chronic stages of illness.

    Science.gov (United States)

    Leung, Joey Shuk-Yan; Lee, Tatia M C; Lee, Chi-Chiu

    2011-12-30

    Deficits in facial emotion recognition have been recognised in Chinese patients diagnosed with schizophrenia. This study examined the relationship between chronicity of illness and performance in facial emotion recognition in Chinese with schizophrenia. There were altogether four groups of subjects matched for age and gender composition. The first and second groups comprised medically stable outpatients with first-episode schizophrenia (n=50) and their healthy controls (n=26). The third and fourth groups were patients with chronic schizophrenic illness (n=51) and their controls (n=28). The ability to recognise the six prototypical facial emotions was examined using locally validated coloured photographs from the Japanese and Caucasian Facial Expressions of Emotion. Chinese patients with schizophrenia, in both the first-episode and chronic stages, performed significantly worse than their control counterparts on overall facial emotion recognition. The deficit in recognising facial emotion did not appear to have worsened over the course of disease progression, suggesting that impaired recognition of facial emotion is a rather stable trait of the illness. The emotion-specific deficit may have implications for understanding the social difficulties in schizophrenia. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    Science.gov (United States)

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.
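
    The mediation logic in this record can be sketched directly: the indirect path is the product of the predictor-to-mediator effect (a) and the mediator-to-outcome effect controlling for the predictor (b), commonly tested with a bootstrap confidence interval. A minimal sketch with simulated placeholder data; the variable names are illustrative, not the study's actual measures:

```python
# Simple mediation (X -> M -> Y): bootstrap the indirect effect a*b.
import numpy as np

def ols_slope(x, y):
    """Slope of y on x (with intercept), via least squares."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

rng = np.random.default_rng(0)
n = 200
gmv = rng.normal(size=n)                        # e.g., STS gray matter volume
feelings = 0.5 * gmv + rng.normal(size=n)       # internal feelings (mediator)
decoding = 0.4 * feelings + rng.normal(size=n)  # expression decoding (outcome)

boot = []
for _ in range(2000):
    i = rng.integers(0, n, size=n)              # resample participants
    a = ols_slope(gmv[i], feelings[i])          # X -> M
    X = np.column_stack([np.ones(n), gmv[i], feelings[i]])
    b = np.linalg.lstsq(X, decoding[i], rcond=None)[0][2]  # M -> Y given X
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes 0 -> mediation
```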

  3. Dissociation between facial and bodily expressions in emotion recognition: A case study.

    Science.gov (United States)

    Leiva, Samanta; Margulis, Laura; Micciulli, Andrea; Ferreres, Aldo

    2017-12-21

    Existing single-case studies have reported deficits in recognizing basic emotions through facial expression with unaffected performance on body expressions, but not the opposite pattern. The aim of this paper is to present a case study with impaired emotion recognition through body expressions and intact performance with facial expressions. In this single-case study we assessed a 30-year-old patient with autism spectrum disorder, without intellectual disability, and a healthy control group (n = 30) with four tasks of basic and complex emotion recognition through face and body movements, and two non-emotional control tasks. To analyze the dissociation between facial and body expressions, we used Crawford and Garthwaite's operational criteria, and we compared the patient and the control group performance with a modified one-tailed t-test designed specifically for single-case studies. There were no statistically significant differences between the patient's and the control group's performances on the non-emotional body movement task or the facial perception task. For both kinds of emotions (basic and complex) when the patient's performance was compared to the control group's, statistically significant differences were only observed for the recognition of body expressions. There were no significant differences between the patient's and the control group's correct answers for emotional facial stimuli. Our results showed a profile of impaired emotion recognition through body expressions and intact performance with facial expressions. This is the first case study that describes the existence of this kind of dissociation pattern between facial and body expressions of basic and complex emotions.
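
    The "modified one-tailed t-test designed specifically for single-case studies" is usually attributed to Crawford and colleagues. A minimal sketch of that comparison, assuming the Crawford and Howell (1998) form of the statistic, with made-up scores:

```python
import numpy as np
from scipy import stats

def single_case_t(case_score, control_scores):
    """Modified t-test comparing one patient against a small control sample
    (Crawford & Howell, 1998): treats the controls as a sample, not a population."""
    controls = np.asarray(control_scores, dtype=float)
    n = controls.size
    mean, sd = controls.mean(), controls.std(ddof=1)
    t = (case_score - mean) / (sd * np.sqrt((n + 1) / n))
    p = stats.t.cdf(t, df=n - 1)  # one-tailed: probability of a score this low
    return t, p

# Hypothetical accuracies: one patient vs. 30 controls on an emotion task
rng = np.random.default_rng(0)
controls = rng.normal(85, 5, size=30)
t, p = single_case_t(62.0, controls)
print(f"t = {t:.2f}, one-tailed p = {p:.4f}")
```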

  4. Oxytocin Promotes Facial Emotion Recognition and Amygdala Reactivity in Adults with Asperger Syndrome

    Science.gov (United States)

    Domes, Gregor; Kumbier, Ekkehardt; Heinrichs, Markus; Herpertz, Sabine C

    2014-01-01

    The neuropeptide oxytocin has recently been shown to enhance eye gaze and emotion recognition in healthy men. Here, we report a randomized double-blind, placebo-controlled trial that examined the neural and behavioral effects of a single dose of intranasal oxytocin on emotion recognition in individuals with Asperger syndrome (AS), a clinical condition characterized by impaired eye gaze and facial emotion recognition. Using functional magnetic resonance imaging, we examined whether oxytocin would enhance emotion recognition from facial sections of the eye vs the mouth region and modulate regional activity in brain areas associated with face perception in both adults with AS, and a neurotypical control group. Intranasal administration of the neuropeptide oxytocin improved performance in a facial emotion recognition task in individuals with AS. This was linked to increased left amygdala reactivity in response to facial stimuli and increased activity in the neural network involved in social cognition. Our data suggest that the amygdala, together with functionally associated cortical areas mediate the positive effect of oxytocin on social cognitive functioning in AS. PMID:24067301

  5. Identity Restored: Nesmin's Forensic Facial Reconstruction in Context

    Directory of Open Access Journals (Sweden)

    Branislav Anđelković

    2016-03-01

    Full Text Available A wide range of archaeological human remains stay, for the most part, anonymous and are consequently treated as objects of analysis, not as dead people. With the growing availability of medical imaging and rapidly developing computer technology, 3D digital facial reconstruction, as a noninvasive form of study, offers a successful method of recreating faces from mummified human remains. Forensic facial reconstruction has been utilized for various purposes in scientific investigation, including restoring the physical appearance of the people of ancient civilizations, which is an important aspect of their individual identity. Restoring the identity of the Belgrade mummy started in 1991. Along with the absolute dating, gender, age, name, rank and provenance, we also established his genealogy. The owner of Cairo stela 22053, discovered at Akhmim in 1885, and the owner of the Belgrade coffin, purchased in Luxor in 1888, in which the mummy rests, have been identified as one and the same person. Forensic facial reconstruction was used to reproduce, with the highest possible degree of accuracy, the facial appearance of the mummy Nesmin, ca. 300 B.C., a priest from Akhmim, when he was alive.

  6. Common impairments of emotional facial expression recognition in schizophrenia across French and Japanese cultures

    Directory of Open Access Journals (Sweden)

    Takashi eOkada

    2015-07-01

    Full Text Available To address whether the recognition of emotional facial expressions is impaired in schizophrenia across different cultures, patients with schizophrenia and age-matched normal controls in France and Japan were tested with a labeling task of emotional facial expressions and a matching task of unfamiliar faces. Schizophrenia patients in both France and Japan were less accurate in labeling fearful facial expressions. There was no correlation between the scores of facial emotion labeling and face matching. These results suggest that the impaired recognition of emotional facial expressions in schizophrenia is common across different cultures.

  7. Facial Emotion Recognition in Child Psychiatry: A Systematic Review

    Science.gov (United States)

    Collin, Lisa; Bindra, Jasmeet; Raju, Monika; Gillberg, Christopher; Minnis, Helen

    2013-01-01

    This review focuses on facial affect (emotion) recognition in children and adolescents with psychiatric disorders other than autism. A systematic search, using PRISMA guidelines, was conducted to identify original articles published prior to October 2011 pertaining to face recognition tasks in case-control studies. Used in the qualitative…

  8. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    Science.gov (United States)

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Recognition of emotional facial expressions in adolescents with anorexia nervosa and adolescents with major depression.

    Science.gov (United States)

    Sfärlea, Anca; Greimel, Ellen; Platt, Belinda; Dieler, Alica C; Schulte-Körne, Gerd

    2018-04-01

    Anorexia nervosa (AN) has been suggested to be associated with abnormalities in facial emotion recognition. Most prior studies on facial emotion recognition in AN have investigated adult samples, despite the onset of AN being particularly often during adolescence. In addition, few studies have examined whether impairments in facial emotion recognition are specific to AN or might be explained by frequent comorbid conditions that are also associated with deficits in emotion recognition, such as depression. The present study addressed these gaps by investigating recognition of emotional facial expressions in adolescent girls with AN (n = 26) compared to girls with major depression (MD; n = 26) and healthy girls (HC; n = 37). Participants completed one task requiring identification of emotions (happy, sad, afraid, angry, neutral) in faces and two control tasks. Neither of the clinical groups showed impairments. The AN group was more accurate than the HC group in recognising afraid facial expressions and more accurate than the MD group in recognising happy, sad, and afraid expressions. Misclassification analyses identified subtle group differences in the types of errors made. The results suggest that the deficits in facial emotion recognition found in adult AN samples are not present in adolescent patients. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Instructions to mimic improve facial emotion recognition in people with sub-clinical autism traits.

    Science.gov (United States)

    Lewis, Michael B; Dunn, Emily

    2017-11-01

    People tend to mimic the facial expression of others. It has been suggested that this helps provide social glue between affiliated people but it could also aid recognition of emotions through embodied cognition. The degree of facial mimicry, however, varies between individuals and is limited in people with autism spectrum conditions (ASC). The present study sought to investigate the effect of promoting facial mimicry during a facial-emotion-recognition test. In two experiments, participants without an ASC diagnosis had their autism quotient (AQ) measured. Following a baseline test, they did an emotion-recognition test again but half of the participants were asked to mimic the target face they saw prior to making their responses. Mimicry improved emotion recognition, and further analysis revealed that the largest improvement was for participants who had higher scores on the autism traits. In fact, recognition performance was best overall for people who had high AQ scores but also received the instruction to mimic. Implications for people with ASC are explored.

  11. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    Science.gov (United States)

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
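
    The voxel-wise within-network connectivity measure used here can be approximated, for illustration, as each voxel's mean correlation with every other voxel of the face network. A toy numpy sketch under that assumption (not the authors' pipeline; the data are random):

```python
import numpy as np

def within_network_connectivity(ts):
    """ts: (n_voxels, n_timepoints) resting-state time series for all
    face-network voxels. Returns each voxel's mean correlation with the
    rest of the network, a simple proxy for voxel-wise WNC."""
    corr = np.corrcoef(ts)           # voxel-by-voxel correlation matrix
    np.fill_diagonal(corr, np.nan)   # drop self-correlations
    return np.nanmean(corr, axis=1)

ts = np.random.default_rng(1).standard_normal((500, 200))  # toy data
wnc = within_network_connectivity(ts)
print(wnc.shape)  # one WNC value per voxel
```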

  12. Oxytocin improves facial emotion recognition in young adults with antisocial personality disorder.

    Science.gov (United States)

    Timmermann, Marion; Jeung, Haang; Schmitt, Ruth; Boll, Sabrina; Freitag, Christine M; Bertsch, Katja; Herpertz, Sabine C

    2017-11-01

    Deficient facial emotion recognition has been suggested to underlie aggression in individuals with antisocial personality disorder (ASPD). As the neuropeptide oxytocin (OT) has been shown to improve facial emotion recognition, it might also exert beneficial effects in individuals who inflict such harm on society. In a double-blind, randomized, placebo-controlled crossover trial, 22 individuals with ASPD and 29 healthy control (HC) subjects (matched for age, sex, intelligence, and education) were intranasally administered either OT (24 IU) or a placebo 45 min before participating in an emotion classification paradigm with fearful, angry, and happy faces. We assessed the number of correct classifications and reaction times as indicators of emotion recognition ability. Significant group×substance×emotion interactions were found in correct classifications and reaction times. Compared to HC, individuals with ASPD showed deficits in recognizing fearful and happy faces; these group differences were no longer observable under OT. Additionally, reaction times for angry faces differed significantly between the ASPD and HC groups in the placebo condition. This effect was mainly driven by longer reaction times in HC subjects after placebo administration compared to OT administration, while individuals with ASPD descriptively showed the opposite response pattern. Our data indicate an improvement in the recognition of fearful and happy facial expressions under OT in young adults with ASPD. The increased recognition of facial fear is of particular importance, since the correct perception of distress signals in others is thought to inhibit aggression. Beneficial effects of OT might be further mediated by improved recognition of facial happiness, probably reflecting increased social reward responsiveness. Copyright © 2017. Published by Elsevier Ltd.

  13. People with chronic facial pain perform worse than controls at a facial emotion recognition task, but it is not all about the emotion.

    Science.gov (United States)

    von Piekartz, H; Wallwork, S B; Mohr, G; Butler, D S; Moseley, G L

    2015-04-01

    Alexithymia, or a lack of emotional awareness, is prevalent in some chronic pain conditions and has been linked to poor recognition of others' emotions. Recognising others' emotions from their facial expression involves both emotional and motor processing, but the possible contribution of motor disruption has not been considered. It is possible that poor performance on emotional recognition tasks could reflect problems with emotional processing, motor processing or both. We hypothesised that people with chronic facial pain would be less accurate in recognising others' emotions from facial expressions, would be less accurate in a motor imagery task involving the face, and that performance on both tasks would be positively related. A convenience sample of 19 people (15 females) with chronic facial pain and 19 gender-matched controls participated. They undertook two tasks; in the first task, they identified the facial emotion presented in a photograph. In the second, they identified whether the person in the image had a facial feature pointed towards their left or right side, a well-recognised paradigm to induce implicit motor imagery. People with chronic facial pain performed worse than controls at both the Facially Expressed Emotion Labelling (FEEL) emotion recognition task and the left/right facial expression task, and performance covaried within participants. We propose that disrupted motor processing may underpin or at least contribute to the difficulty that facial pain patients have in emotion recognition and that further research that tests this proposal is warranted. © 2014 John Wiley & Sons Ltd.

  14. Obligatory and facultative brain regions for voice-identity recognition

    Science.gov (United States)

    Roswandowitz, Claudia; Kappes, Claudia; Obrig, Hellmuth; von Kriegstein, Katharina

    2018-01-01

    Recognizing the identity of others by their voice is an important skill for social interactions. To date, it remains controversial which parts of the brain are critical structures for this skill. Based on neuroimaging findings, standard models of person-identity recognition suggest that the right temporal lobe is the hub for voice-identity recognition. Neuropsychological case studies, however, reported selective deficits of voice-identity recognition in patients predominantly with right inferior parietal lobe lesions. Here, our aim was to work towards resolving the discrepancy between neuroimaging studies and neuropsychological case studies to find out which brain structures are critical for voice-identity recognition in humans. We performed a voxel-based lesion-behaviour mapping study in a cohort of patients (n = 58) with unilateral focal brain lesions. The study included a comprehensive behavioural test battery on voice-identity recognition of newly learned (voice-name, voice-face association learning) and familiar voices (famous voice recognition) as well as visual (face-identity recognition) and acoustic control tests (vocal-pitch and vocal-timbre discrimination). The study also comprised clinically established tests (neuropsychological assessment, audiometry) and high-resolution structural brain images. The three key findings were: (i) a strong association between voice-identity recognition performance and right posterior/mid temporal and right inferior parietal lobe lesions; (ii) a selective association between right posterior/mid temporal lobe lesions and voice-identity recognition performance when face-identity recognition performance was factored out; and (iii) an association of right inferior parietal lobe lesions with tasks requiring the association between voices and faces but not voices and names. The results imply that the right posterior/mid temporal lobe is an obligatory structure for voice-identity recognition, while the right inferior parietal lobe is a facultative structure that supports voice-identity recognition when voices must be associated with faces.

  15. Development of Emotional Facial Recognition in Late Childhood and Adolescence

    Science.gov (United States)

    Thomas, Laura A.; De Bellis, Michael D.; Graham, Reiko; Labar, Kevin S.

    2007-01-01

    The ability to interpret emotions in facial expressions is crucial for social functioning across the lifespan. Facial expression recognition develops rapidly during infancy and improves with age during the preschool years. However, the developmental trajectory from late childhood to adulthood is less clear. We tested older children, adolescents…

  16. The Change in Facial Emotion Recognition Ability in Inpatients with Treatment Resistant Schizophrenia After Electroconvulsive Therapy.

    Science.gov (United States)

    Dalkıran, Mihriban; Tasdemir, Akif; Salihoglu, Tamer; Emul, Murat; Duran, Alaattin; Ugur, Mufit; Yavuz, Ruhi

    2017-09-01

    People with schizophrenia have impairments in emotion recognition along with other social cognitive deficits. In the current study, we aimed to investigate the immediate benefits of ECT on facial emotion recognition ability. Thirty-two treatment-resistant patients with schizophrenia who had been indicated for ECT enrolled in the study. Facial emotion stimuli were a set of 56 photographs that depicted seven basic emotions: sadness, anger, happiness, disgust, surprise, fear, and neutral faces. The average age of the participants was 33.4 ± 10.5 years. The rate of recognizing the disgusted facial expression increased significantly after ECT, while no significant change was found for the other facial expressions (p > 0.05). After the ECT, response times for the fearful and happy facial expressions were significantly shorter. Facial emotion recognition ability is an important social cognitive skill for social harmony, proper relations and independent living. At the least, the ECT sessions do not seem to affect facial emotion recognition ability negatively, and they seem to improve identification of the disgusted facial expression, which is related to dopamine-enriched regions in the brain.

  17. Theory of mind as a mediator of reasoning and facial emotion recognition: findings from 200 healthy people.

    Science.gov (United States)

    Lee, Seul Bee; Koo, Se Jun; Song, Yun Young; Lee, Mi Kyung; Jeong, Yu-Jin; Kwon, Catherine; Park, Kyoung Ri; Park, Jin Young; Kang, Jee In; Lee, Eun; An, Suk Kyoon

    2014-04-01

    It was proposed that the ability to recognize facial emotions is closely related to complex neurocognitive processes and/or skills related to theory of mind (ToM). This study examines whether ToM skills mediate the relationship between higher neurocognitive functions, such as reasoning ability, and facial emotion recognition. A total of 200 healthy subjects (101 males, 99 females) were recruited. Facial emotion recognition was measured through the use of 64 facial emotional stimuli that were selected from photographs from the Korean Facial Expressions of Emotion (KOFEE). Participants were requested to complete the Theory of Mind Picture Stories task and Standard Progressive Matrices (SPM). Multiple regression analysis showed that the SPM score (t=3.19, p=0.002, β=0.22) and the overall ToM score (t=2.56, p=0.011, β=0.18) were primarily associated with a total hit rate (%) of the emotion recognition task. Hierarchical regression analysis through a three-step mediation model showed that ToM may partially mediate the relationship between SPM and performance on facial emotion recognition. These findings imply that higher neurocognitive functioning, inclusive of reasoning, may not only directly contribute towards facial emotion recognition but also influence ToM, which in turn, influences facial emotion recognition. These findings are particularly true for healthy young people.
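
    The three-step mediation model reported here follows the classic hierarchical-regression logic: the predictor must relate to the outcome, the predictor must relate to the mediator, and the direct effect should shrink once the mediator enters the model. A minimal statsmodels sketch with simulated data (not the study's data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
reasoning = rng.standard_normal(n)                              # SPM score (X)
tom = 0.5 * reasoning + rng.standard_normal(n)                  # ToM score (M)
emotion = 0.3 * reasoning + 0.4 * tom + rng.standard_normal(n)  # hit rate (Y)

def ols(y, *xs):
    # Ordinary least squares with an intercept term
    return sm.OLS(y, sm.add_constant(np.column_stack(xs))).fit()

step1 = ols(emotion, reasoning)       # X -> Y (total effect)
step2 = ols(tom, reasoning)           # X -> M
step3 = ols(emotion, reasoning, tom)  # X + M -> Y (direct effect)
print("total effect: ", round(step1.params[1], 3))
print("X -> mediator:", round(step2.params[1], 3))
print("direct effect:", round(step3.params[1], 3))  # smaller => partial mediation
```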

  18. Recognition of facial expressions and prosodic cues with graded emotional intensities in adults with Asperger syndrome.

    Science.gov (United States)

    Doi, Hirokazu; Fujisawa, Takashi X; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-09-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group difference in facial expression recognition was prominent for stimuli with low or intermediate emotional intensities. In contrast to this, the individuals with Asperger syndrome exhibited lower recognition accuracy than typically-developed controls mainly for emotional prosody with high emotional intensity. In facial expression recognition, Asperger and control groups showed an inversion effect for all categories. The magnitude of this effect was less in the Asperger group for angry and sad expressions, presumably attributable to reduced recruitment of the configural mode of face processing. The individuals with Asperger syndrome outperformed the control participants in recognizing inverted sad expressions, indicating enhanced processing of local facial information representing sad emotion. These results suggest that the adults with Asperger syndrome rely on modality-specific strategies in emotion recognition from facial expression and prosodic information.

  19. Brain correlates of musical and facial emotion recognition: evidence from the dementias.

    Science.gov (United States)

    Hsieh, S; Hornberger, M; Piguet, O; Hodges, J R

    2012-07-01

    The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided brain atrophy in areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also experience difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from classical or film repertoire but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), Alzheimer's disease (n = 12) and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities: unfamiliar musical tunes and unknown faces as well as volumetric MRI. Patients with SD were most impaired with the recognition of facial and musical emotions, particularly for negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotions by facial and musical stimuli but also indicate that the recognition of emotions from music draws upon brain regions that are associated with semantics in language. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. [Recognition of facial expression of emotions in Parkinson's disease: a theoretical review].

    Science.gov (United States)

    Alonso-Recio, L; Serrano-Rodriguez, J M; Carvajal-Molina, F; Loeches-Alonso, A; Martin-Plasencia, P

    2012-04-16

    Emotional facial expression is a basic guide during social interaction and, therefore, alterations in its expression or recognition impose important limitations on communication. We examine facial expression recognition abilities and their possible impairment in Parkinson's disease. First, we review the studies on this topic, which have not found entirely consistent results. Second, we analyze the factors that may explain these discrepancies and, in particular, as a third objective, we consider the relationship between emotional recognition problems and the cognitive impairment associated with the disease. Finally, we propose alternative strategies for the development of studies that could clarify the state of these abilities in Parkinson's disease. Most studies suggest deficits in facial expression recognition, especially for expressions with negative emotional content. However, it is possible that these alterations are related to those that also appear in the course of the disease in other perceptual and executive processes. To make progress on this issue, we consider it necessary to design emotion recognition studies that differentially implicate executive or visuospatial processes, and/or that contrast cognitive abilities using facial expressions and non-emotional stimuli. Specifying the status of these abilities, as well as increasing our knowledge of the functional consequences of the brain damage characteristic of the disease, may indicate whether special attention should be paid to their rehabilitation within intervention programs.

  1. Examining speed of processing of facial emotion recognition in individuals at ultra-high risk for psychosis

    DEFF Research Database (Denmark)

    Glenthøj, Louise Birkedal; Fagerlund, Birgitte; Bak, Nikolaj

    2018-01-01

    Emotion recognition is an aspect of social cognition that may be a key predictor of functioning and transition to psychosis in individuals at ultra-high risk (UHR) for psychosis (Allott et al., 2014). UHR individuals exhibit deficits in accurately identifying facial emotions (van Donkersgoed et al., 2015), but other potential anomalies in facial emotion recognition are largely unexplored. This study aimed to extend current knowledge of emotion recognition deficits in UHR individuals by examining: 1) whether UHR individuals display significantly slower facial emotion recognition than healthy controls, 2) whether an association between emotion recognition accuracy and emotion recognition latency is present in UHR, and 3) the relationships between emotion recognition accuracy, neurocognition and psychopathology in UHR.

  2. The look of fear and anger: facial maturity modulates recognition of fearful and angry expressions.

    Science.gov (United States)

    Sacco, Donald F; Hugenberg, Kurt

    2009-02-01

    The current series of studies provides converging evidence that facial expressions of fear and anger may have co-evolved to mimic mature and babyish faces in order to enhance their communicative signal. In Studies 1 and 2, fearful and angry facial expressions were manipulated to have enhanced babyish features (larger eyes) or enhanced mature features (smaller eyes); in a speeded categorization task (Study 1) and a visual noise paradigm (Study 2), larger eyes facilitated the recognition of fearful facial expressions, while smaller eyes facilitated the recognition of angry facial expressions. Study 3 manipulated facial roundness, a stable structure that does not vary systematically with expressions, and found that congruency between maturity and expression (narrow face-anger; round face-fear) facilitated expression recognition accuracy. Results are discussed as representing a broad co-evolutionary relationship between facial maturity and fearful and angry facial expressions. (c) 2009 APA, all rights reserved

  3. Obligatory and facultative brain regions for voice-identity recognition.

    Science.gov (United States)

    Roswandowitz, Claudia; Kappes, Claudia; Obrig, Hellmuth; von Kriegstein, Katharina

    2018-01-01

    Recognizing the identity of others by their voice is an important skill for social interactions. To date, it remains controversial which parts of the brain are critical structures for this skill. Based on neuroimaging findings, standard models of person-identity recognition suggest that the right temporal lobe is the hub for voice-identity recognition. Neuropsychological case studies, however, reported selective deficits of voice-identity recognition in patients predominantly with right inferior parietal lobe lesions. Here, our aim was to work towards resolving the discrepancy between neuroimaging studies and neuropsychological case studies to find out which brain structures are critical for voice-identity recognition in humans. We performed a voxel-based lesion-behaviour mapping study in a cohort of patients (n = 58) with unilateral focal brain lesions. The study included a comprehensive behavioural test battery on voice-identity recognition of newly learned (voice-name, voice-face association learning) and familiar voices (famous voice recognition) as well as visual (face-identity recognition) and acoustic control tests (vocal-pitch and vocal-timbre discrimination). The study also comprised clinically established tests (neuropsychological assessment, audiometry) and high-resolution structural brain images. The three key findings were: (i) a strong association between voice-identity recognition performance and right posterior/mid temporal and right inferior parietal lobe lesions; (ii) a selective association between right posterior/mid temporal lobe lesions and voice-identity recognition performance when face-identity recognition performance was factored out; and (iii) an association of right inferior parietal lobe lesions with tasks requiring the association between voices and faces but not voices and names. The results imply that the right posterior/mid temporal lobe is an obligatory structure for voice-identity recognition, while the right inferior parietal lobe is a facultative structure that supports voice-identity recognition when voices must be associated with faces.

  4. Mapping structural covariance networks of facial emotion recognition in early psychosis: A pilot study.

    Science.gov (United States)

    Buchy, Lisa; Barbato, Mariapaola; Makowski, Carolina; Bray, Signe; MacMaster, Frank P; Deighton, Stephanie; Addington, Jean

    2017-11-01

    People with psychosis show deficits in recognizing facial emotions and disrupted activation in the underlying neural circuitry. We evaluated associations between facial emotion recognition and cortical thickness using a correlation-based approach to map structural covariance networks across the brain. Fifteen people with early psychosis provided magnetic resonance scans and completed the Penn Emotion Recognition and Differentiation tasks. Fifteen historical controls provided magnetic resonance scans. Cortical thickness was computed using CIVET and analyzed with linear models. Seed-based structural covariance analysis was done using the "mapping anatomical correlations across the cerebral cortex" methodology. To map structural covariance networks involved in facial emotion recognition, the right somatosensory cortex and bilateral fusiform face areas were selected as seeds. Statistics were run in SurfStat. Findings showed greater cortical covariance between the right fusiform face area seed and the right orbitofrontal cortex in controls than in early psychosis subjects. Facial emotion recognition scores were not significantly associated with thickness in any region. A negative effect of Penn Differentiation scores on cortical covariance was seen between the left fusiform face area seed and the right superior parietal lobule in early psychosis subjects. Results suggest that facial emotion recognition ability is related to covariance in a temporal-parietal network in early psychosis. Copyright © 2017 Elsevier B.V. All rights reserved.
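
    Seed-based structural covariance, as used here, correlates a seed region's cortical thickness with thickness at every other location across subjects. A toy sketch of that computation (the matrix sizes and data are illustrative, not the study's):

```python
import numpy as np

def seed_covariance_map(thickness, seed_idx):
    """thickness: (n_subjects, n_vertices) cortical-thickness matrix.
    Returns the across-subject correlation of the seed vertex with every
    vertex, i.e. a seed-based structural covariance map."""
    z = (thickness - thickness.mean(0)) / thickness.std(0)  # z-score per vertex
    return z.T @ z[:, seed_idx] / thickness.shape[0]

thickness = np.random.default_rng(3).standard_normal((15, 1000))  # 15 subjects
cov_map = seed_covariance_map(thickness, seed_idx=42)
print(cov_map[42])  # the seed correlates ~1.0 with itself
```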

  5. Holistic face processing can inhibit recognition of forensic facial composites.

    Science.gov (United States)

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format. (c) 2016 APA, all rights reserved.
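
    The misalignment manipulation (cf. Young et al., 1987) laterally offsets the top half of a face relative to the bottom half. A minimal Pillow sketch of that image operation; the file names are hypothetical:

```python
from PIL import Image

def misalign(face_path, offset=40):
    """Shift the top half of a face sideways relative to the bottom half,
    the classic manipulation used to disrupt holistic face processing."""
    face = Image.open(face_path)
    w, h = face.size
    top = face.crop((0, 0, w, h // 2))
    bottom = face.crop((0, h // 2, w, h))
    canvas = Image.new(face.mode, (w + offset, h), "white")
    canvas.paste(top, (offset, 0))     # top half shifted right
    canvas.paste(bottom, (0, h // 2))  # bottom half stays in place
    return canvas

misalign("composite.png").save("composite_misaligned.png")  # hypothetical files
```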

  6. Effects of facial emotion recognition remediation on visual scanning of novel face stimuli.

    Science.gov (United States)

    Marsh, Pamela J; Luckett, Gemma; Russell, Tamara; Coltheart, Max; Green, Melissa J

    2012-11-01

    Previous research shows that emotion recognition in schizophrenia can be improved with targeted remediation that draws attention to important facial features (eyes, nose, mouth). Moreover, the effects of training have been shown to last for up to one month after training. The aim of this study was to investigate whether improved emotion recognition of novel faces is associated with concomitant changes in visual scanning of these same novel facial expressions. Thirty-nine participants with schizophrenia received emotion recognition training using Ekman's Micro-Expression Training Tool (METT), with emotion recognition and visual scanpath (VSP) recordings to face stimuli collected simultaneously. Baseline ratings of interpersonal and cognitive functioning were also collected from all participants. Post-METT training, participants showed changes in foveal attention to the features of facial expressions of emotion not used in METT training, which were generally consistent with the information about important features from the METT. In particular, there were changes in how participants looked at the features of surprised, disgusted, fearful, happy, and neutral facial expressions, demonstrating that improved emotion recognition is paralleled by changes in the way participants with schizophrenia viewed novel facial expressions of emotion. However, there were overall decreases in foveal attention to sad and neutral faces that indicate more intensive instruction might be needed for these faces during training. Most importantly, the evidence shows that participant gender may affect training outcomes. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. The recognition of facial expressions: an investigation of the influence of age and cognition.

    Science.gov (United States)

    Horning, Sheena M; Cornwell, R Elisabeth; Davis, Hasker P

    2012-11-01

    The present study aimed to investigate changes in facial expression recognition across the lifespan, as well as to determine the influence of fluid intelligence, processing speed, and memory on this ability. Peak performance in the ability to identify facial affect was found to occur in middle-age, with the children and older adults performing the poorest. Specifically, older adults were impaired in their ability to identify fear, sadness, and happiness, but had preserved recognition of anger, disgust, and surprise. Analyses investigating the influence of cognition on emotion recognition demonstrated that cognitive abilities contribute to performance, especially for participants over age 45. However, the cognitive functions did not fully account for the older adults' impairments on expression recognition. Overall, the age-related deficits in facial expression recognition have implications for older adults' use of non-verbal communicative information.

  8. Automated facial recognition of manually generated clay facial approximations: Potential application in unidentified persons data repositories.

    Science.gov (United States)

    Parks, Connie L; Monson, Keith L

    2018-01-01

    This research examined how accurately 2D images (i.e., photographs) of 3D clay facial approximations were matched to corresponding photographs of the approximated individuals using an objective automated facial recognition system. Irrespective of search filter (i.e., blind, sex, or ancestry) or rank class (R1, R10, R25, and R50) employed, few operationally informative results were observed. In only a single instance of 48 potential match opportunities was a clay approximation matched to a corresponding life photograph within the top 50 images (R50) of a candidate list, even with relatively small gallery sizes created from the application of search filters (e.g., sex or ancestry search restrictions). Increasing the candidate lists to include the top 100 images (R100) resulted in only two additional instances of correct match. Although other untested variables (e.g., approximation method, 2D photographic process, and practitioner skill level) may have impacted the observed results, this study suggests that 2D images of manually generated clay approximations are not readily matched to life photos by automated facial recognition systems. Further investigation is necessary in order to identify the underlying cause(s), if any, of the poor recognition results observed in this study (e.g., potential inferior facial feature detection and extraction). Additional inquiry exploring prospective remedial measures (e.g., stronger feature differentiation) is also warranted, particularly given the prominent use of clay approximations in unidentified persons casework. Copyright © 2017. Published by Elsevier B.V.
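
    A rank class such as R50 asks whether the correct gallery identity appears among a probe's 50 highest-scoring candidates. A small sketch of that cumulative-match computation over a toy similarity matrix (illustrative data only):

```python
import numpy as np

def rank_k_hit_rate(similarity, true_idx, k):
    """similarity: (n_probes, n_gallery) scores from a recognition system.
    true_idx: gallery index of the correct identity for each probe.
    Returns the fraction of probes matched within the top-k candidates."""
    order = np.argsort(-similarity, axis=1)  # gallery sorted by score, per probe
    return np.mean([t in row[:k] for row, t in zip(order, true_idx)])

sim = np.random.default_rng(4).random((48, 500))  # 48 probes, 500-face gallery
truth = np.arange(48)                             # toy ground truth
for k in (1, 10, 25, 50):
    print(f"R{k}: {rank_k_hit_rate(sim, truth, k):.3f}")
```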

  9. Theory of Mind as a Mediator of Reasoning and Facial Emotion Recognition: Findings from 200 Healthy People

    Science.gov (United States)

    Lee, Seul Bee; Koo, Se Jun; Song, Yun Young; Lee, Mi Kyung; Jeong, Yu-Jin; Kwon, Catherine; Park, Kyoung Ri; Kang, Jee In; Lee, Eun

    2014-01-01

    Objective It was proposed that the ability to recognize facial emotions is closely related to complex neurocognitive processes and/or skills related to theory of mind (ToM). This study examines whether ToM skills mediate the relationship between higher neurocognitive functions, such as reasoning ability, and facial emotion recognition. Methods A total of 200 healthy subjects (101 males, 99 females) were recruited. Facial emotion recognition was measured through the use of 64 facial emotional stimuli that were selected from photographs from the Korean Facial Expressions of Emotion (KOFEE). Participants were requested to complete the Theory of Mind Picture Stories task and Standard Progressive Matrices (SPM). Results Multiple regression analysis showed that the SPM score (t=3.19, p=0.002, β=0.22) and the overall ToM score (t=2.56, p=0.011, β=0.18) were primarily associated with a total hit rate (%) of the emotion recognition task. Hierarchical regression analysis through a three-step mediation model showed that ToM may partially mediate the relationship between SPM and performance on facial emotion recognition. Conclusion These findings imply that higher neurocognitive functioning, inclusive of reasoning, may not only directly contribute towards facial emotion recognition but also influence ToM, which in turn, influences facial emotion recognition. These findings are particularly true for healthy young people. PMID:24843363

  10. Facial emotion recognition in adolescents with personality pathology

    NARCIS (Netherlands)

    Berenschot, Fleur; Van Aken, Marcel A G; Hessels, Christel; De Castro, Bram Orobio; Pijl, Ysbrand; Montagne, Barbara; Van Voorst, Guus

    2014-01-01

    It has been argued that a heightened emotional sensitivity interferes with the cognitive processing of facial emotion recognition and may explain the intensified emotional reactions to external emotional stimuli of adults with personality pathology, such as borderline personality disorder (BPD).

  11. Facial Emotion Recognition Impairments are Associated with Brain Volume Abnormalities in Individuals with HIV

    Science.gov (United States)

    Clark, Uraina S.; Walker, Keenan A.; Cohen, Ronald A.; Devlin, Kathryn N.; Folkers, Anna M.; Pina, Mathew M.; Tashima, Karen T.

    2015-01-01

    Impaired facial emotion recognition abilities in HIV+ patients are well documented, but little is known about the neural etiology of these difficulties. We examined the relation of facial emotion recognition abilities to regional brain volumes in 44 HIV-positive (HIV+) and 44 HIV-negative control (HC) adults. Volumes of structures implicated in HIV-associated neuropathology and emotion recognition were measured on MRI using an automated segmentation tool. Relative to HC, HIV+ patients demonstrated emotion recognition impairments for fearful expressions, reduced anterior cingulate cortex (ACC) volumes, and increased amygdala volumes. In the HIV+ group, fear recognition impairments correlated significantly with ACC, but not amygdala volumes. ACC reductions were also associated with lower nadir CD4 levels (i.e., greater HIV-disease severity). These findings extend our understanding of the neurobiological substrates underlying an essential social function, facial emotion recognition, in HIV+ individuals and implicate HIV-related ACC atrophy in the impairment of these abilities. PMID:25744868

  12. Emotional recognition from dynamic facial, vocal and musical expressions following traumatic brain injury.

    Science.gov (United States)

    Drapeau, Joanie; Gosselin, Nathalie; Peretz, Isabelle; McKerral, Michelle

    2017-01-01

    To assess emotion recognition from dynamic facial, vocal and musical expressions in sub-groups of adults with traumatic brain injuries (TBI) of different severities and identify possible common underlying mechanisms across domains. Forty-one adults participated in this study: 10 with moderate-severe TBI, nine with complicated mild TBI, 11 with uncomplicated mild TBI and 11 healthy controls, who were administered experimental (emotional recognition, valence-arousal) and control tasks (emotional and structural discrimination) for each domain. Recognition of fearful faces was significantly impaired in moderate-severe and in complicated mild TBI sub-groups, as compared to those with uncomplicated mild TBI and controls. Effect sizes were medium-large. Participants with lower GCS scores performed more poorly when recognizing fearful dynamic facial expressions. Emotion recognition from auditory domains was preserved following TBI, irrespective of severity. All groups performed equally on control tasks, indicating no perceptual disorders. Although emotional recognition from vocal and musical expressions was preserved, no correlation was found across auditory domains. This preliminary study may contribute to improving comprehension of emotional recognition following TBI. Future studies of larger samples could usefully include measures of functional impacts of recognition deficits for fearful facial expressions. These could help refine interventions for emotional recognition following a brain injury.

  13. Multimedia Content Development as a Facial Expression Datasets for Recognition of Human Emotions

    Science.gov (United States)

    Mamonto, N. E.; Maulana, H.; Liliana, D. Y.; Basaruddin, T.

    2018-02-01

    Datasets that have been developed before contain facial expressions from foreign people. The development of this multimedia content aims to answer the problems experienced by the research team and by other researchers who will conduct similar research. The method used in developing the multimedia content as a facial expression dataset for the recognition of human emotions is the Villamil-Molina version of the multimedia development method. The multimedia content was developed with 10 subjects (talents), each talent performing 3 shots and demonstrating 19 facial expressions in each shot. After the editing and rendering process, tests were carried out, with the conclusion that the multimedia content can be used as a facial expression dataset for the recognition of human emotions.

  14. Comparing the Recognition of Emotional Facial Expressions in Patients with Obsessive-Compulsive Disorder and Major Depressive Disorder

    Directory of Open Access Journals (Sweden)

    Abdollah Ghasempour

    2014-05-01

    Full Text Available Background: Recognition of emotional facial expressions is one of the psychological factors involved in obsessive-compulsive disorder (OCD) and major depressive disorder (MDD). The aim of the present study was to compare the ability to recognize emotional facial expressions in patients with obsessive-compulsive disorder and major depressive disorder. Materials and Methods: The present study is a cross-sectional, ex-post-facto investigation (causal-comparative method). Forty participants (20 patients with OCD, 20 patients with MDD) were selected through the available sampling method from the clients referred to the Tabriz Bozorgmehr clinic. Data were collected through a Structured Clinical Interview and the Recognition of Emotional Facial States test. The data were analyzed utilizing MANOVA. Results: The obtained results showed that there is no significant difference between the groups in the mean scores for recognizing the emotional states of surprise, sadness, happiness and fear, but the groups differed significantly in the mean scores for recognizing disgust and anger (p<0.05). Conclusion: Patients suffering from OCD and MDD show equal ability to recognize surprise, sadness, happiness and fear. However, the former are less competent in recognizing disgust and anger than the latter.
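
    The MANOVA reported here tests the emotion sub-scores jointly against the group factor. A minimal statsmodels sketch with simulated scores (the group means and spreads below are invented, not the study's data):

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "group":   ["OCD"] * 20 + ["MDD"] * 20,
    "disgust": np.r_[rng.normal(60, 8, 20), rng.normal(70, 8, 20)],
    "anger":   np.r_[rng.normal(58, 8, 20), rng.normal(68, 8, 20)],
    "sadness": np.r_[rng.normal(75, 8, 20), rng.normal(74, 8, 20)],
})

# One multivariate test across all emotion sub-scores at once
fit = MANOVA.from_formula("disgust + anger + sadness ~ group", data=df)
print(fit.mv_test())
```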

  15. Neurocognition and symptoms identify links between facial recognition and emotion processing in schizophrenia: meta-analytic findings.

    Science.gov (United States)

    Ventura, Joseph; Wood, Rachel C; Jimenez, Amy M; Hellemann, Gerhard S

    2013-12-01

    In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? A meta-analysis of 102 studies (combined n=4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r=.51). In addition, the relationship between FR and EP through voice prosody (r=.58) is as strong as the relationship between FR and EP based on facial stimuli (r=.53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality - facial stimuli and voice prosody. The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. © 2013 Elsevier B.V. All rights reserved.

  16. Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism?

    Science.gov (United States)

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J; Kilner, James

    2017-01-01

    Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups of participants. Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (emotion recognition task). Participants were video-recorded while posing facial expressions of six primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. For emotion recognition, the PD group scored lower than HC on the Ekman total score and on the sub-scores for happiness, fear, anger and sadness. On the facial emotion expressivity task, PD and HC differed significantly in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger. There was a significant positive correlation between facial emotion recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). PD patients showed difficulties in recognizing emotional facial expressions produced by others as well as in producing them.

  17. GENDER DIFFERENCES IN THE RECOGNITION OF FACIAL EXPRESSIONS OF EMOTION

    Directory of Open Access Journals (Sweden)

    CARLOS FELIPE PARDO-VÉLEZ

    2003-07-01

    Full Text Available Gender differences in the recognition of facial expressions of anger, happiness and sadness were researched in students 18-25 years of age. A reaction time procedure was used, and the percentage of correct answers during recognition was also measured. Though the working hypothesis expected gender differences in facial expression recognition, the results suggest that these differences are not significant at the 0.05 level. Statistical analysis shows a greater facility (at a non-significant level) for women to recognize happiness expressions, and for men to recognize anger expressions. The implications of these data are discussed, along with possible extensions of this investigation in terms of sample size and college major of the participants.

  18. Facial emotion recognition in Williams syndrome and Down syndrome: A matching and developmental study.

    Science.gov (United States)

    Martínez-Castilla, Pastora; Burt, Michael; Borgatti, Renato; Gagliardi, Chiara

    2015-01-01

    In this study both the matching and developmental trajectories approaches were used to clarify questions that remain open in the literature on facial emotion recognition in Williams syndrome (WS) and Down syndrome (DS). The matching approach showed that individuals with WS or DS exhibit neither proficiency for the expression of happiness nor specific impairments for negative emotions. Instead, they present the same pattern of emotion recognition as typically developing (TD) individuals. Thus, the better performance on the recognition of positive compared to negative emotions usually reported in WS and DS is not specific of these populations but seems to represent a typical pattern. Prior studies based on the matching approach suggested that the development of facial emotion recognition is delayed in WS and atypical in DS. Nevertheless, and even though performance levels were lower in DS than in WS, the developmental trajectories approach used in this study evidenced that not only individuals with DS but also those with WS present atypical development in facial emotion recognition. Unlike in the TD participants, where developmental changes were observed along with age, in the WS and DS groups, the development of facial emotion recognition was static. Both individuals with WS and those with DS reached an early maximum developmental level due to cognitive constraints.

  19. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    Science.gov (United States)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. The use of a biometrics technology system with facial expression characteristics makes it possible to recognize a person's mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features for the expression parameters happy, sad, neutral, angry, fear, and disgusted. Then a Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the classification process of facial expression. The MELS-SVM model, evaluated on our 185 images of different expressions from 10 persons, showed a high accuracy level of 99.998% using an RBF kernel.
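
    A PCA-plus-SVM pipeline of the general kind described can be sketched with scikit-learn. Two stand-ins are assumptions made for illustration: a plain RBF-kernel SVC replaces the paper's ensemble least-squares SVM, and the Olivetti faces replace the authors' 185 expression images:

```python
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

faces = fetch_olivetti_faces()  # stand-in face images with labels
X_tr, X_te, y_tr, y_te = train_test_split(
    faces.data, faces.target, test_size=0.25, random_state=0,
    stratify=faces.target)

# PCA "eigenface" features feeding an RBF-kernel multiclass SVM
model = make_pipeline(PCA(n_components=50, whiten=True, random_state=0),
                      SVC(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X_tr, y_tr)
print(f"test accuracy: {model.score(X_te, y_te):.3f}")
```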

  20. Emotional facial expressions reduce neural adaptation to face identity.

    Science.gov (United States)

    Gerlicher, Anna M V; van Loon, Anouk M; Scholte, H Steven; Lamme, Victor A F; van der Leij, Andries R

    2014-05-01

    In human social interactions, facial emotional expressions are a crucial source of information. Repeatedly presented information typically leads to an adaptation of neural responses. However, processing seems to be sustained for emotional facial expressions. We therefore tested whether sustained processing of emotional expressions, especially threat-related expressions, would attenuate neural adaptation. Neutral and emotional expressions (happy, mixed and fearful) of same and different identity were presented at 3 Hz. We used electroencephalography to record steady-state visual evoked potentials (ssVEP) and tested to what extent the ssVEP amplitude adapts to the same, as compared with different, face identities. We found adaptation to the identity of a neutral face. For emotional faces, however, adaptation was reduced, decreasing linearly with negative valence, with the least adaptation to fearful expressions. This short and straightforward method may prove to be a valuable new tool in the study of emotional processing.

  1. The first facial expression recognition and analysis challenge

    NARCIS (Netherlands)

    Valstar, Michel F.; Jiang, Bihan; Mehu, Marc; Pantic, Maja; Scherer, Klaus

    Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability has come some way; for instance, there exist a number of commonly

  2. Robust representation and recognition of facial emotions using extreme sparse learning.

    Science.gov (United States)

    Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang

    2015-07-01

    Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory-controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis vectors) and a nonlinear classification model. The proposed approach combines the discriminative power of the extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.

  3. Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal

    Directory of Open Access Journals (Sweden)

    Han Zhiyan

    2016-01-01

    Full Text Available In order to overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm that takes the speech signal and the facial expression signal as its research subjects. First, the speech signal features and facial expression signal features are fused, sample sets are obtained by sampling with replacement, and classifiers are trained with a BP neural network (BPNN). Second, the difference between two classifiers is measured by a double-error-difference selection strategy. Finally, the final recognition result is obtained by the majority voting rule. Experiments show that the method improves the accuracy of emotion recognition by fully exploiting the complementary advantages of decision-level and feature-level fusion, bringing the whole fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
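
    A rough illustration of this scheme, under stated assumptions: feature-level fusion by concatenation, bootstrap ("putting back") sampling, small scikit-learn MLPs as the BP networks, and a majority vote at the end. All dimensions, class counts, and features are invented for the sketch, and the double-error-difference selection step is omitted.

```python
# Feature-level fusion + bootstrapped MLP ensemble + majority voting.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
speech = rng.normal(size=(200, 20))     # hypothetical prosodic features
face = rng.normal(size=(200, 30))       # hypothetical expression features
X = np.hstack([speech, face])           # feature-level fusion by concatenation
y = rng.integers(0, 4, size=200)        # four emotion classes (placeholder)

classifiers = []
for seed in range(5):
    idx = rng.integers(0, len(X), size=len(X))     # bootstrap: sampling with replacement
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=seed)
    clf.fit(X[idx], y[idx])
    classifiers.append(clf)

votes = np.stack([clf.predict(X) for clf in classifiers])
majority = np.array([np.bincount(col).argmax() for col in votes.T])  # majority voting rule
print("ensemble agreement with labels:", (majority == y).mean())
```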

  4. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors

    Directory of Open Access Journals (Sweden)

    Martina Ardizzi

    2017-06-01

    Full Text Available One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, which normally allows the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus, unbalancing victims' recognition of negative emotions. Despite the plethora of studies on this topic, to date it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55, SE 0.30; years of schooling 8.07 years, SE 0.30; 15 males) performed a forced-choice chimeric facial expression recognition task. The chimeric facial expressions were obtained by pairing upper and lower half faces of two different negative emotions (selected from anger, fear and sadness) for a total of six different combinations. Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger: in this case, differently from controls, Ebola virus disease survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims' performance in the recognition of the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify

  5. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    Science.gov (United States)

    Tromans, James Matthew; Harris, Mitchell; Stringer, Simon Maitland

    2011-01-01

    Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully-designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.

  6. A computational model of the development of separate representations of facial identity and expression in the primate visual system.

    Directory of Open Access Journals (Sweden)

    James Matthew Tromans

    Full Text Available Experimental studies have provided evidence that the visual processing areas of the primate brain represent facial identity and facial expression within different subpopulations of neurons. For example, in non-human primates there is evidence that cells within the inferior temporal gyrus (TE) respond primarily to facial identity, while cells within the superior temporal sulcus (STS) respond to facial expression. More recently, it has been found that the orbitofrontal cortex (OFC) of non-human primates contains some cells that respond exclusively to changes in facial identity, while other cells respond exclusively to facial expression. How might the primate visual system develop physically separate representations of facial identity and expression given that the visual system is always exposed to simultaneous combinations of facial identity and expression during learning? In this paper, a biologically plausible neural network model, VisNet, of the ventral visual pathway is trained on a set of carefully-designed cartoon faces with different identities and expressions. The VisNet model architecture is composed of a hierarchical series of four Self-Organising Maps (SOMs), with associative learning in the feedforward synaptic connections between successive layers. During learning, the network develops separate clusters of cells that respond exclusively to either facial identity or facial expression. We interpret the performance of the network in terms of the learning properties of SOMs, which are able to exploit the statistical independence between facial identity and expression.
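
    The records above describe a hierarchy of Self-Organising Maps. The toy NumPy sketch below shows only the core SOM ingredients the model relies on, winner-take-all competition plus a neighbourhood-weighted update; VisNet's four-layer hierarchy and its associative feedforward learning are not reproduced, and all sizes and rates are illustrative.

```python
# A toy self-organizing map layer, written from scratch in NumPy.
import numpy as np

rng = np.random.default_rng(2)
grid = 10                                    # 10x10 sheet of output cells
dim = 64                                     # input dimensionality (placeholder)
W = rng.normal(size=(grid * grid, dim))      # one weight vector per cell
coords = np.array([(i, j) for i in range(grid) for j in range(grid)])

def som_step(x, W, lr=0.1, sigma=1.5):
    """Move weights toward input x, most strongly near the best-matching unit."""
    bmu = np.argmin(((W - x) ** 2).sum(axis=1))       # competition: find winner
    d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)    # grid distance to winner
    h = np.exp(-d2 / (2 * sigma ** 2))                # neighbourhood kernel
    return W + lr * h[:, None] * (x - W)              # cooperative weight update

for x in rng.normal(size=(500, dim)):                 # stand-in "face" inputs
    W = som_step(x, W)
```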

  7. Modulation of α power and functional connectivity during facial affect recognition.

    Science.gov (United States)

    Popov, Tzvetan; Miller, Gregory A; Rockstroh, Brigitte; Weisz, Nathan

    2013-04-03

    Research has linked oscillatory activity in the α frequency range, particularly in sensorimotor cortex, to the processing of social actions. Results further suggest involvement of sensorimotor α in the processing of facial expressions, including affect. The sensorimotor face area may be critical for the perception of emotional facial expression, but the role it plays is unclear. The present study sought to clarify how oscillatory brain activity contributes to or reflects the processing of facial affect during changes in facial expression. Neuromagnetic oscillatory brain activity was monitored while 30 volunteers viewed videos of human faces that changed their expression from neutral to fearful, neutral, or happy expressions. Induced changes in α power during the different morphs, source analysis, and graph-theoretic metrics served to identify the role of α power modulation and cross-regional coupling by means of phase synchrony during facial affect recognition. Changes from neutral to emotional faces were associated with a 10-15 Hz power increase localized in bilateral sensorimotor areas, together with an occipital power decrease, preceding reported emotional expression recognition. Graph-theoretic analysis revealed that, in the course of a trial, the balance between sensorimotor power increase and decrease was associated with decreased and increased transregional connectedness as measured by node degree. Results suggest that modulations in α power facilitate early registration, with sensorimotor cortex (including the sensorimotor face area) largely functionally decoupled and thereby protected from additional, disruptive input, and that the subsequent α power decrease, together with increased connectedness of sensorimotor areas, facilitates successful facial affect recognition.

  8. Facial expression recognition based on weber local descriptor and sparse representation

    Science.gov (United States)

    Ouyang, Yan

    2018-03-01

    Automatic facial expression recognition has been one of the research hotspots in computer vision for nearly ten years. During that decade, many state-of-the-art methods have been proposed that achieve very high accuracy on face images free of interference. Researchers have now begun to tackle the task of classifying facial expression images with corruptions and occlusions, and the Sparse Representation based Classification (SRC) framework has been widely used because it is robust to corruptions and occlusions. This paper therefore proposes a novel facial expression recognition method based on the Weber local descriptor (WLD) and sparse representation. The method has three parts: first, the face images are divided into many local patches; then the WLD histograms of each patch are extracted; finally, all the WLD histograms are concatenated into a single feature vector and combined with SRC to classify the facial expressions. Experimental results on the Cohn-Kanade database show that the proposed method is robust to occlusions and corruptions.
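
    A sketch of the descriptor side of this method, assuming a 4x4 patch grid and 16 histogram bins (both hypothetical): it computes the WLD differential-excitation channel per patch and concatenates the histograms. The orientation channel and the SRC classification stage are omitted.

```python
# Patch-wise WLD differential-excitation histograms, concatenated into one vector.
import numpy as np

def differential_excitation(img):
    """WLD excitation: arctan of summed neighbour differences over the centre pixel."""
    p = np.pad(img.astype(float), 1, mode="edge")
    centre = p[1:-1, 1:-1]
    total = np.zeros_like(centre)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di or dj:
                total += p[1 + di:p.shape[0] - 1 + di,
                           1 + dj:p.shape[1] - 1 + dj] - centre
    return np.arctan(total / (centre + 1e-6))

def wld_feature(img, grid=4, bins=16):
    """Histogram the excitation map patch by patch and concatenate."""
    xi = differential_excitation(img)
    h, w = xi.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            patch = xi[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            hist, _ = np.histogram(patch, bins=bins, range=(-np.pi / 2, np.pi / 2))
            feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

print(wld_feature(np.random.default_rng(3).integers(0, 256, (64, 64))).shape)
```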

  9. The familial basis of facial emotion recognition deficits in adolescents with conduct disorder and their unaffected relatives.

    Science.gov (United States)

    Sully, K; Sonuga-Barke, E J S; Fairchild, G

    2015-07-01

    There is accumulating evidence of impairments in facial emotion recognition in adolescents with conduct disorder (CD). However, the majority of studies in this area have only been able to demonstrate an association, rather than a causal link, between emotion recognition deficits and CD. To move closer towards understanding the causal pathways linking emotion recognition problems with CD, we studied emotion recognition in the unaffected first-degree relatives of CD probands, as well as those with a diagnosis of CD. Using a family-based design, we investigated facial emotion recognition in probands with CD (n = 43), their unaffected relatives (n = 21), and healthy controls (n = 38). We used the Emotion Hexagon task, an alternative forced-choice task using morphed facial expressions depicting the six primary emotions, to assess facial emotion recognition accuracy. Relative to controls, the CD group showed significantly impaired recognition of anger, fear, happiness, sadness and surprise. These findings suggest that emotion recognition deficits are present in adolescents who are at increased familial risk for developing antisocial behaviour, as well as those who have already developed CD. Consequently, impaired emotion recognition appears to be a viable familial risk marker or candidate endophenotype for CD.

  10. Shared Gaussian Process Latent Variable Model for Multi-view Facial Expression Recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    Facial-expression data often appear in multiple views either due to head-movements or the camera position. Existing methods for multi-view facial expression recognition perform classification of the target expressions either by using classifiers learned separately for each view or by using a single

  11. Individual differences in the recognition of facial expressions: an event-related potentials study.

    Directory of Open Access Journals (Sweden)

    Yoshiyuki Tamamiya

    Full Text Available Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of 3 facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, where ERP components were set as predictor variables, assessed hits and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies were predictive of accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction time; the P2 latencies predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components in visual processing.

  12. Individual differences in the recognition of facial expressions: an event-related potentials study.

    Science.gov (United States)

    Tamamiya, Yoshiyuki; Hiraki, Kazuo

    2013-01-01

    Previous studies have shown that early posterior components of event-related potentials (ERPs) are modulated by facial expressions. The goal of the current study was to investigate individual differences in the recognition of facial expressions by examining the relationship between ERP components and the discrimination of facial expressions. Pictures of 3 facial expressions (angry, happy, and neutral) were presented to 36 young adults during ERP recording. Participants were asked to respond with a button press as soon as they recognized the expression depicted. A multiple regression analysis, where ERP components were set as predictor variables, assessed hits and reaction times in response to the facial expressions as dependent variables. The N170 amplitudes significantly predicted accuracy for angry and happy expressions, and the N170 latencies were predictive of accuracy for neutral expressions. The P2 amplitudes significantly predicted reaction time; the P2 latencies predicted reaction times only for neutral faces. These results suggest that individual differences in the recognition of facial expressions emerge from early components in visual processing.

  13. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    Science.gov (United States)

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Facial Emotion Recognition and Expression in Parkinson’s Disease: An Emotional Mirror Mechanism?

    Science.gov (United States)

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J.; Kilner, James

    2017-01-01

    Background and aim: Parkinson's disease (PD) patients show impaired facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups. Methods: Twenty non-demented, non-depressed PD patients and twenty healthy controls (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. Results: For emotion recognition, PD patients scored significantly lower than HC on the Ekman total score and on the happiness, fear, anger and sadness sub-scores. On the facial emotion expressivity task, PD and HC differed significantly in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger. There was a significant positive correlation between facial emotion recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). Conclusions: PD

  15. Real Time Facial Expression Recognition Using Webcam and SDK Affectiva

    Directory of Open Access Journals (Sweden)

    Martin Magdin

    2018-06-01

    Full Text Available Facial expression is an essential part of communication. For this reason, the evaluation of human emotions by computer is a very interesting topic that has gained increasing attention in recent years, mainly owing to the possible applications of facial expression recognition in fields such as HCI, video games, virtual reality, and customer satisfaction analysis. Emotion determination (recognition) is usually performed in three basic phases: face detection, facial feature extraction, and, as the final stage, expression classification. One most often encounters Ekman's classification of 6 emotional expressions (or 7, including the neutral expression), as well as other classifications such as the Russell circumplex model, which contains up to 24 emotions, or Plutchik's Wheel of Emotions. The methods used in the three phases of the recognition process have not only improved over the last 60 years; new methods and algorithms have also emerged, such as the Viola-Jones detector, offering greater accuracy and lower computational demands. Consequently, various solutions are currently available in the form of Software Development Kits (SDKs). In this publication we present the proposition and creation of our system for real-time emotion classification. Our intention was to create a system that would use all three phases of the recognition process and work quickly and stably in real time. That is why we decided to take advantage of the existing Affectiva SDK. Using a classic webcam, we detect facial landmarks in the image automatically with the Software Development Kit (SDK) from Affectiva. A geometric feature-based approach is used for feature extraction: the distance between landmarks is used as a feature, and the brute-force method is used to select an optimal set of features. The proposed system uses a neural network algorithm for classification and recognizes 6 (respectively 7) facial expressions
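
    An illustrative reduction of the geometric-feature stage, not the authors' system: pairwise distances between landmark points form the feature vector and a small scikit-learn MLP does the classification. The landmark coordinates below are randomly generated stand-ins for SDK output, and the brute-force feature selection is skipped.

```python
# Landmark pairwise distances as geometric features, classified by an MLP.
import numpy as np
from itertools import combinations
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(4)

def landmark_distances(points):
    """All pairwise Euclidean distances between (x, y) landmark points."""
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

# 300 hypothetical samples of 34 landmarks each, with 7 expression labels
X = np.stack([landmark_distances(rng.normal(size=(34, 2))) for _ in range(300)])
y = rng.integers(0, 7, size=300)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```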

  16. An Age-Related Dissociation of Short-Term Memory for Facial Identity and Facial Emotional Expression.

    Science.gov (United States)

    Hartley, Alan A; Ravich, Zoe; Stringer, Sarah; Wiley, Katherine

    2015-09-01

    Memory for both facial emotional expression and facial identity was explored in younger and older adults in 3 experiments using a delayed match-to-sample procedure. Memory sets of 1, 2, or 3 faces were presented, which were followed by a probe after a 3-s retention interval. There was very little difference between younger and older adults in memory for emotional expressions, but memory for identity was substantially impaired in the older adults. Possible explanations for spared memory for emotional expressions include socioemotional selectivity theory as well as the existence of overlapping yet distinct brain networks for processing of different emotions. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  17. Media identities and media-influenced identifications: Visibility and identity recognition in the media

    Directory of Open Access Journals (Sweden)

    Víctor Fco. Sampedro Blanco

    2004-10-01

    Full Text Available The media establish, in large part, the patterns of visibility and public recognition of collective identities. We define media identities as those that are the object of production and diffusion by the media. From this discourse, communities and individuals elaborate media-influenced identifications, that is, processes of recognition or banishment, (re)articulating the identity markers that the media offer with other cognitive and emotional sources. The generation and appropriation of identities are subject to a media hierarchisation that influences their normalisation or marginalisation. The identities presented by the media and assumed by the audience as part of the official, hegemonic discourse are normalised, whereas the identities and identifications formulated in popular and minority terms are marginalised. After presenting this conceptual and analytical framework, this study attempts to outline the logics that condition the presentation, on the one hand, and the public recognition, on the other, of contemporary identities.

  18. A model based method for automatic facial expression recognition

    NARCIS (Netherlands)

    Kuilenburg, H. van; Wiering, M.A.; Uyl, M. den

    2006-01-01

    Automatic facial expression recognition is a research topic with interesting applications in the field of human-computer interaction, psychology and product marketing. The classification accuracy for an automatic system which uses static images as input is however largely limited by the image

  19. Gender identity rather than sexual orientation impacts on facial preferences.

    Science.gov (United States)

    Ciocca, Giacomo; Limoncin, Erika; Cellerino, Alessandro; Fisher, Alessandra D; Gravina, Giovanni Luca; Carosa, Eleonora; Mollaioli, Daniele; Valenzano, Dario R; Mennucci, Andrea; Bandini, Elisa; Di Stasi, Savino M; Maggi, Mario; Lenzi, Andrea; Jannini, Emmanuele A

    2014-10-01

    Differences in facial preferences between heterosexual men and women are well documented. It is still a matter of debate, however, how variations in sexual identity/sexual orientation may modify facial preferences. This study aims to investigate the facial preferences of male-to-female (MtF) individuals with gender dysphoria (GD) and the influence of short-term/long-term relationships on facial preference, in comparison with healthy subjects. Eighteen untreated MtF subjects, 30 heterosexual males, 64 heterosexual females, and 42 homosexual males, recruited from university students/staff, at gay events, and in gender clinics, were shown a composite male or female face. The sexual dimorphism of these pictures was stressed or reduced in a continuous fashion through an open-source morphing program (gtkmorph, based on the X-Morph algorithm) with a sequence of 21 pictures of the same face warped from a feminized to a masculinized shape. MtF GD subjects and heterosexual females showed the same pattern of preferences: a clear preference for less dimorphic (more feminized) faces for both short- and long-term relationships. Conversely, both heterosexual and homosexual men selected significantly more dimorphic faces, showing a preference for hyperfeminized and hypermasculinized faces, respectively. These data show that the facial preferences of MtF GD individuals mirror those of the sex congruent with their gender identity. Conversely, heterosexual males trace the facial preferences of homosexual men, indicating that changes in sexual orientation do not substantially affect preference for the most attractive faces. © 2014 International Society for Sexual Medicine.

  20. In-the-wild facial expression recognition in extreme poses

    Science.gov (United States)

    Yang, Fei; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    Facial expression recognition is an active research problem in computer vision. In recent years, research has moved from the lab environment to in-the-wild circumstances, which are challenging, especially under extreme poses. Current expression detection systems usually try to factor out pose effects in pursuit of generally applicable models. In this work, we take the opposite approach: we consider head poses explicitly and detect expressions within specific head poses. Our work has two parts: detecting the head pose and grouping it into one of several pre-defined head pose classes, and then recognizing the facial expression within each pose class. Our experiments show that recognition results with pose-class grouping are much better than those of direct recognition without considering poses. We combine hand-crafted features (SIFT, LBP and geometric features) with deep learning features as the representation of the expressions; the hand-crafted features are added into the deep learning framework along with the high-level deep learning features. As a comparison, we implement both an SVM and a random forest as the prediction models. To train and test our methodology, we labeled a face dataset with the 6 basic expressions.
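
    A minimal sketch of the pose-grouped idea, assuming pose classes are already available: one expression classifier is trained per pose group, with LBP histograms (via scikit-image) standing in for the paper's combined hand-crafted and deep features.

```python
# One expression classifier per head-pose class, on LBP histogram features.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(5)

def lbp_hist(img, P=8, R=1.0):
    """Uniform LBP histogram of a grayscale image (P + 2 possible codes)."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / hist.sum()

images = rng.integers(0, 256, size=(120, 48, 48))   # placeholder face crops
poses = rng.integers(0, 3, size=120)                # pre-computed pose classes (assumed)
labels = rng.integers(0, 6, size=120)               # six basic expressions

models = {}
for pose in np.unique(poses):                       # train one model per pose group
    mask = poses == pose
    X = np.stack([lbp_hist(im) for im in images[mask]])
    models[pose] = RandomForestClassifier(random_state=0).fit(X, labels[mask])
```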

  1. Multi-output Laplacian Dynamic Ordinal Regression for Facial Expression Recognition and Intensity Estimation

    NARCIS (Netherlands)

    Rudovic, Ognjen; Pavlovic, Vladimir; Pantic, Maja

    2012-01-01

    Automated facial expression recognition has received increased attention over the past two decades. Existing works in the field usually do not encode either the temporal evolution or the intensity of the observed facial displays. They also fail to jointly model multidimensional (multi-class)

  2. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    Science.gov (United States)

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Batch metadata assignment to archival photograph collections using facial recognition software

    Directory of Open Access Journals (Sweden)

    Kyle Banerjee

    2013-07-01

    Full Text Available Useful metadata is essential to giving individual images meaning and value within the context of a greater collection, as well as making them more discoverable. However, often little information is available about the photos themselves, so adding consistent metadata to large collections of digital and digitized photographs is a time-consuming process requiring highly experienced staff. By using facial recognition software, staff can identify individuals more quickly and reliably. Knowledge of the individuals in photos helps staff determine when and where photos were taken and also improves understanding of the subject matter. This article demonstrates simple techniques for using facial recognition software and command-line tools to assign, modify, and read metadata for large archival photograph collections.
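
    One hypothetical way to script such a workflow, using the open-source face_recognition library and the exiftool command-line program; the article's exact tooling is not reproduced, and all file names below are placeholders.

```python
# Match archive photos against a known reference face, then write the name
# into IPTC keywords with exiftool.
import subprocess
import face_recognition

# Assumes the reference photo contains exactly one detectable face.
known = face_recognition.load_image_file("refs/jane_doe.jpg")
known_encoding = face_recognition.face_encodings(known)[0]

for path in ["archive/box12_001.jpg", "archive/box12_002.jpg"]:
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        if face_recognition.compare_faces([known_encoding], encoding)[0]:
            # exiftool appends the matched name as an IPTC keyword in place
            subprocess.run(["exiftool", "-overwrite_original",
                            "-Keywords+=Jane Doe", path], check=True)
            break
```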

  4. Novel methods for real-time 3D facial recognition

    OpenAIRE

    Rodrigues, Marcos; Robinson, Alan

    2010-01-01

    In this paper we discuss our approach to real-time 3D face recognition. We argue the need for real time operation in a realistic scenario and highlight the required pre- and post-processing operations for effective 3D facial recognition. We focus attention to some operations including face and eye detection, and fast post-processing operations such as hole filling, mesh smoothing and noise removal. We consider strategies for hole filling such as bilinear and polynomial interpolation and Lapla...

  5. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Kavallakis, George; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2017-01-01

    This paper presents a scheme for creating an emotion index of cover song music video clips by recognizing and classifying facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms which are employed for expression recognition, along with the use of a neural network system using the features extracted by the SIFT algorithm. We also support the need for this fusion of different expression recognition algorithms, because of the way that emotions are linked to facial expressions in music video clips.

  6. Spatiotemporal Analysis of RGB-D-T Facial Images for Multimodal Pain Level Recognition

    DEFF Research Database (Denmark)

    Irani, Ramin; Nasrollahi, Kamal; Oliu Simon, Marc

    2015-01-01

    This work analyzes RGB, depth, and thermal (RGB-D-T) facial images for pain detection and pain intensity level recognition. For this purpose, we extract energies released by facial pixels using a spatiotemporal filter. Experiments on a group of 12 elderly people applying the multimodal approach show that the proposed method successfully detects pain...

  7. Environmental Identity Development through Social Interactions, Action, and Recognition

    Science.gov (United States)

    Stapleton, Sarah Riggs

    2015-01-01

    This article uses sociocultural identity theory to explore how practice, action, and recognition can facilitate environmental identity development. Recognition, a construct not previously explored in environmental identity literature, is particularly examined. The study is based on a group of diverse teens who traveled to South Asia to participate…

  8. Recognition of facial and musical emotions in Parkinson's disease.

    Science.gov (United States)

    Saenz, A; Doé de Maindreville, A; Henry, A; de Labbey, S; Bakchine, S; Ehrlé, N

    2013-03-01

    Patients with amygdala lesions were found to be impaired in recognizing the fear emotion both from faces and from music. In patients with Parkinson's disease (PD), impairment in the recognition of emotions from facial expressions has been reported for disgust, fear, sadness and anger, but no studies had yet investigated this population for the recognition of emotions from both face and music. The ability to recognize basic universal emotions (fear, happiness and sadness) from both face and music was investigated in 24 medicated patients with PD and 24 healthy controls. The patient group was tested for language (verbal fluency tasks), memory (digit and spatial span), executive functions (Similarities and Picture Completion subtests of the WAIS III, Brixton and Stroop tests), and visual attention (Bells test), and completed self-assessment tests for anxiety and depression. Results showed that the PD group was significantly impaired in the recognition of both fear and sadness from facial expressions, whereas their performance in the recognition of emotions from musical excerpts did not differ from that of the control group. The scores for fear and sadness recognition from faces were neither correlated with scores on tests of executive and cognitive functions, nor with scores on the self-assessment scales. We attributed the observed dissociation to the modality (visual vs. auditory) of presentation and to the ecological value of the musical stimuli that we used. We discuss the relevance of our findings for the care of patients with PD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.

  9. Effects of exposure to facial expression variation in face learning and recognition.

    Science.gov (United States)

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  10. The effect of comorbid depression on facial and prosody emotion recognition in first-episode schizophrenia spectrum.

    Science.gov (United States)

    Herniman, Sarah E; Allott, Kelly A; Killackey, Eóin; Hester, Robert; Cotton, Sue M

    2017-01-15

    Comorbid depression is common in first-episode schizophrenia spectrum (FES) disorders. Both depression and FES are associated with significant deficits in facial and prosody emotion recognition performance. However, it remains unclear whether people with FES and comorbid depression, compared to those without comorbid depression, have overall poorer emotion recognition, or instead, a different pattern of emotion recognition deficits. The aim of this study was to compare facial and prosody emotion recognition performance between those with and without comorbid depression in FES. This study involved secondary analysis of baseline data from a randomized controlled trial of vocational intervention for young people with first-episode psychosis (N=82; age range: 15-25 years). Those with comorbid depression (n=24) had more accurate recognition of sadness in faces compared to those without comorbid depression. Severity of depressive symptoms was also associated with more accurate recognition of sadness in faces. Such results did not recur for prosody emotion recognition. In addition to the cross-sectional design, limitations of this study include the absence of facial and prosodic recognition of neutral emotions. Findings indicate a mood congruent negative bias in facial emotion recognition in those with comorbid depression and FES, and provide support for cognitive theories of depression that emphasise the role of such biases in the development and maintenance of depression. Longitudinal research is needed to determine whether mood-congruent negative biases are implicated in the development and maintenance of depression in FES, or whether such biases are simply markers of depressed state. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality.

    Science.gov (United States)

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque; Javaid, Ahmad Y

    2018-02-01

    The extensive range of possible applications has made emotion recognition an unavoidable and challenging problem in computer science. Non-verbal cues such as gestures, body movement, and facial expressions convey feelings and feedback to the user. This discipline of Human-Computer Interaction relies on algorithmic robustness and sensor sensitivity to improve recognition. Sensors play a significant role in accurate detection by providing very high-quality input, hence increasing the efficiency and reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases that serve as data sets for algorithms detecting emotions from facial expressions. Later, the mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition, and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam.

  12. When moral identity symbolization motivates prosocial behavior: the role of recognition and moral identity internalization.

    Science.gov (United States)

    Winterich, Karen Page; Aquino, Karl; Mittal, Vikas; Swartz, Richard

    2013-09-01

    This article examines the role of moral identity symbolization in motivating prosocial behaviors. We propose a 3-way interaction of moral identity symbolization, internalization, and recognition to predict prosocial behavior. When moral identity internalization is low, we hypothesize that high moral identity symbolization motivates recognized prosocial behavior due to the opportunity to present one's moral characteristics to others. In contrast, when moral identity internalization is high, prosocial behavior is motivated irrespective of the level of symbolization and recognition. Two studies provide support for this pattern examining volunteering of time. Our results provide a framework for predicting prosocial behavior by combining the 2 dimensions of moral identity with the situational factor of recognition. PsycINFO Database Record (c) 2013 APA, all rights reserved

  13. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    Science.gov (United States)

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  14. Recognition of social identity in ants

    DEFF Research Database (Denmark)

    Bos, Nick; d'Ettorre, Patrizia

    2012-01-01

    Recognizing the identity of others, from the individual to the group level, is a hallmark of society. Ants, and other social insects, have evolved advanced societies characterized by efficient social recognition systems. Colony identity is mediated by colony specific signature mixtures, a blend...

  15. Facial Asymmetry-Based Age Group Estimation: Role in Recognizing Age-Separated Face Images.

    Science.gov (United States)

    Sajid, Muhammad; Taj, Imtiaz Ahmad; Bajwa, Usama Ijaz; Ratyal, Naeem Iqbal

    2018-04-23

    Face recognition aims to establish the identity of a person based on facial characteristics. On the other hand, age group estimation is the automatic calculation of an individual's age range based on facial features. Recognizing age-separated face images is still a challenging research problem due to complex aging processes involving different types of facial tissues, skin, fat, muscles, and bones. Certain holistic and local facial features are used to recognize age-separated face images. However, most of the existing methods recognize face images without incorporating the knowledge learned from age group estimation. In this paper, we propose an age-assisted face recognition approach to handle aging variations. Inspired by the observation that facial asymmetry is an age-dependent intrinsic facial feature, we first use asymmetric facial dimensions to estimate the age group of a given face image. Deeply learned asymmetric facial features are then extracted for face recognition using a deep convolutional neural network (dCNN). Finally, we integrate the knowledge learned from the age group estimation into the face recognition algorithm using the same dCNN. This integration results in a significant improvement in the overall performance compared to using the face recognition algorithm alone. The experimental results on two large facial aging datasets, the MORPH and FERET sets, show that the proposed age group estimation based on the face recognition approach yields superior performance compared to some existing state-of-the-art methods. © 2018 American Academy of Forensic Sciences.
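
    A toy version of the asymmetry idea, under the assumption of a 68-point landmark layout: left landmarks are compared with their mirrored right-side partners about an estimated midline, giving a small asymmetry feature vector. The dCNN stages of the actual method are not reproduced.

```python
# Left/right landmark asymmetry features about an estimated facial midline.
import numpy as np

def asymmetry_features(landmarks, pairs):
    """Distance between each left landmark and its mirrored right partner."""
    midline = landmarks[:, 0].mean()                 # crude vertical symmetry axis
    feats = []
    for left, right in pairs:
        mirrored = landmarks[right].copy()
        mirrored[0] = 2 * midline - mirrored[0]      # reflect x about the midline
        feats.append(np.linalg.norm(landmarks[left] - mirrored))
    return np.array(feats)

rng = np.random.default_rng(6)
landmarks = rng.normal(size=(68, 2))                 # hypothetical 68-point layout
pairs = [(36, 45), (39, 42), (48, 54), (31, 35)]     # assumed eye/mouth/nose pairs
print(asymmetry_features(landmarks, pairs))
```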

  16. Predictive codes of familiarity and context during the perceptual learning of facial identities

    Science.gov (United States)

    Apps, Matthew A. J.; Tsakiris, Manos

    2013-11-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
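
    A minimal delta-rule reading of such a model, with invented parameters: facial familiarity is updated by a prediction error on each exposure, and a recognition decision weighs facial and contextual familiarity together. This is a sketch of the principle, not the authors' fitted model.

```python
# Prediction-error update of facial familiarity, plus a logistic recognition readout.
import numpy as np

alpha = 0.3            # learning rate on the prediction error (assumed)
F = 0.0                # facial familiarity, unfamiliar at the start
C = 0.8                # contextual familiarity, held fixed here (assumed)

for trial in range(10):
    prediction_error = 1.0 - F          # the face is present, so the target is 1
    F += alpha * prediction_error       # familiarity update on each exposure
    p_recognize = 1 / (1 + np.exp(-(F + C - 1.0)))   # combines both familiarities
    print(f"trial {trial}: F={F:.2f}, p(recognize)={p_recognize:.2f}")
```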

  17. Facial emotion recognition in Parkinson's disease: A review and new hypotheses

    Science.gov (United States)

    Vérin, Marc; Sauleau, Paul; Grandjean, Didier

    2018-01-01

    Abstract Parkinson's disease is a neurodegenerative disorder classically characterized by motor symptoms. Among them, hypomimia affects facial expressiveness and social communication and has a highly negative impact on patients' and relatives' quality of life. Patients also frequently experience nonmotor symptoms, including emotional‐processing impairments, leading to difficulty in recognizing emotions from faces. Aside from its theoretical importance, understanding the disruption of facial emotion recognition in PD is crucial for improving quality of life for both patients and caregivers, as this impairment is associated with heightened interpersonal difficulties. However, studies assessing abilities in recognizing facial emotions in PD still report contradictory outcomes. The origins of this inconsistency are unclear, and several questions (regarding the role of dopamine replacement therapy or the possible consequences of hypomimia) remain unanswered. We therefore undertook a fresh review of relevant articles focusing on facial emotion recognition in PD to deepen current understanding of this nonmotor feature, exploring multiple significant potential confounding factors, both clinical and methodological, and discussing probable pathophysiological mechanisms. This led us to examine recent proposals about the role of basal ganglia‐based circuits in emotion and to consider the involvement of facial mimicry in this deficit from the perspective of embodied simulation theory. We believe our findings will inform clinical practice and increase fundamental knowledge, particularly in relation to potential embodied emotion impairment in PD. © 2018 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:29473661

  18. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    Full Text Available This paper presents a novel and effective method for facial expression recognition covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, i.e., a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as the learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA): it solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
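
    A compact stand-in for this feature/classifier pairing, with assumptions flagged in the comments: Gabor magnitude statistics (via scikit-image) serve as features, and scikit-learn's QuadraticDiscriminantAnalysis with covariance shrinkage (reg_param) approximates regularized discriminant analysis. The boosting, entropy-based feature selection, and PSO stages are omitted.

```python
# Gabor magnitude features classified by a regularized discriminant model.
import numpy as np
from skimage.filters import gabor
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(7)
images = rng.normal(size=(90, 32, 32))        # placeholder face images
labels = rng.integers(0, 7, size=90)          # seven expression classes

def gabor_features(img, freqs=(0.1, 0.2, 0.3)):
    """Mean and std of Gabor magnitude at several frequencies (illustrative)."""
    feats = []
    for f in freqs:
        real, imag = gabor(img, frequency=f)
        mag = np.hypot(real, imag)
        feats += [mag.mean(), mag.std()]
    return np.array(feats)

X = np.stack([gabor_features(im) for im in images])
clf = QuadraticDiscriminantAnalysis(reg_param=0.5)   # shrinkage-regularized covariance
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```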

  19. Social perception and aging: The relationship between aging and the perception of subtle changes in facial happiness and identity.

    Science.gov (United States)

    Yang, Tao; Penton, Tegan; Köybaşı, Şerife Leman; Banissy, Michael J

    2017-09-01

    Previous findings suggest that older adults show impairments in the social perception of faces, including the perception of emotion and facial identity. The majority of this work has tended to examine performance on tasks involving young adult faces and prototypical emotions. While useful, this can influence performance differences between groups due to perceptual biases and limitations on task performance. Here we sought to examine how typical aging is associated with the perception of subtle changes in facial happiness and facial identity in older adult faces. We developed novel tasks that permitted the ability to assess facial happiness, facial identity, and non-social perception (object perception) across similar task parameters. We observe that aging is linked with declines in the ability to make fine-grained judgements in the perception of facial happiness and facial identity (from older adult faces), but not for non-social (object) perception. This pattern of results is discussed in relation to mechanisms that may contribute to declines in facial perceptual processing in older adulthood. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Are there differential deficits in facial emotion recognition between paranoid and non-paranoid schizophrenia? A signal detection analysis.

    Science.gov (United States)

    Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long

    2013-10-30

    This study assessed facial emotion recognition abilities in subjects with paranoid and non-paranoid schizophrenia using signal detection theory. We explored the differential deficits in facial emotion recognition in 44 paranoid patients with schizophrenia (PS) and 30 non-paranoid patients with schizophrenia (NPS), compared to 80 healthy controls. We used morphed faces with different intensities of emotion and computed the sensitivity index (d') for each emotion. The results showed that performance differed between the schizophrenia and healthy control groups in the recognition of both negative and positive affects. The PS group performed worse than the healthy control group but better than the NPS group in overall performance. Performance differed between the NPS and healthy control groups in the recognition of all basic emotions and neutral faces; between the PS and healthy control groups in the recognition of angry faces; and between the PS and NPS groups in the recognition of happiness, anger, sadness, disgust, and neutral affects. The facial emotion recognition impairment in schizophrenia may reflect a generalized deficit rather than a negative-emotion-specific deficit. The PS group performed worse than the control group, but better than the NPS group, in facial expression recognition, with differential deficits between PS and NPS patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
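
    For reference, the sensitivity index in question is d' = z(hit rate) - z(false-alarm rate), computed with the standard normal quantile function. The 0.5 count correction below is one common convention for avoiding infinite z-scores, and the counts themselves are made up.

```python
# Signal-detection sensitivity for one emotion category.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' from raw counts, with a log-linear correction on the rates."""
    h = (hits + 0.5) / (hits + misses + 1)                             # corrected hit rate
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(h) - norm.ppf(fa)

print(d_prime(hits=34, misses=6, false_alarms=9, correct_rejections=31))
```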

  1. Super-recognition in development: A case study of an adolescent with extraordinary face recognition skills.

    Science.gov (United States)

    Bennetts, Rachel J; Mole, Joseph; Bate, Sarah

    2017-09-01

    Face recognition abilities vary widely. While face recognition deficits have been reported in children, it is unclear whether superior face recognition skills can be encountered during development. This paper presents O.B., a 14-year-old female with extraordinary face recognition skills: a "super-recognizer" (SR). O.B. demonstrated exceptional face-processing skills across multiple tasks, with a level of performance that is comparable to adult SRs. Her superior abilities appear to be specific to face identity: She showed an exaggerated face inversion effect and her superior abilities did not extend to object processing or non-identity aspects of face recognition. Finally, an eye-movement task demonstrated that O.B. spent more time than controls examining the nose - a pattern previously reported in adult SRs. O.B. is therefore particularly skilled at extracting and using identity-specific facial cues, indicating that face and object recognition are dissociable during development, and that super recognition can be detected in adolescence.

  2. Development of Facial Emotion Recognition in Childhood: Age-related Differences in a Shortened Version of the Facial Expression of Emotions - Stimuli and Tests. Data from an ongoing study.

    NARCIS (Netherlands)

    Coenen, Maraike; Aarnoudse, Ceciel; Braams, O.; Veenstra, Wencke S.

    2014-01-01

    OBJECTIVE: Facial emotion recognition is a crucial aspect of social cognition, and deficits have been shown to be related to psychiatric disorders in adults and children. However, the development of facial emotion recognition is less clear (Herba & Phillips, 2004), and an appropriate instrument to

  3. Recognition of Facial Expressions of Different Emotional Intensities in Patients with Frontotemporal Lobar Degeneration

    Directory of Open Access Journals (Sweden)

    Roy P. C. Kessels

    2007-01-01

    Full Text Available Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD). Also, FTLD patients show impairments in emotion processing. Specifically, the perception of negative emotional facial expressions is affected. Generally, however, negative emotional expressions are regarded as more difficult to recognize than positive ones, which thus may have been a confounding factor in previous studies. Also, ceiling effects are often present on emotion recognition tasks using full-blown emotional facial expressions. In the present study with FTLD patients, we examined the perception of sadness, anger, fear, happiness, surprise and disgust at different emotional intensities on morphed facial expressions to take task difficulty into account. Results showed that our FTLD patients were specifically impaired at the recognition of the emotion anger. Also, the patients performed worse than the controls on recognition of surprise, but performed at control levels on disgust, happiness, sadness and fear. These findings corroborate and extend previous results showing deficits in emotion perception in FTLD.

  4. Facial Analysis: Looking at Biometric Recognition and Genome-Wide Association

    DEFF Research Database (Denmark)

    Fagertun, Jens

    The goal of this Ph.D. project is to present selected challenges regarding facial analysis within the fields of Human Biometrics and Human Genetics. In the course of the Ph.D. nine papers have been produced, eight of which have been included in this thesis. Three of the papers focus on face...... and gender recognition, where in the gender recognition papers the process of human perception of gender is analyzed and used to improve machine learning algorithms. One paper addresses the issues of variability in human annotation of facial landmarks, which most papers regard as a static “gold standard...... on genetic information, a new area that holds great potential. Two papers explore the connection between minor physical anomalies in the face and schizophrenic disorders. Schizophrenia is a life long disease, but early discovery and treatment can have a significant impact on the course of the disease...

  5. Local intensity area descriptor for facial recognition in ideal and noise conditions

    Science.gov (United States)

    Tran, Chi-Kien; Tseng, Chin-Dar; Chao, Pei-Ju; Ting, Hui-Min; Chang, Liyun; Huang, Yu-Jie; Lee, Tsair-Fwu

    2017-03-01

    We propose a local texture descriptor, local intensity area descriptor (LIAD), which is applied for human facial recognition in ideal and noisy conditions. Each facial image is divided into small regions from which LIAD histograms are extracted and concatenated into a single feature vector to represent the facial image. The recognition is performed using a nearest neighbor classifier with histogram intersection and chi-square statistics as dissimilarity measures. Experiments were conducted with LIAD using the ORL database of faces (Olivetti Research Laboratory, Cambridge), the Face94 face database, the Georgia Tech face database, and the FERET database. The results demonstrated the improvement in accuracy of our proposed descriptor compared to conventional descriptors [local binary pattern (LBP), uniform LBP, local ternary pattern, histogram of oriented gradients, and local directional pattern]. Moreover, the proposed descriptor was less sensitive to noise and had low histogram dimensionality. Thus, it is expected to be a powerful texture descriptor that can be used for various computer vision problems.
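
    The abstract does not spell out the LIAD operator itself, but the surrounding pipeline it describes is the standard one for local texture descriptors. The sketch below is a generic illustration under that reading, with invented names and parameters, and with raw-intensity histograms standing in for LIAD codes: split the image into blocks, build one normalized histogram per block, concatenate them into a single feature vector, and classify with a nearest-neighbour rule under the chi-square dissimilarity mentioned in the abstract.

        import numpy as np

        def block_histograms(img, grid=(8, 8), bins=16):
            """Concatenate per-block histograms into one face descriptor."""
            h, w = img.shape
            bh, bw = h // grid[0], w // grid[1]
            feats = []
            for i in range(grid[0]):
                for j in range(grid[1]):
                    block = img[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
                    feats.append(hist / max(hist.sum(), 1))  # normalise block
            return np.concatenate(feats)

        def chi_square(a, b, eps=1e-10):
            """Chi-square dissimilarity between two histogram vectors."""
            return 0.5 * np.sum((a - b) ** 2 / (a + b + eps))

        def nearest_neighbour(query, gallery_feats, gallery_labels):
            """Identity of the gallery face with minimal chi-square distance."""
            dists = [chi_square(query, g) for g in gallery_feats]
            return gallery_labels[int(np.argmin(dists))]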

  6. The Impact of Sex Differences on Odor Identification and Facial Affect Recognition in Patients with Schizophrenia Spectrum Disorders

    OpenAIRE

    Mossaheb, Nilufar; Kaufmann, Rainer M.; Schlögelhofer, Monika; Aninilkumparambil, Thushara; Himmelbauer, Claudia; Gold, Anna; Zehetmayer, Sonja; Hoffmann, Holger; Traue, Harald C.; Aschauer, Harald

    2018-01-01

    Background Social interactive functions such as facial emotion recognition and smell identification have been shown to differ between women and men. However, little is known about how these differences are mirrored in patients with schizophrenia and how these abilities interact with each other and with other clinical variables in patients vs. healthy controls. Methods Standardized instruments were used to assess facial emotion recognition [Facially Expressed Emotion Labelling (FEEL)] and smel...

  7. The Impact of Sex Differences on Odor Identification and Facial Affect Recognition in Patients with Schizophrenia Spectrum Disorders

    OpenAIRE

    Nilufar Mossaheb; Rainer M. Kaufmann; Monika Schlögelhofer; Thushara Aninilkumparambil; Claudia Himmelbauer; Anna Gold; Sonja Zehetmayer; Holger Hoffmann; Harald C. Traue; Harald Aschauer

    2018-01-01

    Background: Social interactive functions such as facial emotion recognition and smell identification have been shown to differ between women and men. However, little is known about how these differences are mirrored in patients with schizophrenia and how these abilities interact with each other and with other clinical variables in patients vs. healthy controls. Methods: Standardized instruments were used to assess facial emotion recognition [Facially Expressed Emotion Labelling (FEEL)] and smell i...

  8. Relationship between individual differences in functional connectivity and facial-emotion recognition abilities in adults with traumatic brain injury.

    Science.gov (United States)

    Rigon, A; Voss, M W; Turkstra, L S; Mutlu, B; Duff, M C

    2017-01-01

    Although several studies have demonstrated that facial-affect recognition impairment is common following moderate-severe traumatic brain injury (TBI), and that there are diffuse alterations in large-scale functional brain networks in TBI populations, little is known about the relationship between the two. Here, in a sample of 26 participants with TBI and 20 healthy comparison participants (HC) we measured facial-affect recognition abilities and resting-state functional connectivity (rs-FC) using fMRI. We then used network-based statistics to examine (A) the presence of rs-FC differences between individuals with TBI and HC within the facial-affect processing network, and (B) the association between inter-individual differences in emotion recognition skills and rs-FC within the facial-affect processing network. We found that participants with TBI showed significantly lower rs-FC in a component comprising homotopic and within-hemisphere, anterior-posterior connections within the facial-affect processing network. In addition, within the TBI group, participants with higher emotion-labeling skills showed stronger rs-FC within a network comprised of intra- and inter-hemispheric bilateral connections. Findings indicate that the ability to successfully recognize facial-affect after TBI is related to rs-FC within components of facial-affective networks, and provide new evidence that further our understanding of the mechanisms underlying emotion recognition impairment in TBI.

  9. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality

    Directory of Open Access Journals (Sweden)

    Dhwani Mehta

    2018-02-01

    Full Text Available The wide range of possible applications has made emotion recognition an unavoidable and challenging problem in computer science. Non-verbal cues such as gestures, body movement, and facial expressions convey feeling and feedback to the user. This discipline of Human–Computer Interaction relies on algorithmic robustness and the sensitivity of the sensor to improve recognition. Sensors play a significant role in accurate detection by providing a very high-quality input, hence increasing the efficiency and the reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence to machines. This paper presents a brief study of the various approaches and techniques of emotion recognition. The survey covers a succinct review of the databases that serve as data sets for algorithms detecting emotions from facial expressions. Later, the mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition, and some preliminary results of emotion recognition using MHL are presented. The paper concludes by comparing the results of emotion recognition by the MHL and a regular webcam.

  10. Violent video game players and non-players differ on facial emotion recognition.

    Science.gov (United States)

    Diaz, Ruth L; Wong, Ulric; Hodgins, David C; Chiu, Carina G; Goghari, Vina M

    2016-01-01

    Violent video game playing has been associated with both positive and negative effects on cognition. We examined whether playing two or more hours of violent video games a day, compared to not playing video games, was associated with a different pattern of recognition of five facial emotions, while controlling for general perceptual and cognitive differences that might also occur. Undergraduate students were categorized as violent video game players (n = 83) or non-gamers (n = 69) and completed a facial recognition task, consisting of an emotion recognition condition and a control condition of gender recognition. Additionally, participants completed questionnaires assessing their video game and media consumption, aggression, and mood. Violent video game players recognized fearful faces both more accurately and quickly and disgusted faces less accurately than non-gamers. Desensitization to violence, constant exposure to fear and anxiety during game playing, and the habituation to unpleasant stimuli, are possible mechanisms that could explain these results. Future research should evaluate the effects of violent video game playing on emotion processing and social cognition more broadly. © 2015 Wiley Periodicals, Inc.

  11. Task-dependent enhancement of facial expression and identity representations in human cortex.

    Science.gov (United States)

    Dobs, Katharina; Schultz, Johannes; Bülthoff, Isabelle; Gardner, Justin L

    2018-05-15

    What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of exemplars of expression both in early visual and dorsal areas relative to attending identity. Similarly, decoding identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to expression or identity of dynamic faces is associated with increased selectivity in representations consistent with sensitivity enhancement. Copyright © 2018 The Author(s). Published by Elsevier Inc. All rights reserved.

  12. Is the emotion recognition deficit associated with frontotemporal dementia caused by selective inattention to diagnostic facial features?

    Science.gov (United States)

    Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V

    2014-07-01

    Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. A dynamic texture based approach to recognition of facial actions and their temporal models

    NARCIS (Netherlands)

    Koelstra, Sander; Pantic, Maja; Patras, Ioannis (Yannis)

    2010-01-01

    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the

  14. Recognition of social identity in ants

    Directory of Open Access Journals (Sweden)

    Nick eBos

    2012-03-01

    Full Text Available Recognizing the identity of others, from the individual to the group level, is a hallmark of society. Ants, and other social insects, have evolved advanced societies characterized by efficient social recognition systems. Colony identity is mediated by colony-specific signature mixtures, a blend of hydrocarbons present on the cuticle of every individual (the label). Recognition occurs when an ant encounters another individual and compares the label it perceives to an internal representation of its own colony odor (the template). A mismatch between label and template leads to rejection of the encountered individual. Although advances have been made in our understanding of how the label is produced and acquired, contradictory evidence exists about information processing of recognition cues. Here, we review the literature on template acquisition in ants and address how and when the template is formed, where in the nervous system it is localized, and the possible role of learning. We combine seemingly contradictory evidence into a novel, parsimonious theory for the information processing of nestmate recognition cues.

  15. Abnormal Facial Emotion Recognition in Depression: Serial Testing in an Ultra-Rapid-Cycling Patient.

    Science.gov (United States)

    George, Mark S.; Huggins, Teresa; McDermut, Wilson; Parekh, Priti I.; Rubinow, David; Post, Robert M.

    1998-01-01

    Mood disorder subjects have a selective deficit in recognizing human facial emotion. Whether the facial emotion recognition errors persist during normal mood states (i.e., are state vs. trait dependent) was studied in one male bipolar II patient. Results of five sessions are presented and discussed. (Author/EMK)

  16. Recognition of facial expressions by cortical multi-scale line and edge coding

    OpenAIRE

    Sousa, R.; Rodrigues, J. M. F.; du Buf, J. M. H.

    2010-01-01

    Face-to-face communications between humans involve emotions, which often are unconsciously conveyed by facial expressions and body gestures. Intelligent human-machine interfaces, for example in cognitive robotics, need to recognize emotions. This paper addresses facial expressions and their neural correlates on the basis of a model of the visual cortex: the multi-scale line and edge coding. The recognition model links the cortical representation with Paul Ekman's Action Units which are relate...

  17. Gaze Dynamics in the Recognition of Facial Expressions of Emotion.

    Science.gov (United States)

    Barabanschikov, Vladimir A

    2015-01-01

    We studied which parts and features of the human face are preferentially fixated during the recognition of facial expressions of emotion. Photographs of facial expressions were used; participants categorized these as basic emotions while their eye movements were recorded. Variation in the intensity of an expression was mirrored in the accuracy of emotion recognition, and was also reflected in several indices of oculomotor function: the duration of inspection of certain areas of the face (its upper and lower or right parts, right and left sides), the location, number and duration of fixations, and the viewing trajectory. In particular, for low-intensity expressions the right side of the face was attended to predominantly (right-side dominance); this right-side dominance effect was, however, absent for expressions of high intensity. For both low- and high-intensity expressions the upper part of the face was predominantly fixated, with greater fixation for high-intensity expressions. The majority of trials (70%), in line with findings in previous studies, revealed a V-shaped inspection trajectory. However, recognition accuracy was not related to the location or duration of fixations, nor to the pattern of gaze direction on the face. © The Author(s) 2015.

  18. A specific association between facial disgust recognition and estradiol levels in naturally cycling women.

    Directory of Open Access Journals (Sweden)

    Sunjeev K Kamboj

    Full Text Available Subtle changes in social cognition are associated with naturalistic fluctuations in estrogens and progesterone over the course of the menstrual cycle. Using a dynamic emotion recognition task we aimed to provide a comprehensive description of the association between ovarian hormone levels and emotion recognition performance using a variety of performance metrics. Naturally cycling, psychiatrically healthy women attended a single experimental session during a follicular (days 7-13; n = 16), early luteal (days 15-19; n = 14) or late luteal phase (days 22-27; n = 14) of their menstrual cycle. Correct responses and reaction times to dynamic facial expressions were recorded and a two-high threshold analysis was used to assess discrimination and response bias. Salivary progesterone and estradiol were assayed and subjective measures of premenstrual symptoms, anxiety and positive and negative affect assessed. There was no interaction between cycle phase (follicular, early luteal, late luteal) and facial expression (sad, happy, fearful, angry, neutral and disgusted) on any of the recognition performance metrics. However, across the sample as a whole, progesterone levels were positively correlated with reaction times to a variety of facial expressions (anger, happiness, sadness and neutral expressions). In contrast, estradiol levels were specifically correlated with disgust processing on three performance indices (correct responses, response bias and discrimination). Premenstrual symptoms, anxiety and positive and negative affect were not associated with emotion recognition indices or hormone levels. The study highlights the role of naturalistic variations in ovarian hormone levels in modulating emotion recognition. In particular, progesterone seems to have a general slowing effect on facial expression processing. Our findings also provide the first behavioural evidence of a specific role for estrogens in the processing of disgust in humans.
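
    As a point of reference for the two-high-threshold analysis mentioned above, the standard indices (in the style of Snodgrass and Corwin, 1988) are discrimination Pr = H - F and response bias Br = F / (1 - Pr). A minimal sketch, with invented example rates:

        def two_high_threshold(hit_rate, fa_rate):
            """Two-high-threshold indices: discrimination Pr = H - F,
            response bias Br = F / (1 - Pr)."""
            pr = hit_rate - fa_rate
            br = fa_rate / (1.0 - pr) if pr < 1.0 else float("nan")
            return pr, br

        # Invented example: 80% hits, 20% false alarms.
        print(two_high_threshold(0.80, 0.20))  # (0.6, 0.5)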

  19. Unspoken vowel recognition using facial electromyogram.

    Science.gov (United States)

    Arjunan, Sridhar P; Kumar, Dinesh K; Yau, Wai C; Weghorn, Hans

    2006-01-01

    The paper aims to identify speech from facial muscle activity, without audio signals. It presents an effective technique that measures the relative activity of the articulatory muscles. Five English vowels were used as recognition variables. The moving root mean square (RMS) of the surface electromyogram (SEMG) of four facial muscles was used to segment the signal and identify the start and end of an utterance. The RMS of the signal between the start and end markers was integrated and normalised, representing the relative activity of the four muscles. These features were classified using a back-propagation neural network to identify the speech. The technique successfully classified the 5 vowels into three classes and was not sensitive to variation in the speed and style of speaking across subjects. The results also show that the technique was suitable for classifying the 5 vowels into 5 classes when trained for each subject. It is suggested that such a technology may be used to give simple unvoiced commands when trained for a specific user.
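
    A minimal sketch of the moving-RMS envelope used above for segmentation and relative-activity estimation follows; the window length and the normalisation rule are illustrative assumptions, not the authors' exact settings.

        import numpy as np

        def moving_rms(semg, window=128):
            """Moving root-mean-square envelope of one raw SEMG channel."""
            squared = np.asarray(semg, dtype=float) ** 2
            kernel = np.ones(window) / window
            return np.sqrt(np.convolve(squared, kernel, mode="same"))

        def relative_activity(semg, start, end, window=128):
            """Integrate the RMS envelope between utterance markers and
            normalise, giving one relative-activity value per muscle."""
            env = moving_rms(semg, window)[start:end]
            return env.sum() / (env.max() * len(env) + 1e-12)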

  20. Comparing Facial Emotional Recognition in Patients with Borderline Personality Disorder and Patients with Schizotypal Personality Disorder with a Normal Group.

    Science.gov (United States)

    Farsham, Aida; Abbaslou, Tahereh; Bidaki, Reza; Bozorg, Bonnie

    2017-04-01

    Objective: No research has been conducted on facial emotion recognition in patients with borderline personality disorder (BPD) and schizotypal personality disorder (SPD). The present study aimed at comparing facial emotion recognition in these patients with that in the general population. The neurocognitive processing of emotions can reveal the pathologic style of these 2 disorders. Method: Twenty BPD patients, 16 SPD patients, and 20 healthy individuals were selected by the available sampling method. The Structured Clinical Interview for Axis II, the Millon Personality Inventory, the Beck Depression Inventory and a Facial Emotion Recognition Test were administered to all participants. Discussion: The results of one-way ANOVA and Scheffé's post hoc test revealed significant differences in the neuropsychological assessment of facial emotion recognition between the BPD and SPD patients and the normal group (p = 0.001). A significant difference was found in recognition of fear between the BPD group and the normal population (p = 0.008). A significant difference was observed between SPD patients and the control group in recognition of wonder (p = 0.04). The obtained results indicated a deficit in negative emotion recognition, especially of disgust; thus, it can be concluded that these patients have the same neurocognitive profile in the emotion domain.

  1. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.

    Science.gov (United States)

    Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto

    2015-04-01

    The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits

  2. Learning weighted sparse representation of encoded facial normal information for expression-robust 3D face recognition

    KAUST Repository

    Li, Huibin; Di, Huang; Morvan, Jean-Marie; Chen, Liming

    2011-01-01

    This paper proposes a novel approach for 3D face recognition by learning weighted sparse representation of encoded facial normal information. To comprehensively describe 3D facial surface, three components, in X, Y, and Z-plane respectively

  3. Interactions between facial emotion and identity in face processing: evidence based on redundancy gains.

    Science.gov (United States)

    Yankouskaya, Alla; Booth, David A; Humphreys, Glyn

    2012-11-01

    Interactions between the processing of emotion expression and form-based information from faces (facial identity) were investigated using the redundant-target paradigm, in which we specifically tested whether identity and emotional expression are integrated in a superadditive manner (Miller, Cognitive Psychology 14:247-279, 1982). In Experiments 1 and 2, participants performed emotion and face identity judgments on faces with sad or angry emotional expressions. Responses to redundant targets were faster than responses to either single target when a universal emotion was conveyed, and performance violated the predictions from a model assuming independent processing of emotion and face identity. Experiment 4 showed that these effects were not modulated by varying interstimulus and nontarget contingencies, and Experiment 5 demonstrated that the redundancy gains were eliminated when faces were inverted. Taken together, these results suggest that the identification of emotion and facial identity interact in face processing.
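
    The superadditivity test referred to above is usually operationalised as Miller's (1982) race-model inequality on response-time distributions: the redundant-target CDF may not exceed the sum of the single-target CDFs at any time point. A sketch of that check, assuming raw reaction-time arrays per condition:

        import numpy as np

        def race_model_violations(rt_red, rt_a, rt_b, n_points=50):
            """Times at which F_red(t) > F_a(t) + F_b(t), i.e. where
            redundant-target responses are faster than any race between
            independent single-target channels allows (Miller, 1982)."""
            all_rts = (rt_red, rt_a, rt_b)
            ts = np.linspace(min(map(np.min, all_rts)),
                             max(map(np.max, all_rts)), n_points)
            ecdf = lambda x: np.mean(np.asarray(x)[:, None] <= ts, axis=0)
            return ts[ecdf(rt_red) > ecdf(rt_a) + ecdf(rt_b)]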

  4. Predicting the Accuracy of Facial Affect Recognition: The Interaction of Child Maltreatment and Intellectual Functioning

    Science.gov (United States)

    Shenk, Chad E.; Putnam, Frank W.; Noll, Jennie G.

    2013-01-01

    Previous research demonstrates that both child maltreatment and intellectual performance contribute uniquely to the accurate identification of facial affect by children and adolescents. The purpose of this study was to extend this research by examining whether child maltreatment affects the accuracy of facial recognition differently at varying…

  5. Facial Recognition of Happiness Is Impaired in Musicians with High Music Performance Anxiety.

    Science.gov (United States)

    Sabino, Alini Daniéli Viana; Camargo, Cristielli M; Chagas, Marcos Hortes N; Osório, Flávia L

    2018-01-01

    Music performance anxiety (MPA) can be defined as a lasting and intense apprehension connected with musical performance in public. Studies suggest that MPA can be regarded as a subtype of social anxiety. Since individuals with social anxiety have deficits in the recognition of facial emotion, we hypothesized that musicians with high levels of MPA would share similar impairments. The aim of this study was to compare parameters of facial emotion recognition (FER) between musicians with high and low MPA. 150 amateur and professional musicians with different musical backgrounds were assessed in respect to their level of MPA and completed a dynamic FER task. The outcomes investigated were accuracy, response time, emotional intensity, and response bias. Musicians with high MPA were less accurate in the recognition of happiness (p = 0.04; d = 0.34), had increased response bias toward fear (p = 0.03), and increased response time to facial emotions as a whole (p = 0.02; d = 0.39). Musicians with high MPA displayed FER deficits that were independent of general anxiety levels and possibly of general cognitive capacity. These deficits may favor the maintenance and exacerbation of experiences of anxiety during public performance, since cues of approval, satisfaction, and encouragement are not adequately recognized.

  6. The integration of visual context information in facial emotion recognition in 5- to 15-year-olds.

    Science.gov (United States)

    Theurel, Anne; Witt, Arnaud; Malsert, Jennifer; Lejeune, Fleur; Fiorentini, Chiara; Barisnikov, Koviljka; Gentaz, Edouard

    2016-10-01

    The current study investigated the role of congruent visual context information in the recognition of facial emotional expression in 190 participants from 5 to 15 years of age. Children performed a matching task that presented pictures with different facial emotional expressions (anger, disgust, happiness, fear, and sadness) in two conditions: with and without a visual context. The results showed that emotions presented with visual context information were recognized more accurately than those presented in the absence of visual context. The context effect remained steady with age but varied according to the emotion presented and the gender of participants. The findings demonstrated for the first time that children from the age of 5 years are able to integrate facial expression and visual context information, and this integration improves facial emotion recognition. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Neuroanatomical correlates of impaired decision-making and facial emotion recognition in early Parkinson's disease.

    Science.gov (United States)

    Ibarretxe-Bilbao, Naroa; Junque, Carme; Tolosa, Eduardo; Marti, Maria-Jose; Valldeoriola, Francesc; Bargallo, Nuria; Zarei, Mojtaba

    2009-09-01

    Decision-making and recognition of emotions are often impaired in patients with Parkinson's disease (PD). The orbitofrontal cortex (OFC) and the amygdala are critical structures subserving these functions. This study was designed to test whether there are any structural changes in these areas that might explain the impairment of decision-making and recognition of facial emotions in early PD. We used the Iowa Gambling Task (IGT) and the Ekman 60 Faces Test, which are sensitive to OFC and amygdala dysfunction respectively, in 24 early PD patients and 24 controls. High-resolution structural magnetic resonance images (MRI) were also obtained. Group analysis using voxel-based morphometry (VBM) showed significant, corrected (P < 0.05) grey matter (GM) volume reductions in these regions in patients. The findings indicate that (i) impairment of decision-making and recognition of facial emotions occurs at the early stages of PD, (ii) these neuropsychological deficits are accompanied by degeneration of OFC and amygdala, and (iii) bilateral OFC reductions are associated with impaired recognition of emotions, and GM volume loss in left lateral OFC is related to decision-making impairment in PD.

  8. Binary pattern flavored feature extractors for Facial Expression Recognition: An overview

    DEFF Research Database (Denmark)

    Kristensen, Rasmus Lyngby; Tan, Zheng-Hua; Ma, Zhanyu

    2015-01-01

    This paper conducts a survey of modern binary pattern flavored feature extractors applied to the Facial Expression Recognition (FER) problem. In total, 26 different feature extractors are included, of which six are selected for in-depth description. In addition, the paper unifies important FER...
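
    For readers unfamiliar with the binary-pattern family surveyed here, a minimal sketch of the basic 8-neighbour LBP operator, the ancestor of most of the extractors covered, is given below; it is a plain NumPy illustration (library routines such as scikit-image's local_binary_pattern would normally be used in practice).

        import numpy as np

        def lbp_8_1(img):
            """Basic LBP: re-code each pixel as an 8-bit number, one bit
            per neighbour that is at least as bright as the centre."""
            img = np.asarray(img, dtype=float)
            c = img[1:-1, 1:-1]
            # Neighbour offsets, clockwise from the top-left corner.
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            code = np.zeros(c.shape, dtype=np.uint8)
            for bit, (dy, dx) in enumerate(offsets):
                rows = slice(1 + dy, img.shape[0] - 1 + dy)
                cols = slice(1 + dx, img.shape[1] - 1 + dx)
                code |= (img[rows, cols] >= c).astype(np.uint8) << bit
            return code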

  9. Impact of Social Cognition on Alcohol Dependence Treatment Outcome: Poorer Facial Emotion Recognition Predicts Relapse/Dropout.

    Science.gov (United States)

    Rupp, Claudia I; Derntl, Birgit; Osthaus, Friederike; Kemmler, Georg; Fleischhacker, W Wolfgang

    2017-12-01

    Despite growing evidence for neurobehavioral deficits in social cognition in alcohol use disorder (AUD), the clinical relevance remains unclear, and little is known about its impact on treatment outcome. This study prospectively investigated the impact of neurocognitive social abilities at treatment onset on treatment completion. Fifty-nine alcohol-dependent patients were assessed with measures of social cognition including 3 core components of empathy via paradigms measuring: (i) emotion recognition (the ability to recognize emotions via facial expression), (ii) emotional perspective taking, and (iii) affective responsiveness at the beginning of inpatient treatment for alcohol dependence. Subjective measures were also obtained, including estimates of task performance and a self-report measure of empathic abilities (Interpersonal Reactivity Index). According to treatment outcomes, patients were divided into a patient group with a regular treatment course (e.g., with planned discharge and without relapse during treatment) or an irregular treatment course (e.g., relapse and/or premature and unplanned termination of treatment, "dropout"). Compared with patients completing treatment in a regular fashion, patients with relapse and/or dropout of treatment had significantly poorer facial emotion recognition ability at treatment onset. Additional logistic regression analyses confirmed these results and identified poor emotion recognition performance as a significant predictor for relapse/dropout. Self-report (subjective) measures did not correspond with the neurobehavioral social cognition measures, i.e., with objective task performance. Analyses of individual subtypes of facial emotions revealed poorer recognition particularly of disgust, anger, and no (neutral faces) emotion in patients with relapse/dropout. Social cognition in AUD is clinically relevant. Less successful treatment outcome was associated with poorer facial emotion recognition ability at the beginning of
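
    The predictive analysis described above amounts to a logistic regression of treatment outcome on emotion-recognition performance. A sketch of that kind of model, on entirely synthetic data (all numbers invented, not the study's):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        accuracy = rng.uniform(0.4, 1.0, size=59)             # FER accuracy per patient
        p_dropout = 1 / (1 + np.exp(10 * (accuracy - 0.7)))   # lower accuracy, higher risk
        outcome = rng.binomial(1, p_dropout)                  # 1 = relapse/dropout

        model = LogisticRegression().fit(accuracy.reshape(-1, 1), outcome)
        print(model.coef_[0][0])  # negative: better recognition, lower risk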

  10. Face shape and face identity processing in behavioral variant fronto-temporal dementia: A specific deficit for familiarity and name recognition of famous faces.

    Science.gov (United States)

    De Winter, François-Laurent; Timmers, Dorien; de Gelder, Beatrice; Van Orshoven, Marc; Vieren, Marleen; Bouckaert, Miriam; Cypers, Gert; Caekebeke, Jo; Van de Vliet, Laura; Goffin, Karolien; Van Laere, Koen; Sunaert, Stefan; Vandenberghe, Rik; Vandenbulcke, Mathieu; Van den Stock, Jan

    2016-01-01

    Deficits in face processing have been described in the behavioral variant of fronto-temporal dementia (bvFTD), primarily regarding the recognition of facial expressions. Less is known about face shape and face identity processing. Here we used a hierarchical strategy targeting face shape and face identity recognition in bvFTD and matched healthy controls. Participants performed 3 psychophysical experiments targeting face shape detection (Experiment 1), unfamiliar face identity matching (Experiment 2), familiarity categorization and famous face-name matching (Experiment 3). The results revealed group differences only in Experiment 3, with a deficit in the bvFTD group for both familiarity categorization and famous face-name matching. Voxel-based morphometry regression analyses in the bvFTD group revealed an association between grey matter volume of the left ventral anterior temporal lobe and familiarity recognition, while face-name matching correlated with grey matter volume of the bilateral ventral anterior temporal lobes. Subsequently, we quantified familiarity-specific and name-specific recognition deficits as the number of celebrities for whom, respectively, only the name or only the familiarity was accurately recognized. Both indices were associated with grey matter volume of the bilateral anterior temporal cortices. These findings extend previous results by documenting the involvement of the left anterior temporal lobe (ATL) in familiarity detection and the right ATL in name recognition deficits in fronto-temporal lobar degeneration.

  11. Psychopathic traits in adolescents and recognition of emotion in facial expressions

    Directory of Open Access Journals (Sweden)

    Silvio José Lemos Vasconcellos

    2014-12-01

    Full Text Available Recent studies have investigated the ability of adult psychopaths and children with psychopathy traits to identify specific facial expressions of emotion. Conclusive results have not yet been found regarding whether psychopathic traits are associated with a specific deficit in the ability to identify negative emotions such as fear and sadness. This study compared 20 adolescents with psychopathic traits and 21 adolescents without these traits in terms of their ability to recognize facial expressions of emotion, using facial stimuli presented for 200 milliseconds, 500 milliseconds, and 1 second. Analyses indicated significant differences between the two groups' performances only for fear, and only when faces were displayed for 200 ms. This finding is consistent with findings from other studies in the field and suggests that controlling the duration of exposure to affective stimuli in future studies may help to clarify the mechanisms underlying the facial affect recognition deficits of individuals with psychopathic traits.

  12. Visual Scanning Patterns and Executive Function in Relation to Facial Emotion Recognition in Aging

    Science.gov (United States)

    Circelli, Karishma S.; Clark, Uraina S.; Cronin-Golomb, Alice

    2012-01-01

    Objective The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. Methods We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. Results OA were less accurate than YA at identifying fear (p < .05). Executive function performance was correlated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. Conclusion We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition. PMID:22616800

  13. Elevated responses to constant facial emotions in different faces in the human amygdala: an fMRI study of facial identity and expression

    Directory of Open Access Journals (Sweden)

    Weiller Cornelius

    2004-11-01

    Full Text Available Abstract Background Human faces provide important signals in social interactions by conveying two main types of information: individual identity and emotional expression. The ability to readily assess both the variability and the consistency of emotional expressions in different individuals is central to one's own interpretation of the immediate environment. A factorial design was used to systematically test the interaction of either constant or variable emotional expressions with constant or variable facial identities in areas involved in face processing, using functional magnetic resonance imaging. Results Previous studies suggest a predominant role of the amygdala in the assessment of emotional variability. Here we extend this view by showing that this structure was activated by faces with changing identities that display constant emotional expressions. Within this condition, amygdala activation was dependent on the type and intensity of displayed emotion, with significant responses to fearful expressions and, to a lesser extent, to neutral and happy expressions. In contrast, the lateral fusiform gyrus showed a binary pattern of increased activation to changing stimulus features, while it was also differentially responsive to the intensity of displayed emotion when processing different facial identities. Conclusions These results suggest that the amygdala might serve to detect constant facial emotions in different individuals, complementing its established role for detecting emotional variability.

  14. Effects of Oxytocin on Facial Expression and Identity Working Memory Are Found in Females but Not Males.

    Science.gov (United States)

    Yue, Tong; Yue, Caizhen; Liu, Guangyuan; Huang, Xiting

    2018-01-01

    Although oxytocin (OXT) has been shown to increase the ability of face perception and processing, no study has explored whether it could improve the performance of working memory for emotional expression information in males and females. Thus, we performed a double-blind, mixed-design, placebo-controlled study to investigate the effects of OXT on temporary maintenance/manipulation of facial information through a facial expression (EMO) vs. identity (ID) working memory task, both for males ( N = 45) and females ( N = 46). Our results showed that in female participants, OXT increased the accuracy of the recognition of faces displaying angry and happy emotions, in the EMO tasks, and also reduced the response time to negative emotional faces, in the ID task. However, the above effects were not present in male subjects. These results indicate that OXT may increase the efficiency of working memory in face processing and this trend is reflected in females rather than in males. This study provides novel evidence for the sexually dimorphic effects of OXT on social cognition.

  15. Effects of Oxytocin on Facial Expression and Identity Working Memory Are Found in Females but Not Males

    Directory of Open Access Journals (Sweden)

    Tong Yue

    2018-04-01

    Full Text Available Although oxytocin (OXT) has been shown to increase the ability of face perception and processing, no study has explored whether it could improve the performance of working memory for emotional expression information in males and females. Thus, we performed a double-blind, mixed-design, placebo-controlled study to investigate the effects of OXT on temporary maintenance/manipulation of facial information through a facial expression (EMO) vs. identity (ID) working memory task, both for males (N = 45) and females (N = 46). Our results showed that in female participants, OXT increased the accuracy of the recognition of faces displaying angry and happy emotions, in the EMO tasks, and also reduced the response time to negative emotional faces, in the ID task. However, the above effects were not present in male subjects. These results indicate that OXT may increase the efficiency of working memory in face processing and this trend is reflected in females rather than in males. This study provides novel evidence for the sexually dimorphic effects of OXT on social cognition.

  16. Static and dynamic 3D facial expression recognition: A comprehensive survey

    NARCIS (Netherlands)

    Sandbach, G.; Zafeiriou, S.; Pantic, Maja; Yin, Lijun

    2012-01-01

    Automatic facial expression recognition constitutes an active research field due to the latest advances in computing technology that make the user's experience a clear priority. The majority of work conducted in this area involves 2D imagery, despite the problems this presents due to inherent pose

  17. Comparing Facial Emotional Recognition in Patients with Borderline Personality Disorder and Patients with Schizotypal Personality Disorder with a Normal Group

    Directory of Open Access Journals (Sweden)

    Aida Farsham

    2017-04-01

    Full Text Available Objective: No research has been conducted on facial emotion recognition in patients with borderline personality disorder (BPD) and schizotypal personality disorder (SPD). The present study aimed at comparing facial emotion recognition in these patients with that in the general population. The neurocognitive processing of emotions can reveal the pathologic style of these 2 disorders. Method: Twenty BPD patients, 16 SPD patients, and 20 healthy individuals were selected by the available sampling method. The Structured Clinical Interview for Axis II, the Millon Personality Inventory, the Beck Depression Inventory and a Facial Emotion Recognition Test were administered to all participants. Discussion: The results of one-way ANOVA and Scheffé's post hoc test revealed significant differences in the neuropsychological assessment of facial emotion recognition between the BPD and SPD patients and the normal group (p = 0.001). A significant difference was found in recognition of fear between the BPD group and the normal population (p = 0.008). A significant difference was observed between SPD patients and the control group in recognition of wonder (p = 0.04). The obtained results indicated a deficit in negative emotion recognition, especially of disgust; thus, it can be concluded that these patients have the same neurocognitive profile in the emotion domain.

  18. In the face of threat: neural and endocrine correlates of impaired facial emotion recognition in cocaine dependence.

    Science.gov (United States)

    Ersche, K D; Hagan, C C; Smith, D G; Jones, P S; Calder, A J; Williams, G B

    2015-05-26

    The ability to recognize facial expressions of emotion in others is a cornerstone of human interaction. Selective impairments in the recognition of facial expressions of fear have frequently been reported in chronic cocaine users, but the nature of these impairments remains poorly understood. We used the multivariate method of partial least squares and structural magnetic resonance imaging to identify gray matter brain networks that underlie facial affect processing in both cocaine-dependent (n = 29) and healthy male volunteers (n = 29). We hypothesized that disruptions in neuroendocrine function in cocaine-dependent individuals would explain their impairments in fear recognition by modulating the relationship with the underlying gray matter networks. We found that cocaine-dependent individuals exhibited significant impairments in the recognition not only of fear but also of facial expressions of anger. Although recognition accuracy of threatening expressions co-varied in all participants with distinctive gray matter networks implicated in fear and anger processing, in cocaine users it was less well predicted by these networks than in controls. The weaker brain-behavior relationships for threat processing were also mediated by distinctly different factors. Fear recognition impairments were influenced by variations in intelligence levels, whereas anger recognition impairments were associated with comorbid opiate dependence and related reduction in testosterone levels. We also observed an inverse relationship between testosterone levels and the duration of crack and opiate use. Our data provide novel insight into the neurobiological basis of abnormal threat processing in cocaine dependence, which may shed light on new opportunities facilitating the psychosocial integration of these patients.

  19. Culture/Religion and Identity: Social Justice versus Recognition

    Science.gov (United States)

    Bekerman, Zvi

    2012-01-01

    Recognition is the main word attached to multicultural perspectives. The multicultural call for recognition, the call for the recognition of cultural minorities and identities now voiced by liberal states everywhere, including Israel, was a more difficult one. It took the author some time to realize that calling for the recognition…

  20. Learning weighted sparse representation of encoded facial normal information for expression-robust 3D face recognition

    KAUST Repository

    Li, Huibin

    2011-10-01

    This paper proposes a novel approach for 3D face recognition by learning weighted sparse representation of encoded facial normal information. To comprehensively describe 3D facial surface, three components, in X, Y, and Z-plane respectively, of normal vector are encoded locally to their corresponding normal pattern histograms. They are finally fed to a sparse representation classifier enhanced by learning based spatial weights. Experimental results achieved on the FRGC v2.0 database prove that the proposed encoded normal information is much more discriminative than original normal information. Moreover, the patch based weights learned using the FRGC v1.0 and Bosphorus datasets also demonstrate the importance of each facial physical component for 3D face recognition. © 2011 IEEE.
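
    The sparse representation classifier (SRC) at the core of this approach is well documented elsewhere (Wright et al., 2009): code the probe as a sparse combination of gallery columns, then assign the class whose atoms reconstruct it with the smallest residual. The sketch below shows that unweighted backbone only; the learned spatial weights that are this paper's contribution are omitted, and the lasso penalty is an illustrative stand-in for the exact l1 solver.

        import numpy as np
        from sklearn.linear_model import Lasso

        def src_classify(dictionary, labels, probe, alpha=0.01):
            """dictionary: (d, n) matrix whose columns are l2-normalised
            gallery feature vectors; labels: length-n class array;
            probe: length-d feature vector to identify."""
            lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
            lasso.fit(dictionary, probe)            # sparse code of the probe
            x = lasso.coef_
            residuals = {}
            for c in np.unique(labels):
                xc = np.where(labels == c, x, 0.0)  # keep class-c atoms only
                residuals[c] = np.linalg.norm(probe - dictionary @ xc)
            return min(residuals, key=residuals.get)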

  1. Quality of life differences in patients with right- versus left-sided facial paralysis: Universal preference of right-sided human face recognition.

    Science.gov (United States)

    Ryu, Nam Gyu; Lim, Byung Woo; Cho, Jae Keun; Kim, Jin

    2016-09-01

    We investigated whether experiencing right- or left-sided facial paralysis affects an individual's ability to recognize one side of the human face, in a preliminary study using hybrid hemi-facial photos. Further investigation looked at the relationship between facial recognition ability, stress, and quality of life. To investigate the predominance of one side of the human face in face recognition, 100 normal participants (right-handed: n = 97, left-handed: n = 3, right brain dominance: n = 56, left brain dominance: n = 44) answered a questionnaire that included hybrid hemi-facial photos developed to determine the superiority of one side for human face recognition. To determine differences in stress level and quality of life between individuals experiencing right- and left-sided facial paralysis, 100 patients (right side: 50, left side: 50, excluding traumatic facial nerve paralysis) completed the Facial Disability Index and a quality-of-life questionnaire (SF-36, Korean version). Regardless of handedness or hemispheric dominance, the proportion of participants showing right-side predominance in human face recognition was larger than for the left side (71% versus 12%; neutral: 17%). The facial distress index of patients with right-sided facial paralysis was lower than that of left-sided patients (68.8 ± 9.42 versus 76.4 ± 8.28), and the SF-36 scores of right-sided patients were lower than those of left-sided patients (119.07 ± 15.24 versus 123.25 ± 16.48; total score: 166). Given the universal preference for the right side in human face recognition, patients with right-sided facial paralysis showed worse psychological mood and social interaction than those with left-sided paralysis. This information is helpful to clinicians in that psychological and social factors should be considered when treating patients with facial paralysis. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  2. Individual differences in the ability to recognise facial identity are associated with social anxiety.

    Science.gov (United States)

    Davis, Joshua M; McKone, Elinor; Dennett, Hugh; O'Connor, Kirsty B; O'Kearney, Richard; Palermo, Romina

    2011-01-01

    Previous research has been concerned with the relationship between social anxiety and the recognition of face expression but the question of whether there is a relationship between social anxiety and the recognition of face identity has been neglected. Here, we report the first evidence that social anxiety is associated with recognition of face identity, across the population range of individual differences in recognition abilities. Results showed that poorer face identity recognition (on the Cambridge Face Memory Test) was correlated with a small but significant increase in social anxiety (Social Interaction Anxiety Scale) but not general anxiety (State-Trait Anxiety Inventory). The correlation was also independent of general visual memory (Cambridge Car Memory Test) and IQ. Theoretically, the correlation could arise because correct identification of people, typically achieved via faces, is important for successful social interactions, extending evidence that individuals with clinical-level deficits in face identity recognition (prosopagnosia) often report social stress due to their inability to recognise others. Equally, the relationship could arise if social anxiety causes reduced exposure or attention to people's faces, and thus to poor development of face recognition mechanisms.

  3. Affective theory of mind inferences contextually influence the recognition of emotional facial expressions.

    Science.gov (United States)

    Stewart, Suzanne L K; Schepman, Astrid; Haigh, Matthew; McHugh, Rhian; Stewart, Andrew J

    2018-03-14

    The recognition of emotional facial expressions is often subject to contextual influence, particularly when the face and the context convey similar emotions. We investigated whether spontaneous, incidental affective theory of mind inferences made while reading vignettes describing social situations would produce context effects on the identification of same-valenced emotions (Experiment 1) as well as differently-valenced emotions (Experiment 2) conveyed by subsequently presented faces. Crucially, we found an effect of context on reaction times in both experiments while, in line with previous work, we found evidence for a context effect on accuracy only in Experiment 1. This demonstrates that affective theory of mind inferences made at the pragmatic level of a text can automatically, contextually influence the perceptual processing of emotional facial expressions in a separate task even when those emotions are of a distinctive valence. Thus, our novel findings suggest that language acts as a contextual influence to the recognition of emotional facial expressions for both same and different valences.

  4. Facial emotion recognition, socio-occupational functioning and expressed emotions in schizophrenia versus bipolar disorder.

    Science.gov (United States)

    Thonse, Umesh; Behere, Rishikesh V; Praharaj, Samir Kumar; Sharma, Podila Sathya Venkata Narasimha

    2018-06-01

    Facial emotion recognition deficits have been consistently demonstrated in patients with severe mental disorders. Expressed emotion is found to be an important predictor of relapse. However, the relationship between facial emotion recognition abilities and expressed emotions, and its influence on socio-occupational functioning, in schizophrenia versus bipolar disorder has not been studied. In this study we examined 91 patients with schizophrenia and 71 with bipolar disorder for psychopathology, socio-occupational functioning and emotion recognition abilities. Primary caregivers of 62 patients with schizophrenia and 49 with bipolar disorder were assessed on the Family Attitude Questionnaire to assess their expressed emotions. Patients with schizophrenia and bipolar disorder performed similarly on the emotion recognition task. Patients in the schizophrenia group experienced more critical comments and had poorer socio-occupational functioning than patients with bipolar disorder. Poorer socio-occupational functioning in patients with schizophrenia was significantly associated with greater dissatisfaction in their caregivers. In patients with bipolar disorder, poorer emotion recognition scores significantly correlated with poorer adaptive living skills and greater hostility and dissatisfaction in their caregivers. The findings of our study suggest that emotion recognition abilities in patients with bipolar disorder are associated with negative expressed emotions, leading to problems in adaptive living skills. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Power-Law Radon-Transformed Superimposed Inverse Filter Synthetic Discriminant Correlator for Facial Recognition

    National Research Council Canada - National Science Library

    Haji-saeed, Bahareh; Khoury, Jed; Woods, Charles L; Kierstead, John

    2008-01-01

    ...) for facial recognition is proposed. In order to avoid spectral overlap and nonlinear crosstalk, superposition of rotationally variant sets of inverse filter Fourier-transformed Radon-processed templates is used to generate the SDF...

  6. Face Processing and Facial Emotion Recognition in Adults with Down Syndrome

    Science.gov (United States)

    Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial

    2008-01-01

    Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…

  7. A group of facial normal descriptors for recognizing 3D identical twins

    KAUST Repository

    Li, Huibin

    2012-09-01

    In this paper, to characterize and distinguish identical twins, three popular texture descriptors, i.e. local binary patterns (LBPs), Gabor filters (GFs) and local Gabor binary patterns (LGBPs), are employed to encode the normal components (x, y and z) of the 3D facial surfaces of identical twins. A group of facial normal descriptors is thus obtained, including the Normal Local Binary Patterns descriptor (N-LBPs), the Normal Gabor Filters descriptor (N-GFs) and the Normal Local Gabor Binary Patterns descriptor (N-LGBPs). All these normal encoding based descriptors are further fed into a sparse representation classifier (SRC) for identification. Experimental results on the 3D TEC database demonstrate that these proposed normal encoding based descriptors are very discriminative and efficient, achieving performance comparable to the best of state-of-the-art algorithms. © 2012 IEEE.
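
    A minimal sketch of the N-LBP branch described above, under stated assumptions: each face arrives as three aligned 2D maps of the surface normal's x, y and z components, scikit-image supplies the LBP codes, and the SRC step is approximated with scikit-learn's Lasso as the l1 solver followed by class-wise reconstruction residuals. It is not the authors' implementation.

```python
# Sketch of N-LBP + SRC, assuming pre-computed 2D normal-component maps.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import Lasso

def normal_lbp(nx, ny, nz, P=8, R=1.0):
    """N-LBP: LBP-code each normal component map, histogram, and concatenate."""
    feats = []
    for comp in (nx, ny, nz):
        codes = local_binary_pattern(comp, P, R, method="nri_uniform")
        hist, _ = np.histogram(codes, bins=59, range=(0, 59), density=True)
        feats.append(hist)
    return np.concatenate(feats)

def src_identify(gallery, labels, probe, alpha=1e-3):
    """SRC: sparse-code the probe over gallery atoms, then assign the class
    whose atoms alone reconstruct the probe with the smallest residual."""
    labels = np.asarray(labels)
    D = gallery.T                                  # (n_features, n_atoms)
    w = Lasso(alpha=alpha, max_iter=5000).fit(D, probe).coef_
    def class_residual(c):
        wc = np.where(labels == c, w, 0.0)         # keep only class-c coefficients
        return np.linalg.norm(probe - D @ wc)
    return min(set(labels.tolist()), key=class_residual)
```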

  8. 3D facial expression recognition based on histograms of surface differential quantities

    KAUST Repository

    Li, Huibin

    2011-01-01

    3D face models accurately capture facial surfaces, making precise description of facial activities possible. In this paper, we present a novel mesh-based method for 3D facial expression recognition using two local shape descriptors. To characterize the shape information of the local neighborhood of facial landmarks, we calculate weighted statistical distributions of surface differential quantities, including a histogram of mesh gradient (HoG) and a histogram of shape index (HoS). A normal cycle theory based curvature estimation method is employed on the 3D face models, alongside the common cubic fitting curvature estimation method for comparison. Because different expressions involve different local shape deformations, these descriptors are discriminative: an SVM classifier with both linear and RBF kernels outperforms state-of-the-art results on the subset of the BU-3DFE database with the same experimental setting. © 2011 Springer-Verlag.
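
    The HoS descriptor is compact enough to sketch. Assuming per-vertex principal curvatures (k1 >= k2) for a landmark's neighborhood have already been estimated elsewhere (e.g., by the normal cycle or cubic fitting methods the paper compares), the shape index is histogrammed per landmark; the mesh-gradient (HoG) branch is omitted, and the optional area weights are an assumption.

```python
# Sketch of the HoS branch only; curvature estimation is assumed done elsewhere.
import numpy as np

def shape_index(k1, k2):
    """Koenderink shape index S in [-1, 1], with the k1 >= k2 convention:
    S = (2 / pi) * arctan((k1 + k2) / (k1 - k2)).
    Planar points (k1 = k2 = 0) map to 0 here."""
    return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def hos_descriptor(k1_patch, k2_patch, weights=None, bins=16):
    """Weighted histogram of shape index over one landmark neighborhood;
    `weights` could be per-vertex areas (an assumption, not from the paper)."""
    s = shape_index(np.asarray(k1_patch), np.asarray(k2_patch))
    hist, _ = np.histogram(s, bins=bins, range=(-1.0, 1.0),
                           weights=weights, density=True)
    return hist

# A per-face feature would concatenate HoS (and HoG) over all landmarks and
# feed an SVM, e.g. sklearn.svm.SVC(kernel="rbf").
```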

  9. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    Science.gov (United States)

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  10. Empathy, but not mimicry restriction, influences the recognition of change in emotional facial expressions.

    Science.gov (United States)

    Kosonogov, Vladimir; Titova, Alisa; Vorobyeva, Elena

    2015-01-01

    The current study addressed the hypothesis that empathy and the restriction of observers' facial muscles can influence the recognition of emotional facial expressions. A sample of 74 participants identified the subjective onset of emotional facial expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral) in a series of morphed face photographs showing a gradual change (frame by frame) from one expression to another. The high-empathy participants (as measured by the Empathy Quotient) recognized emotional facial expressions earlier in the series than did low-empathy ones, but there was no difference in exploration time. Restriction of the observers' facial muscles (with plasters and a stick in the mouth) did not influence the responses. We discuss these findings in the context of the embodied simulation theory and previous data on empathy.

  11. A facial expression of pax: Assessing children's "recognition" of emotion from faces.

    Science.gov (United States)

    Nelson, Nicole L; Russell, James A

    2016-01-01

    In a classic study, children were shown an array of facial expressions and asked to choose the person who expressed a specific emotion. Children were later asked to name the emotion in the face with any label they wanted. Subsequent research often relied on the same two tasks--choice from array and free labeling--to support the conclusion that children recognize basic emotions from facial expressions. Here five studies (N=120, 2- to 10-year-olds) showed that these two tasks produce illusory recognition; a novel nonsense facial expression was included in the array. Children "recognized" a nonsense emotion (pax or tolen) and two familiar emotions (fear and jealousy) from the same nonsense face. Children likely used a process of elimination; they paired the unknown facial expression with a label given in the choice-from-array task and, after just two trials, freely labeled the new facial expression with the new label. These data indicate that past studies using this method may have overestimated children's expression knowledge. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Facial-based ethnic recognition: insights from two closely related but ethnically distinct groups

    Directory of Open Access Journals (Sweden)

    S. P. Henzi

    2010-02-01

    Previous studies on facial recognition have considered widely separated populations, both geographically and culturally, making it hard to disentangle effects of familiarity from an ability to identify ethnic groups per se. We used data from a highly intermixed population of African peoples from South Africa to test whether individuals from nine different ethnic groups could correctly differentiate between facial images of two of these, the Tswana and Pedi. Individuals could not assign ethnicity better than expected by chance, and there was no significant difference between genders in accuracy of assignment. Interestingly, we observed a trend that individuals of mixed ethnic origin were better at assigning ethnicity to Pedi and Tswanas than individuals from less mixed backgrounds. This result supports the hypothesis that ethnic recognition is based on the visual…

  13. Joint recognition-expression impairment of facial emotions in Huntington's disease despite intact understanding of feelings.

    Science.gov (United States)

    Trinkler, Iris; Cleret de Langavant, Laurent; Bachoud-Lévi, Anne-Catherine

    2013-02-01

    Patients with Huntington's disease (HD), a neurodegenerative disorder that causes major motor impairments, also show cognitive and emotional deficits. While their deficit in recognising emotions has been explored in depth, little is known about their ability to express emotions and understand their feelings. If these faculties were impaired, patients might not only mis-read emotion expressions in others but their own emotions might be mis-interpreted by others as well, or thirdly, they might have difficulties understanding and describing their feelings. We compared the performance of recognition and expression of facial emotions in 13 HD patients with mild motor impairments but without significant bucco-facial abnormalities, and 13 controls matched for age and education. Emotion recognition was investigated in a forced-choice recognition test (FCR), and emotion expression by filming participants while they mimed the six basic emotional facial expressions (anger, disgust, fear, surprise, sadness and joy) to the experimenter. The films were then segmented into 60 stimuli per participant and four external raters performed a FCR on this material. Further, we tested understanding of feelings in self (alexithymia) and others (empathy) using questionnaires. Both recognition and expression were impaired across different emotions in HD compared to controls and recognition and expression scores were correlated. By contrast, alexithymia and empathy scores were very similar in HD and controls. This might suggest that emotion deficits in HD might be tied to the expression itself. Because similar emotion recognition-expression deficits are also found in Parkinson's Disease and vascular lesions of the striatum, our results further confirm the importance of the striatum for emotion recognition and expression, while access to the meaning of feelings relies on a different brain network, and is spared in HD. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Fearful contextual expression impairs the encoding and recognition of target faces: an ERP study

    Directory of Open Access Journals (Sweden)

    Huiyan eLin

    2015-09-01

    Previous event-related potential (ERP) studies have shown that the N170 to faces is modulated by the emotion of the face and its context. However, it is unclear how the encoding of emotional target faces as reflected in the N170 is modulated by the preceding contextual facial expression when the temporal onset and identity of target faces are unpredictable. In addition, no study as yet has investigated whether contextual facial expression modulates later recognition of target faces. To address these issues, participants in the present study were asked to identify target faces (fearful or neutral) that were presented after a sequence of fearful or neutral contextual faces. The number of sequential contextual faces was random, and contextual and target faces were of different identities, so that the temporal onset and identity of target faces were unpredictable. Electroencephalography (EEG) data was recorded during the encoding phase. Subsequently, participants had to perform an unexpected old/new recognition task in which target face identities were presented in either the encoded or the non-encoded expression. ERP data showed a reduced N170 to target faces in fearful as compared to neutral context, regardless of target facial expression. In the later recognition phase, recognition rates were reduced for target faces in the encoded expression when they had been encountered in fearful as compared to neutral context. The present findings suggest that fearful compared to neutral contextual faces reduce the allocation of attentional resources towards target faces, which results in limited encoding and recognition of target faces.

  15. Individual differences in the ability to recognise facial identity are associated with social anxiety.

    Directory of Open Access Journals (Sweden)

    Joshua M Davis

    Previous research has been concerned with the relationship between social anxiety and the recognition of face expression, but the question of whether there is a relationship between social anxiety and the recognition of face identity has been neglected. Here, we report the first evidence that social anxiety is associated with recognition of face identity, across the population range of individual differences in recognition abilities. Results showed that poorer face identity recognition (on the Cambridge Face Memory Test) was correlated with a small but significant increase in social anxiety (Social Interaction Anxiety Scale) but not general anxiety (State-Trait Anxiety Inventory). The correlation was also independent of general visual memory (Cambridge Car Memory Test) and IQ. Theoretically, the correlation could arise because correct identification of people, typically achieved via faces, is important for successful social interactions, extending evidence that individuals with clinical-level deficits in face identity recognition (prosopagnosia) often report social stress due to their inability to recognise others. Equally, the relationship could arise if social anxiety causes reduced exposure or attention to people's faces, and thus poor development of face recognition mechanisms.

  16. Assessing the Utility of a Virtual Environment for Enhancing Facial Affect Recognition in Adolescents with Autism

    Science.gov (United States)

    Bekele, Esubalew; Crittendon, Julie; Zheng, Zhi; Swanson, Amy; Weitlauf, Amy; Warren, Zachary; Sarkar, Nilanjan

    2014-01-01

    Teenagers with autism spectrum disorder (ASD) and age-matched controls participated in a dynamic facial affect recognition task within a virtual reality (VR) environment. Participants identified the emotion of a facial expression displayed at varied levels of intensity by a computer generated avatar. The system assessed performance (i.e.,…

  17. Face-selective regions differ in their ability to classify facial expressions.

    Science.gov (United States)

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-04-15

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy, and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: the amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter. Published by Elsevier Inc.
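
    For readers unfamiliar with fMRI pattern classification, the decoding step reduces to cross-validated multivariate classification of trial-wise voxel patterns per region of interest. The sketch below uses a leave-one-run-out linear SVM on synthetic placeholder arrays; it mirrors the general approach only, not the study's pipeline.

```python
# Minimal ROI decoding sketch on synthetic data (placeholders, not study data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 128, 300                  # hypothetical ROI size
X = rng.normal(size=(n_trials, n_voxels))      # trial-wise voxel patterns
y = rng.integers(0, 4, n_trials)               # neutral/fearful/angry/happy
runs = np.repeat(np.arange(8), 16)             # fMRI run of each trial

# Leave one run out per fold so train and test trials never share a run.
acc = cross_val_score(SVC(kernel="linear"), X, y,
                      cv=LeaveOneGroupOut(), groups=runs).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.25)")
```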

  18. Facial and prosodic emotion recognition in social anxiety disorder.

    Science.gov (United States)

    Tseng, Huai-Hsuan; Huang, Yu-Lien; Chen, Jian-Ting; Liang, Kuei-Yu; Lin, Chao-Cheng; Chen, Sue-Huei

    2017-07-01

    Patients with social anxiety disorder (SAD) have a cognitive preference to negatively evaluate emotional information. In particular, the preferential biases in prosodic emotion recognition in SAD have been much less explored. The present study aims to investigate whether SAD patients retain negative evaluation biases across visual and auditory modalities when given sufficient response time to recognise emotions. Thirty-one SAD patients and 31 age- and gender-matched healthy participants completed a culturally suitable non-verbal emotion recognition task and received clinical assessments for social anxiety and depressive symptoms. A repeated measures analysis of variance was conducted to examine group differences in emotion recognition. Compared to healthy participants, SAD patients were significantly less accurate at recognising facial and prosodic emotions, and spent more time on emotion recognition. The differences were mainly driven by the lower accuracy and longer reaction times for recognising fearful emotions in SAD patients. Within the SAD patients, lower accuracy of sad face recognition was associated with higher severity of depressive and social anxiety symptoms, particularly with avoidance symptoms. These findings may represent a cross-modality pattern of avoidance in the later stage of identifying negative emotions in SAD. This pattern may be linked to clinical symptom severity.

  19. Test battery for measuring the perception and recognition of facial expressions of emotion

    Science.gov (United States)

    Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner

    2014-01-01

    Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528

  20. Anxiety disorders in adolescence are associated with impaired facial expression recognition to negative valence.

    Science.gov (United States)

    Jarros, Rafaela Behs; Salum, Giovanni Abrahão; Belem da Silva, Cristiano Tschiedel; Toazza, Rudineia; de Abreu Costa, Marianna; Fumagalli de Salles, Jerusa; Manfro, Gisele Gus

    2012-02-01

    The aim of the present study was to test the ability of adolescents with a current anxiety diagnosis to recognize facial affective expressions, compared to those without an anxiety disorder. Forty cases and 27 controls were selected from a larger cross-sectional community sample of adolescents, aged from 10 to 17 years old. Adolescents' facial recognition of six human emotions (sadness, anger, disgust, happiness, surprise and fear) and neutral faces was assessed through a facial labeling test using Ekman's Pictures of Facial Affect (POFA). Adolescents with anxiety disorders had a higher mean number of errors for angry faces as compared to controls: 3.1 (SD=1.13) vs. 2.5 (SD=2.5), OR=1.72 (CI95% 1.02 to 2.89; p=0.040). However, they named neutral faces more accurately than adolescents without an anxiety diagnosis: 15% of cases vs. 37.1% of controls presented at least one error in neutral faces, OR=3.46 (CI95% 1.02 to 11.7; p=0.047). No differences were found considering other human emotions or the distribution of errors in each emotional face between the groups. Our findings support an anxiety-mediated influence on the recognition of facial expressions in adolescence. This difficulty in recognizing angry faces, together with greater accuracy in naming neutral faces, may lead to misinterpretation of social cues and can explain some aspects of the impairment in social interactions in adolescents with anxiety disorders. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. The effects of acute alcohol intoxication on the cognitive mechanisms underlying false facial recognition.

    Science.gov (United States)

    Colloff, Melissa F; Flowe, Heather D

    2016-06-01

    False face recognition rates are sometimes higher when faces are learned while under the influence of alcohol. Alcohol myopia theory (AMT) proposes that acute alcohol intoxication during face learning causes people to attend to only the most salient features of a face, impairing the encoding of less salient facial features. Yet, there is currently no direct evidence to support this claim. Our objective was to test whether acute alcohol intoxication impairs face learning by causing subjects to attend to a salient (i.e., distinctive) facial feature over other facial features, as per AMT. We employed a balanced placebo design (N = 100). Subjects in the alcohol group were dosed to achieve a blood alcohol concentration (BAC) of 0.06 %, whereas the no alcohol group consumed tonic water. Alcohol expectancy was controlled. Subjects studied faces with or without a distinctive feature (e.g., scar, piercing). An old-new recognition test followed. Some of the test faces were "old" (i.e., previously studied), and some were "new" (i.e., not previously studied). We varied whether the new test faces had a previously studied distinctive feature versus other familiar characteristics. Intoxicated and sober recognition accuracy was comparable, but subjects in the alcohol group made more positive identifications overall compared to the no alcohol group. The results are not in keeping with AMT. Rather, a more general cognitive mechanism appears to underlie false face recognition in intoxicated subjects. Specifically, acute alcohol intoxication during face learning results in more liberal choosing, perhaps because of an increased reliance on familiarity.

  2. The Moving Window Technique: A Window into Developmental Changes in Attention during Facial Emotion Recognition

    Science.gov (United States)

    Birmingham, Elina; Meixner, Tamara; Iarocci, Grace; Kanan, Christopher; Smilek, Daniel; Tanaka, James W.

    2013-01-01

    The strategies children employ to selectively attend to different parts of the face may reflect important developmental changes in facial emotion recognition. Using the Moving Window Technique (MWT), children aged 5-12 years and adults ("N" = 129) explored faces with a mouse-controlled window in an emotion recognition task. An…

  3. A Modified Sparse Representation Method for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-01-01

    In this paper, we investigate a facial expression recognition method based on a modified sparse representation recognition (MSRR) method. In the first stage, we use Haar-like+LPP to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of adopting the dictionary directly from samples, and add block dictionary training to the training process. In the third stage, the stOMP (stagewise orthogonal matching pursuit) method is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise, on a self-built database and on Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition performance and time efficiency. Simulation results show that the coefficients of the MSRR method contain classifying information, which is capable of improving computing speed and achieving a satisfying recognition result.
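
    The sparse-representation core of such a pipeline can be sketched briefly, with loudly-stated substitutions: scikit-learn's MiniBatchDictionaryLearning stands in for LC-K-SVD, plain OMP stands in for stOMP, and the dynamic regularization factor is omitted. Feature vectors are assumed to be extracted already.

```python
# Sketch of learn-dictionary-then-sparse-code, with stand-in components
# (MiniBatchDictionaryLearning for LC-K-SVD, plain OMP for stOMP).
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import OrthogonalMatchingPursuit

def train_dictionary(features, n_atoms=100):
    """Learn a dictionary from training features of shape (n_samples, n_features)."""
    model = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0)
    model.fit(features)
    return model.components_                    # (n_atoms, n_features)

def sparse_code(dictionary, probe, k=10):
    """Sparse-code one probe vector over the dictionary atoms with OMP."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k)
    omp.fit(dictionary.T, probe)                # columns of dictionary.T = atoms
    return omp.coef_                            # sparse coefficients per atom
```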

  4. A novel dataset for real-life evaluation of facial expression recognition methodologies

    NARCIS (Netherlands)

    Siddiqi, Muhammad Hameed; Ali, Maqbool; Idris, Muhammad; Banos Legran, Oresti; Lee, Sungyoung; Choo, Hyunseung

    2016-01-01

    One limitation seen among most of the previous methods is that they were evaluated under settings that are far from real-life scenarios. The reason is that the existing facial expression recognition (FER) datasets are mostly pose-based and assume a predefined setup. The expressions in these datasets

  5. Active AU Based Patch Weighting for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Weicheng Xie

    2017-01-01

    Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. A sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% for the JAFFE and Cohn–Kanade (CK+) databases, respectively. Better cross-database performance has also been observed.
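
    A schematic sketch of the triplet-wise decomposition follows: one linear SVM per expression triplet, each trained on patch features scaled by triplet-specific weights, combined by majority vote at test time. The `patch_weights` dict is a hypothetical stand-in for the paper's AU-based patch-weight optimization, and integer-coded expression labels are assumed.

```python
# Schematic triplet-wise recognition; `patch_weights` maps each triplet tuple
# to a per-patch weight vector (hypothetical stand-in for AU-based weighting).
from itertools import combinations
import numpy as np
from sklearn.svm import SVC

def train_triplet_models(patch_feats, labels, patch_weights):
    """patch_feats: (n_samples, n_patches, d). One linear SVM per triplet."""
    labels = np.asarray(labels)
    models = {}
    for triplet in combinations(sorted(set(labels.tolist())), 3):
        idx = np.isin(labels, triplet)
        w = patch_weights[triplet]                          # (n_patches,)
        X = (patch_feats[idx] * w[None, :, None]).reshape(idx.sum(), -1)
        models[triplet] = SVC(kernel="linear").fit(X, labels[idx])
    return models

def predict_by_vote(models, patch_feats, patch_weights):
    """Each triplet model votes; the most-voted expression label wins."""
    n = len(patch_feats)
    votes = []
    for triplet, model in models.items():
        X = (patch_feats * patch_weights[triplet][None, :, None]).reshape(n, -1)
        votes.append(model.predict(X))
    votes = np.stack(votes)                                 # (n_triplets, n)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```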

  6. A dynamic texture-based approach to recognition of facial actions and their temporal models.

    Science.gov (United States)

    Koelstra, Sander; Pantic, Maja; Patras, Ioannis

    2010-11-01

    In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images and a novel method based on Nonrigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domain. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination in 264 sequences from the MMI facial expression database, the proposed method achieved an average event recognition accuracy of 89.2 percent for the MHI method and 94.3 percent for the FFD method. The generalization performance of the FFD method has been tested using the Cohn-Kanade database. Finally, we also explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.
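
    The Motion History Image representation that this work extends is simple enough to sketch in plain NumPy. Assuming a list of aligned grayscale frames, moving pixels are stamped with a maximum value and older motion decays linearly, so pixel intensity encodes how recently motion occurred.

```python
# Minimal Motion History Image (MHI) sketch, assuming aligned grayscale frames.
import numpy as np

def motion_history_image(frames, tau=15, thresh=25):
    """Pixels that move are reset to tau; older motion decays by 1 per frame,
    so brighter pixels mark more recent motion."""
    mhi = np.zeros_like(frames[0], dtype=np.float32)
    for prev, curr in zip(frames, frames[1:]):
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        moving = diff > thresh                      # simple frame-difference mask
        mhi = np.where(moving, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi / tau                                # normalized to [0, 1]
```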

  7. The Impact of Sex Differences on Odor Identification and Facial Affect Recognition in Patients with Schizophrenia Spectrum Disorders.

    Science.gov (United States)

    Mossaheb, Nilufar; Kaufmann, Rainer M; Schlögelhofer, Monika; Aninilkumparambil, Thushara; Himmelbauer, Claudia; Gold, Anna; Zehetmayer, Sonja; Hoffmann, Holger; Traue, Harald C; Aschauer, Harald

    2018-01-01

    Social interactive functions such as facial emotion recognition and smell identification have been shown to differ between women and men. However, little is known about how these differences are mirrored in patients with schizophrenia and how these abilities interact with each other and with other clinical variables in patients vs. healthy controls. Standardized instruments were used to assess facial emotion recognition [Facially Expressed Emotion Labelling (FEEL)] and smell identification [University of Pennsylvania Smell Identification Test (UPSIT)] in 51 patients with schizophrenia spectrum disorders and 79 healthy controls; furthermore, working memory functions and clinical variables were assessed. In both the univariate and the multivariate results, illness showed a significant influence on UPSIT and FEEL. The inclusion of age and working memory in the MANOVA resulted in a differential effect with sex and working memory as remaining significant factors. Duration of illness was correlated with both emotion recognition and smell identification in men only, whereas immediate general psychopathology and negative symptoms were associated with emotion recognition only in women. Being affected by schizophrenia spectrum disorder impacts one's ability to correctly recognize facial affects and identify odors. Converging evidence suggests a link between the investigated basic and social cognitive abilities in patients with schizophrenia spectrum disorders with a strong contribution of working memory and differential effects of modulators in women vs. men.

  8. The Impact of Sex Differences on Odor Identification and Facial Affect Recognition in Patients with Schizophrenia Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Nilufar Mossaheb

    2018-01-01

    Background: Social interactive functions such as facial emotion recognition and smell identification have been shown to differ between women and men. However, little is known about how these differences are mirrored in patients with schizophrenia and how these abilities interact with each other and with other clinical variables in patients vs. healthy controls. Methods: Standardized instruments were used to assess facial emotion recognition [Facially Expressed Emotion Labelling (FEEL)] and smell identification [University of Pennsylvania Smell Identification Test (UPSIT)] in 51 patients with schizophrenia spectrum disorders and 79 healthy controls; furthermore, working memory functions and clinical variables were assessed. Results: In both the univariate and the multivariate results, illness showed a significant influence on UPSIT and FEEL. The inclusion of age and working memory in the MANOVA resulted in a differential effect with sex and working memory as remaining significant factors. Duration of illness was correlated with both emotion recognition and smell identification in men only, whereas immediate general psychopathology and negative symptoms were associated with emotion recognition only in women. Conclusion: Being affected by schizophrenia spectrum disorder impacts one’s ability to correctly recognize facial affects and identify odors. Converging evidence suggests a link between the investigated basic and social cognitive abilities in patients with schizophrenia spectrum disorders with a strong contribution of working memory and differential effects of modulators in women vs. men.

  9. Associations between facial emotion recognition and young adolescents' behaviors in bullying.

    Directory of Open Access Journals (Sweden)

    Tiziana Pozzoli

    This study investigated whether the different behaviors young adolescents can enact during bullying episodes were associated with their ability to recognize morphed facial expressions of the six basic emotions, expressed at high and low intensity. The sample included 117 middle-school students (45.3% girls; mean age = 12.4 years) who filled in a peer nomination questionnaire and individually performed a computerized emotion recognition task. Bayesian generalized mixed-effects models showed a complex picture, in which type and intensity of emotions, students' behavior and gender interacted in explaining recognition accuracy. Results are discussed with a particular focus on negative emotions, suggesting a "neutral" nature of emotion recognition ability, which does not necessarily lead to moral behavior but can also be used for pursuing immoral goals.

  10. Associations between facial emotion recognition and young adolescents’ behaviors in bullying

    Science.gov (United States)

    Gini, Gianluca; Altoè, Gianmarco

    2017-01-01

    This study investigated whether the different behaviors young adolescents can enact during bullying episodes were associated with their ability to recognize morphed facial expressions of the six basic emotions, expressed at high and low intensity. The sample included 117 middle-school students (45.3% girls; mean age = 12.4 years) who filled in a peer nomination questionnaire and individually performed a computerized emotion recognition task. Bayesian generalized mixed-effects models showed a complex picture, in which type and intensity of emotions, students’ behavior and gender interacted in explaining recognition accuracy. Results are discussed with a particular focus on negative emotions, suggesting a “neutral” nature of emotion recognition ability, which does not necessarily lead to moral behavior but can also be used for pursuing immoral goals. PMID:29131871

  11. Shy Children Are Less Sensitive to Some Cues to Facial Recognition

    Science.gov (United States)

    Brunet, Paul M.; Mondloch, Catherine J.; Schmidt, Louis A.

    2010-01-01

    Temperamental shyness in children is characterized by avoidance of faces and eye contact, beginning in infancy. We conducted two studies to determine whether temperamental shyness was associated with deficits in sensitivity to some cues to facial identity. In Study 1, 40 typically developing 10-year-old children made same/different judgments about…

  12. Misrecognition of facial expressions in delinquents

    Directory of Open Access Journals (Sweden)

    Matsuura Naomi

    2009-09-01

    Background: Previous reports have suggested impairment in facial expression recognition in delinquents, but controversy remains with respect to how such recognition is impaired. To address this issue, we investigated facial expression recognition in delinquents in detail. Methods: We tested 24 male adolescent/young adult delinquents incarcerated in correctional facilities. We compared their performances with those of 24 age- and gender-matched control participants. Using standard photographs of facial expressions illustrating six basic emotions, participants matched each emotional facial expression with an appropriate verbal label. Results: Delinquents were less accurate in the recognition of facial expressions that conveyed disgust than were control participants. The delinquents misrecognized the facial expressions of disgust as anger more frequently than did controls. Conclusion: These results suggest that one of the underpinnings of delinquency might be impaired recognition of emotional facial expressions, with a specific bias toward interpreting disgusted expressions as hostile angry expressions.

  13. Mother's Happiness with Cognitive - Executive Functions and Facial Emotional Recognition in School Children with Down Syndrome.

    Science.gov (United States)

    Malmir, Maryam; Seifenaraghi, Maryam; Farhud, Dariush D; Afrooz, G Ali; Khanahmadi, Mohammad

    2015-05-01

    Given the mother's key role in developing the emotional and cognitive abilities of mentally retarded children, and in view of positive psychology in recent decades, this study was conducted to assess the relation between the mother's happiness level and cognitive-executive functions (i.e. attention, working memory, inhibition and planning) and facial emotional recognition ability, two factors in learning and adjustment skills, in children with Down syndrome. This study was an applied research and data were analyzed by the Pearson correlation procedure. The population included all school children with Down syndrome (9-12 yr) in Tehran, Iran. Overall, 30 children were selected as a convenience sample. After selection and agreement of parents, the Wechsler Intelligence Scale for Children-Revised (WISC-R) was administered to determine each student's IQ, and then mothers were invited to fill out the Oxford Happiness Inventory (OHI). Cognitive-executive functions were evaluated by the following tests: Continuous Performance Test (CPT), N-Back, Stroop test (day and night version) and Tower of London. The Ekman emotion facial expression test was also administered individually to assess facial emotional recognition in the children with Down syndrome. Mothers' happiness level had a significant positive relation with cognitive-executive functions (attention, working memory, inhibition and planning) and facial emotional recognition in their children with Down syndrome. Parents' happiness (especially mothers') is a powerful predictor of the cognitive and emotional abilities of their children.

  14. Reaction Time of Facial Affect Recognition in Asperger's Disorder for Cartoon and Real, Static and Moving Faces

    Science.gov (United States)

    Miyahara, Motohide; Bray, Anne; Tsujii, Masatsugu; Fujita, Chikako; Sugiyama, Toshiro

    2007-01-01

    This study used a choice reaction-time paradigm to test the perceived impairment of facial affect recognition in Asperger's disorder. Twenty teenagers with Asperger's disorder and 20 controls were compared with respect to the latency and accuracy of response to happy or disgusted facial expressions, presented in cartoon or real images and in…

  15. Analysis of differences between Western and East-Asian faces based on facial region segmentation and PCA for facial expression recognition

    Science.gov (United States)

    Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide

    2017-01-01

    Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focused on three individual facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, handling 125 feature points from the face. Both methods are evaluated using 4 standard databases for both racial groups, and the results are compared with a cross-cultural human study applied to 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the regions of eyes-eyebrows and mouth for the expressions of fear and disgust, respectively. This work presents important findings for a better design of automatic facial expression recognition systems based on the differences between the two racial groups.
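
    The appearance-based branch lends itself to a short sketch: PCA fitted separately per facial region on flattened pixel intensities, with the per-region coefficients concatenated into one descriptor. Region crops are assumed to be pre-aligned arrays, and the geometric (feature-point) branch is omitted.

```python
# Sketch of per-region appearance PCA, assuming pre-aligned region crops.
import numpy as np
from sklearn.decomposition import PCA

def fit_region_pcas(region_crops, n_components=20):
    """region_crops: dict region_name -> (n_samples, h, w) array for one group."""
    pcas = {}
    for region, imgs in region_crops.items():
        X = imgs.reshape(len(imgs), -1)          # flatten pixel intensities
        pcas[region] = PCA(n_components=n_components).fit(X)
    return pcas

def project(pcas, region_crops):
    """Concatenate per-region PCA coefficients into one face descriptor."""
    return np.concatenate(
        [pcas[r].transform(imgs.reshape(len(imgs), -1))
         for r, imgs in region_crops.items()], axis=1)
```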

  16. Does Facial Expression Recognition Provide a Toehold for the Development of Emotion Understanding?

    Science.gov (United States)

    Strand, Paul S.; Downs, Andrew; Barbosa-Leiker, Celestina

    2016-01-01

    The authors explored predictions from basic emotion theory (BET) that facial emotion expression recognition skills are insular with respect to their own development, and yet foundational to the development of emotional perspective-taking skills. Participants included 417 preschool children for whom estimates of these 2 emotion understanding…

  17. Facial expression recognition and emotional regulation in narcolepsy with cataplexy.

    Science.gov (United States)

    Bayard, Sophie; Croisier Langenier, Muriel; Dauvilliers, Yves

    2013-04-01

    Cataplexy is pathognomonic of narcolepsy with cataplexy, and defined by a transient loss of muscle tone triggered by strong emotions. Recent research suggests abnormal amygdala function in narcolepsy with cataplexy. Emotion processing and emotional regulation strategies are complex functions involving cortical and limbic structures, like the amygdala. As the amygdala has been shown to play a role in facial emotion recognition, we tested the hypothesis that patients with narcolepsy with cataplexy would have impaired recognition of facial emotional expressions compared with patients affected with central hypersomnia without cataplexy and healthy controls. We also aimed to determine whether cataplexy modulates emotional regulation strategies. Emotional intensity, arousal and valence ratings on Ekman faces displaying happiness, surprise, fear, anger, disgust, sadness and neutral expressions of 21 drug-free patients with narcolepsy with cataplexy were compared with 23 drug-free sex-, age- and intellectual level-matched adult patients with hypersomnia without cataplexy and 21 healthy controls. All participants underwent polysomnography recording and multiple sleep latency tests, and completed depression, anxiety and emotional regulation questionnaires. Performance of patients with narcolepsy with cataplexy did not differ from patients with hypersomnia without cataplexy or healthy controls on both intensity rating of each emotion on its prototypical label and mean ratings for valence and arousal. Moreover, patients with narcolepsy with cataplexy did not use different emotional regulation strategies. The level of depressive and anxious symptoms in narcolepsy with cataplexy did not differ from the other groups. Our results demonstrate that narcolepsy with cataplexy accurately perceives and discriminates facial emotions, and regulates emotions normally. The absence of alteration of perceived affective valence remains a major clinical interest in narcolepsy with cataplexy.

  18. Residual fMRI sensitivity for identity changes in acquired prosopagnosia.

    Science.gov (United States)

    Fox, Christopher J; Iaria, Giuseppe; Duchaine, Bradley C; Barton, Jason J S

    2013-01-01

    While a network of cortical regions contribute to face processing, the lesions in acquired prosopagnosia are highly variable, and likely result in different combinations of spared and affected regions of this network. To assess the residual functional sensitivities of spared regions in prosopagnosia, we designed a rapid event-related functional magnetic resonance imaging (fMRI) experiment that included pairs of faces with same or different identities and same or different expressions. By measuring the release from adaptation to these facial changes we determined the residual sensitivity of face-selective regions-of-interest. We tested three patients with acquired prosopagnosia, and all three of these patients demonstrated residual sensitivity for facial identity changes in surviving fusiform and occipital face areas of either the right or left hemisphere, but not in the right posterior superior temporal sulcus. The patients also showed some residual capabilities for facial discrimination with normal performance on the Benton Facial Recognition Test, but impaired performance on more complex tasks of facial discrimination. We conclude that fMRI can demonstrate residual processing of facial identity in acquired prosopagnosia, that this adaptation can occur in the same structures that show similar processing in healthy subjects, and further, that this adaptation may be related to behavioral indices of face perception.

  19. Residual fMRI sensitivity for identity changes in acquired prosopagnosia

    Directory of Open Access Journals (Sweden)

    Christopher J Fox

    2013-10-01

    While a network of cortical regions contribute to face processing, the lesions in acquired prosopagnosia are highly variable, and likely result in different combinations of spared and affected regions of this network. To assess the residual functional sensitivities of spared regions in prosopagnosia, we designed a rapid event-related functional magnetic resonance imaging (fMRI) experiment that included pairs of faces with same or different identities and same or different expressions. By measuring the release from adaptation to these facial changes we determined the residual sensitivity of face-selective regions-of-interest. We tested three patients with acquired prosopagnosia, and all three of these patients demonstrated residual sensitivity for facial identity changes in surviving fusiform and occipital face areas of either the right or left hemisphere, but not in the right posterior superior temporal sulcus. The patients also showed some residual capabilities for facial discrimination with normal performance on the Benton Facial Recognition Test, but impaired performance on more complex tasks of facial discrimination. We conclude that fMRI can demonstrate residual processing of facial identity in acquired prosopagnosia, that this adaptation can occur in the same structures that show similar processing in healthy subjects, and further, that this adaptation may be related to behavioral indices of face perception.

  20. Sex differences in emotion recognition: Evidence for a small overall female superiority on facial disgust.

    Science.gov (United States)

    Connolly, Hannah L; Lefevre, Carmen E; Young, Andrew W; Lewis, Gary J

    2018-05-21

    Although it is widely believed that females outperform males in the ability to recognize other people's emotions, this conclusion is not well supported by the extant literature. The current study sought to provide a strong test of the female superiority hypothesis by investigating sex differences in emotion recognition for five basic emotions using stimuli well-calibrated for individual differences assessment, across two expressive domains (face and body), and in a large sample (N = 1,022: Study 1). We also assessed the stability and generalizability of our findings with two independent replication samples (N = 303: Study 2, N = 634: Study 3). In Study 1, we observed that females were superior to males in recognizing facial disgust and sadness. In contrast, males were superior to females in recognizing bodily happiness. The female superiority for recognition of facial disgust was replicated in Studies 2 and 3, and this observation also extended to an independent stimulus set in Study 2. No other sex differences were stable across studies. These findings provide evidence for the presence of sex differences in emotion recognition ability, but show that these differences are modest in magnitude and appear to be limited to facial disgust. We discuss whether this sex difference may reflect human evolutionary imperatives concerning reproductive fitness and child care. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  1. Meta-Analysis of Facial Emotion Recognition in Behavioral Variant Frontotemporal Dementia: Comparison With Alzheimer Disease and Healthy Controls.

    Science.gov (United States)

    Bora, Emre; Velakoulis, Dennis; Walterfang, Mark

    2016-07-01

    Behavioral disturbances and lack of empathy are distinctive clinical features of behavioral variant frontotemporal dementia (bvFTD) in comparison to Alzheimer disease (AD). The aim of this meta-analytic review was to compare facial emotion recognition performances of bvFTD with healthy controls and AD. The current meta-analysis included a total of 19 studies and involved comparisons of 288 individuals with bvFTD and 329 healthy controls and 162 bvFTD and 147 patients with AD. Facial emotion recognition was significantly impaired in bvFTD in comparison to the healthy controls (d = 1.81) and AD (d = 1.23). In bvFTD, recognition of negative emotions, especially anger (d = 1.48) and disgust (d = 1.41), were severely impaired. Emotion recognition was significantly impaired in bvFTD in comparison to AD in all emotions other than happiness. Impairment of emotion recognition is a relatively specific feature of bvFTD. Routine assessment of social-cognitive abilities including emotion recognition can be helpful in better differentiating between cortical dementias such as bvFTD and AD. © The Author(s) 2016.

  2. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia.

    Science.gov (United States)

    Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown whether a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand whether individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus process this facial dimension independently from features (which are impaired in CP) and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether the recognized face exactly matched the study face or differed from it. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed spared configural processing of non-emotional facial expression (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.

  3. The interaction between embodiment and empathy in facial expression recognition.

    Science.gov (United States)

    Jospe, Karine; Flöel, Agnes; Lavidor, Michal

    2018-02-01

    Previous research has demonstrated that the Action-Observation Network (AON) is involved in both emotional-embodiment (empathy) and action-embodiment mechanisms. In this study, we hypothesized that interfering with the AON would impair action recognition and that this impairment would be modulated by empathy levels. In Experiment 1 (n = 90), participants were asked to recognize facial expressions while their facial motion was restricted. In Experiment 2 (n = 50), we interfered with the AON by applying transcranial Direct Current Stimulation to the motor cortex. In both experiments, we found that interfering with the AON impaired the performance of participants with high empathy levels; however, for the first time, we demonstrated that the interference enhanced the performance of participants with low empathy. This novel finding suggests that the embodiment module may be flexible, and that it can be enhanced in individuals with low empathy by simple manipulation of motor activation.

  4. The Effect of Gender and Age Differences on the Recognition of Emotions from Facial Expressions

    DEFF Research Database (Denmark)

    Schneevogt, Daniela; Paggio, Patrizia

    2016-01-01

    Recent studies have demonstrated gender and cultural differences in the recognition of emotions in facial expressions. However, most studies were conducted on American subjects. In this paper, we explore the generalizability of several findings to a non-American culture in the form of Danish subjects. We conduct an emotion recognition task followed by two stereotype questionnaires with different genders and age groups. While recent findings (Krems et al., 2015) suggest that women are biased to see anger in neutral facial expressions posed by females, in our sample both genders assign higher ratings of anger to all emotions expressed by females. Furthermore, we demonstrate an effect of gender on the fear-surprise-confusion observed by Tomkins and McCarter (1964); females overpredict fear, while males overpredict surprise.

  5. Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?

    Science.gov (United States)

    Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia

    2011-01-01

    Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…

  6. Unaware person recognition from the body when face identification fails.

    Science.gov (United States)

    Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J

    2013-11-01

    How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.

  7. Mother’s Happiness with Cognitive - Executive Functions and Facial Emotional Recognition in School Children with Down Syndrome

    Science.gov (United States)

    MALMIR, Maryam; SEIFENARAGHI, Maryam; FARHUD, Dariush D.; AFROOZ, G.Ali; KHANAHMADI, Mohammad

    2015-01-01

    Background: Given the mother’s key role in developing the emotional and cognitive abilities of mentally retarded children, and in view of positive psychology in recent decades, this study was conducted to assess the relation between the mother’s happiness level and cognitive-executive functions (i.e. attention, working memory, inhibition and planning) and facial emotional recognition ability, two factors in learning and adjustment skills, in mentally retarded children with Down syndrome. Methods: This study was an applied research and data were analyzed by the Pearson correlation procedure. The population included all school children with Down syndrome (9–12 yr) in Tehran, Iran. Overall, 30 children were selected as a convenience sample. After selection and agreement of parents, the Wechsler Intelligence Scale for Children-Revised (WISC-R) was administered to determine each student’s IQ, and then mothers were invited to fill out the Oxford Happiness Inventory (OHI). Cognitive-executive functions were evaluated by the following tests: Continuous Performance Test (CPT), N-Back, Stroop test (day and night version) and Tower of London. The Ekman emotion facial expression test was also administered individually to assess facial emotional recognition in children with Down syndrome. Results: Mothers’ happiness level had a significant positive relation with cognitive-executive functions (attention, working memory, inhibition and planning) and facial emotional recognition in their children with Down syndrome. Conclusion: Parents’ happiness (especially mothers’) is a powerful predictor of the cognitive and emotional abilities of their children. PMID:26284205

  8. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template, similar to a facial muscle distribution. After associated regularization, the time sequences of the trait changes in space-time under complete expression production are arranged line by line in a matrix. Next, the matrix dimensionality is reduced by a manifold learning method, neighborhood-preserving embedding. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates a hidden conditional random field (HCRF) and a support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former captures more structural characteristics of the data to be classified in space-time.
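
    A reduce-then-classify sketch of this flow is given below, with stated substitutions: scikit-learn's LocallyLinearEmbedding stands in for the paper's neighborhood-preserving embedding, and the HCRF arm of the integrated classifier is omitted, leaving only the SVM. The arrays are synthetic placeholders.

```python
# Reduce-then-classify sketch; LLE substitutes for neighborhood-preserving
# embedding, and the HCRF arm is omitted. Data are synthetic placeholders.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))    # each row: one sequence's stacked trait matrix
y = rng.integers(0, 6, size=200)   # six basic expressions (placeholder labels)

clf = make_pipeline(
    LocallyLinearEmbedding(n_components=10, n_neighbors=15),
    SVC(kernel="rbf"),
).fit(X, y)
print(clf.predict(X[:3]))          # sanity check on a few training samples
```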

  9. Children's Recognition of Emotional Facial Expressions Through Photographs and Drawings.

    Science.gov (United States)

    Brechet, Claire

    2017-01-01

    The author's purpose was to examine children's recognition of emotional facial expressions, by comparing two types of stimulus: photographs and drawings. The author aimed to investigate whether drawings could be considered a more evocative material than photographs, as a function of age and emotion. Five- and 7-year-old children were presented with photographs and drawings displaying facial expressions of 4 basic emotions (i.e., happiness, sadness, anger, and fear) and were asked to perform a matching task by pointing to the face corresponding to the target emotion labeled by the experimenter. The photographs we used were selected from the Radboud Faces Database, and the drawings were designed on the basis of both the facial components involved in the expression of these emotions and the graphic cues children tend to use when asked to depict these emotions in their own drawings. Our results show that drawings are better recognized than photographs for sadness, anger, and fear (with no difference for happiness, due to a ceiling effect), and that the difference between the two types of stimuli tends to be more pronounced for 5-year-olds than for 7-year-olds. These results are discussed in view of their implications, both for future research and for practical application.

  10. Facial Identity and Self-Perception: An Examination of Psychosocial Outcomes in Cosmetic Surgery Patients.

    Science.gov (United States)

    Slavin, Benjamin; Beer, Jacob

    2017-06-01

    The psychosocial health of patients undergoing cosmetic procedures has often been linked to a host of pre-existing conditions, including the type of procedure being performed. Age, gender, and the psychological state of the patient also contribute to the perceived outcome. Specifically, the presence or absence of Body Dysmorphic Disorder (BDD) has been identified as an independent marker for unhappiness following cosmetic procedures.1 However, no study has, to our knowledge, identified a more precise indicator associated with higher rates of patient dissatisfaction with cosmetic procedures. This review identifies facial identity and self-perception as potential indicators of future patient dissatisfaction with cosmetic procedures. Specifically, we believe that patients with a realistic facial identity and self-perception are more likely to be satisfied than those whose self-perceptions are distorted. Patients undergoing restorative procedures, including blepharoplasty, rhytidectomy, and liposuction, are more likely to have an increased outcome favorability rating than those undergoing type-change procedures, such as rhinoplasty and breast augmentation. Age, which typically is an independent variable for satisfaction, tends to be associated with increased favorability ratings following cosmetic procedures. Female gender is a second variable associated with higher satisfaction. The authors believe that negative facial identity and self-perception are risk factors for patient dissatisfaction with cosmetic procedural outcomes. Based on this assumption, clinicians may want to focus on the face as a particular area of psychosocial concern. J Drugs Dermatol. 2017;16(6):617-620.

  11. Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.

    Science.gov (United States)

    Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M

    2017-01-01

    Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. Methods: In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium-intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT; this effect was marginally significant. OXT did not increase visual attention to the eye-region of faces, and attention to the eyes was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhanced facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).

  12. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    Science.gov (United States)

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  13. Facial expression recognition and model-based regeneration for distance teaching

    Science.gov (United States)

    De Silva, Liyanage C.; Vinod, V. V.; Sengupta, Kuntal

    1998-12-01

    This paper presents a novel idea for a visual communication system that can support distance teaching using a network of computers. Here the authors' main focus is to enhance the quality of distance teaching by reducing the barrier between the teacher and the student that is formed by the remote connection of the networked participants. The paper presents an effective way of improving the teacher-student communication link of an IT (Information Technology) based distance teaching scenario, using facial expression recognition results and face global and local motion detection results for both the teacher and the student. It presents a way of regenerating the facial images for the teacher-student down-link, which can enhance the teacher's facial expressions and which can also reduce the network traffic compared to usual video broadcasting scenarios. At the same time, it presents a way of representing a large volume of facial expression data for the whole student population (in the student-teacher up-link). This up-link representation helps the teacher to receive instant feedback on his talk, as if he were delivering a face-to-face lecture. In conventional video tele-conferencing applications, this task is nearly impossible due to the huge volume of upward network traffic. The authors draw on several of their previously published results for most of the image processing components needed to complete such a system. Some of the remaining system components are covered by ongoing work.

  14. Cerebro-facio-thoracic dysplasia (Pascual-Castroviejo syndrome): Identification of a novel mutation, use of facial recognition analysis, and review of the literature.

    Science.gov (United States)

    Tender, Jennifer A F; Ferreira, Carlos R

    2018-04-13

    Cerebro-facio-thoracic dysplasia (CFTD) is a rare, autosomal recessive disorder characterized by facial dysmorphism, cognitive impairment and distinct skeletal anomalies, and has been linked to the TMCO1 defect syndrome. We describe two siblings with features consistent with CFTD carrying a novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene. We conducted a literature review and summarized the clinical features and laboratory results of the two siblings. Facial recognition analysis was utilized to assess the specificity of facial traits. The novel homozygous p.Arg114* pathogenic variant in the TMCO1 gene is responsible for the clinical features of CFTD in the two siblings. Facial recognition analysis allows unambiguous distinction of this syndrome from controls.

  15. Biometric correspondence between ReFace computerized facial approximations and CT-derived ground truth skin surface models objectively examined using an automated facial recognition system.

    Science.gov (United States)

    Parks, Connie L; Monson, Keith L

    2018-05-01

    This study employed an automated facial recognition system as a means of objectively evaluating biometric correspondence between a ReFace facial approximation and the computed tomography (CT) derived ground truth skin surface of the same individual. High rates of biometric correspondence were observed, irrespective of rank class (Rk) or demographic cohort examined. Overall, 48% of the test subjects' ReFace approximation probes (n=96) were matched to his or her corresponding ground truth skin surface image at R1, a rank indicating a high degree of biometric correspondence and a potential positive identification. Identification rates improved with each successively broader rank class (R10=85%, R25=96%, and R50=99%), with 100% identification by R57. A sharp increase (39% mean increase) in identification rates was observed between R1 and R10 across most demographic cohorts. No significant (p>0.05) performance differences were observed across demographic cohorts or CT scan protocols. Performance measures observed in this research suggest that ReFace approximations are biometrically similar to the actual faces of the approximated individuals and, therefore, may have potential operational utility in contexts in which computerized approximations are utilized as probes in automated facial recognition systems. Copyright © 2018. Published by Elsevier B.V.
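
    As an illustrative aside, the rank-class figures above (R1, R10, ...) are cumulative match characteristic statistics: a probe counts as identified at rank k when its true mate is among the k highest-scoring gallery entries. The sketch below computes such rates on a synthetic similarity matrix; the matrix and the score boost are invented, not drawn from the study.

        # Minimal sketch of rank-k identification rates (CMC-style).
        # The similarity matrix is synthetic; probe i's true mate is assumed
        # to be gallery entry i.
        import numpy as np

        rng = np.random.default_rng(1)
        n = 96  # matches the probe count reported above; the data does not
        similarity = rng.normal(size=(n, n))
        similarity[np.arange(n), np.arange(n)] += 1.5  # boost true mates

        # Rank of the true mate = 1 + number of strictly higher scores.
        true_scores = similarity[np.arange(n), np.arange(n)]
        ranks = 1 + (similarity > true_scores[:, None]).sum(axis=1)

        for k in (1, 10, 25, 50):
            print(f"identification rate at rank {k}: {np.mean(ranks <= k):.0%}")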

  16. Psychopathy and facial emotion recognition ability in patients with bipolar affective disorder with or without delinquent behaviors.

    Science.gov (United States)

    Demirel, Husrev; Yesilbas, Dilek; Ozver, Ismail; Yuksek, Erhan; Sahin, Feyzi; Aliustaoglu, Suheyla; Emul, Murat

    2014-04-01

    It is well known that patients with bipolar disorder are more prone to violence and commit more criminal behaviors than the general population. A strong relationship between criminal behavior and the inability to empathize with and perceive other people's feelings and facial expressions increases the risk of delinquent behaviors. In this study, we aimed to investigate deficits in facial emotion recognition ability in euthymic bipolar patients who had committed an offense and to compare them with non-delinquent euthymic patients with bipolar disorder. Fifty-five euthymic patients with delinquent behaviors and 54 non-delinquent euthymic bipolar patients as a control group were included in the study. Ekman's Facial Emotion Recognition Test, sociodemographic data, the Hare Psychopathy Checklist, the Hamilton Depression Rating Scale and the Young Mania Rating Scale were applied to both groups. There were no significant differences between case and control groups in mean age, gender, level of education, mean age at disease onset and suicide attempts (p>0.05). The three most common delinquent behaviors in patients with euthymic bipolar disorder were injury (30.8%), threat or insult (20%) and homicide (12.7%). The most accurately identified facial emotion was "happy" (>99%, for both) while the most frequently misidentified facial emotion was "fear" in both groups; response times to facial emotions were longer in patients with delinquent behaviors than in non-delinquent ones. We have shown that patients with bipolar disorder who had delinquent behaviors may have some social interaction problems, i.e., misrecognizing fearful and modestly angry facial emotions, and may need more time to respond to facial emotions even in remission. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Facial Affect Recognition Training Through Telepractice: Two Case Studies of Individuals with Chronic Traumatic Brain Injury

    Directory of Open Access Journals (Sweden)

    John Williamson

    2015-07-01

    Full Text Available The use of a modified Facial Affect Recognition (FAR) training to identify emotions was investigated in two case studies of adults with moderate to severe chronic (> five years) traumatic brain injury (TBI). The modified FAR training was administered via telepractice to target social communication skills. Therapy consisted of identifying emotions through static facial expressions, personally reflecting on those emotions, and identifying sarcasm and emotions within social stories and role-play. Pre- and post-therapy measures included static facial photos for identifying emotion and the Prutting and Kirchner Pragmatic Protocol for social communication. Both participants with chronic TBI showed gains in identifying facial emotions in the static photos.

  18. Facial Emotion Recognition in Children with High Functioning Autism and Children with Social Phobia

    Science.gov (United States)

    Wong, Nina; Beidel, Deborah C.; Sarver, Dustin E.; Sims, Valerie

    2012-01-01

    Recognizing facial affect is essential for effective social functioning. This study examines emotion recognition abilities in children aged 7-13 years with High Functioning Autism (HFA = 19), Social Phobia (SP = 17), or typical development (TD = 21). Findings indicate that all children identified certain emotions more quickly (e.g., happy [less…

  19. Fingerprint recognition with identical twin fingerprints.

    Science.gov (United States)

    Tao, Xunqiang; Chen, Xinjian; Yang, Xin; Tian, Jie

    2012-01-01

    Fingerprint recognition with identical twins is a challenging task due to the closest genetics-based relationship existing between identical twins. Several pioneers have analyzed the similarity between twins' fingerprints. In this work we continue to investigate the topic of the similarity of identical twin fingerprints. Our study was based on a large identical twin fingerprint database that contains 83 twin pairs, 4 fingers per individual and six impressions per finger: 3984 (83*2*4*6) images. Compared to previous work, our contributions are summarized as follows: (1) Two state-of-the-art fingerprint identification methods, P071 and VeriFinger 6.1, were used, rather than the single fingerprint identification method of previous studies. (2) Six impressions per finger were captured, rather than just one impression, which makes the genuine distribution of matching scores more realistic. (3) A larger sample (83 pairs) was collected. (4) A novel statistical analysis, which aims at showing the probability distribution of the fingerprint types for the corresponding fingers of identical twins which have the same fingerprint type, has been conducted. (5) A novel analysis, which aims at showing which finger from identical twins has the higher probability of having the same fingerprint type, has been conducted. Our results showed that: (a) A state-of-the-art automatic fingerprint verification system can distinguish identical twins without drastic degradation in performance. (b) The chance that the fingerprints of identical twins have the same type is 0.7440, compared to 0.3215 for non-identical twins. (c) For the corresponding fingers of identical twins which have the same fingerprint type, the probability distribution of the five major fingerprint types is similar to the probability distribution of fingerprint type across all fingers. (d) For each of the four fingers of identical twins, the probability of having the same fingerprint type is similar.
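
    The headline probabilities here (0.7440 vs. 0.3215) are empirical frequencies of corresponding fingers sharing a fingerprint type; the toy sketch below shows that counting on invented labels, purely to make the computation concrete.

        # Toy illustration of the same-type probability comparison above.
        # Type labels are invented; real input would be classified prints.
        def same_type_probability(pairs):
            """pairs: (type_of_finger_a, type_of_finger_b) tuples for
            corresponding fingers of a pair of individuals."""
            return sum(a == b for a, b in pairs) / len(pairs)

        twin_pairs = [("whorl", "whorl"), ("left_loop", "left_loop"),
                      ("arch", "whorl"), ("right_loop", "right_loop")]
        unrelated_pairs = [("whorl", "arch"), ("left_loop", "whorl"),
                           ("left_loop", "left_loop"), ("arch", "tented_arch")]

        print("twins:    ", same_type_probability(twin_pairs))       # 0.75 here
        print("unrelated:", same_type_probability(unrelated_pairs))  # 0.25 here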

  20. Fingerprint recognition with identical twin fingerprints.

    Directory of Open Access Journals (Sweden)

    Xunqiang Tao

    Full Text Available Fingerprint recognition with identical twins is a challenging task due to the closest genetics-based relationship existing between identical twins. Several pioneers have analyzed the similarity between twins' fingerprints. In this work we continue to investigate the topic of the similarity of identical twin fingerprints. Our study was based on a large identical twin fingerprint database that contains 83 twin pairs, 4 fingers per individual and six impressions per finger: 3984 (83*2*4*6) images. Compared to previous work, our contributions are summarized as follows: (1) Two state-of-the-art fingerprint identification methods, P071 and VeriFinger 6.1, were used, rather than the single fingerprint identification method of previous studies. (2) Six impressions per finger were captured, rather than just one impression, which makes the genuine distribution of matching scores more realistic. (3) A larger sample (83 pairs) was collected. (4) A novel statistical analysis, which aims at showing the probability distribution of the fingerprint types for the corresponding fingers of identical twins which have the same fingerprint type, has been conducted. (5) A novel analysis, which aims at showing which finger from identical twins has the higher probability of having the same fingerprint type, has been conducted. Our results showed that: (a) A state-of-the-art automatic fingerprint verification system can distinguish identical twins without drastic degradation in performance. (b) The chance that the fingerprints of identical twins have the same type is 0.7440, compared to 0.3215 for non-identical twins. (c) For the corresponding fingers of identical twins which have the same fingerprint type, the probability distribution of the five major fingerprint types is similar to the probability distribution of fingerprint type across all fingers. (d) For each of the four fingers of identical twins, the probability of having the same fingerprint type is similar.

  1. Can You See Me Now? Visualizing Battlefield Facial Recognition Technology in 2035

    Science.gov (United States)

    2010-04-01

    this analogy: Assume that a normal individual, Tom, is very good at identifying different types of fruit juice such as orange juice, apple juice... either compositing multiple images together to produce a more complete image or by creating a new algorithm to better deal with these problems... captures multiple frames of video and composites them into an appropriately high-resolution image that can be processed by the facial recognition software

  2. Schematic drawings of facial expressions for emotion recognition and interpretation by preschool-aged children.

    Science.gov (United States)

    MacDonald, P M; Kirkpatrick, S W; Sullivan, L A

    1996-11-01

    Schematic drawings of facial expressions were evaluated as a possible assessment tool for research on emotion recognition and interpretation involving young children. A subset of Ekman and Friesen's (1976) Pictures of Facial Affect was used as the standard for comparison. Preschool children (N = 138) were shown drawings and photographs in two context conditions for six emotions (anger, disgust, fear, happiness, sadness, and surprise). The overall correlation between accuracy for the photographs and drawings was .677. A significant difference was found for the stimulus condition (photographs vs. drawings) but not for the administration condition (label-based vs. context-based). Children were significantly more accurate in interpreting drawings than photographs and tended to be more accurate in identifying facial expressions in the label-based administration condition for both photographs and drawings than in the context-based administration condition.

  3. An in-depth cognitive examination of individuals with superior face recognition skills.

    Science.gov (United States)

    Bobak, Anna K; Bennetts, Rachel J; Parris, Benjamin A; Jansari, Ashok; Bate, Sarah

    2016-09-01

    Previous work has reported the existence of "super-recognisers" (SRs), or individuals with extraordinary face recognition skills. However, the precise underpinnings of this ability have not yet been investigated. In this paper we examine (a) the face-specificity of super recognition, (b) perception of facial identity in SRs, (c) whether SRs present with enhancements in holistic processing and (d) the consistency of these findings across different SRs. A detailed neuropsychological investigation into six SRs indicated domain-specificity in three participants, with some evidence of enhanced generalised visuo-cognitive or socio-emotional processes in the remaining individuals. While superior face-processing skills were restricted to face memory in three of the SRs, enhancements to facial identity perception were observed in the others. Notably, five of the six participants showed at least some evidence of enhanced holistic processing. These findings indicate cognitive heterogeneity in the presentation of superior face recognition, and have implications for our theoretical understanding of the typical face-processing system and the identification of superior face-processing skills in applied settings. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Identity recognition in response to different levels of genetic relatedness in commercial soya bean

    Science.gov (United States)

    Van Acker, Rene; Rajcan, Istvan; Swanton, Clarence J.

    2017-01-01

    Identity recognition systems allow plants to tailor competitive phenotypes in response to the genetic relatedness of neighbours. There is limited evidence for the existence of recognition systems in crop species and whether they operate at a level that would allow for identification of different degrees of relatedness. Here, we test the responses of commercial soya bean cultivars to neighbours of varying genetic relatedness consisting of other commercial cultivars (intraspecific), its wild progenitor Glycine soja, and another leguminous species Phaseolus vulgaris (interspecific). We found, for the first time to our knowledge, that a commercial soya bean cultivar, OAC Wallace, showed identity recognition responses to neighbours at different levels of genetic relatedness. OAC Wallace showed no response when grown with other commercial soya bean cultivars (intra-specific neighbours), showed increased allocation to leaves compared with stems with wild soya beans (highly related wild progenitor species), and increased allocation to leaves compared with stems and roots with white beans (interspecific neighbours). Wild soya bean also responded to identity recognition but these responses involved changes in biomass allocation towards stems instead of leaves suggesting that identity recognition responses are species-specific and consistent with the ecology of the species. In conclusion, elucidating identity recognition in crops may provide further knowledge into mechanisms of crop competition and the relationship between crop density and yield. PMID:28280587

  5. Behavioral and Neuroimaging Evidence for Facial Emotion Recognition in Elderly Korean Adults with Mild Cognitive Impairment, Alzheimer's Disease, and Frontotemporal Dementia.

    Science.gov (United States)

    Park, Soowon; Kim, Taehoon; Shin, Seong A; Kim, Yu Kyeong; Sohn, Bo Kyung; Park, Hyeon-Ju; Youn, Jung-Hae; Lee, Jun-Young

    2017-01-01

    Background: Facial emotion recognition (FER) is impaired in individuals with frontotemporal dementia (FTD) and Alzheimer's disease (AD) when compared to healthy older adults. Since deficits in emotion recognition are closely related to caregiver burden or social interactions, researchers have a fundamental interest in FER performance in patients with dementia. Purpose: The purpose of this study was to identify the performance profiles of six facial emotions (i.e., fear, anger, disgust, sadness, surprise, and happiness) and neutral faces measured among Korean healthy controls (HC), and those with mild cognitive impairment (MCI), AD, and FTD. Additionally, the neuroanatomical correlates of facial emotions were investigated. Methods: A total of 110 (33 HC, 32 MCI, 32 AD, 13 FTD) older adult participants were recruited from two different medical centers in metropolitan areas of South Korea. These individuals underwent an FER test that was used to assess the recognition of emotions or absence of emotion (neutral) in 35 facial stimuli. Repeated measures two-way analyses of variance were used to examine the distinct profiles of emotional recognition among the four groups. We also performed brain imaging and voxel-based morphometry (VBM) on the participants to examine the associations between FER scores and gray matter volume. Results: The mean score of negative emotion recognition (i.e., fear, anger, disgust, and sadness) clearly discriminated FTD participants from individuals with MCI and AD and HC [F(3,106) = 10.829, p < 0.001, η2 = 0.235], whereas the mean score of positive emotion recognition (i.e., surprise and happiness) did not. A VBM analysis showed negative emotions were correlated with gray matter volume of anterior temporal regions, whereas positive emotions were related to gray matter volume of fronto-parietal regions. Conclusion: Impairment of negative FER in patients with FTD is cross-cultural. The discrete neural correlates of FER indicate that emotional recognition processing is a multi-modal system in the brain.

  6. A Smile Enhances 3-Month-Olds' Recognition of an Individual Face

    Science.gov (United States)

    Turati, Chiara; Montirosso, Rosario; Brenna, Viola; Ferrara, Veronica; Borgatti, Renato

    2011-01-01

    Recent studies demonstrated that in adults and children recognition of face identity and facial expression mutually interact (Bate, Haslam, & Hodgson, 2009; Spangler, Schwarzer, Korell, & Maier-Karius, 2010). Here, using a familiarization paradigm, we explored the relation between these processes in early infancy, investigating whether 3-month-old…

  7. Emotional availability, understanding emotions, and recognition of facial emotions in obese mothers with young children.

    Science.gov (United States)

    Bergmann, Sarah; von Klitzing, Kai; Keitel-Korndörfer, Anja; Wendt, Verena; Grube, Matthias; Herpertz, Sarah; Schütz, Astrid; Klein, Annette M

    2016-01-01

    Recent research has identified mother-child relationships of low quality as possible risk factors for childhood obesity. However, it remains open how mothers' own obesity influences the quality of mother-child interaction, and particularly emotional availability (EA). Also unclear is the influence of maternal emotional competencies, i.e. understanding emotions and recognizing facial emotions. This study aimed to (1) investigate differences between obese and normal-weight mothers regarding mother-child EA, maternal understanding emotions and recognition of facial emotions, and (2) explore how maternal emotional competencies and maternal weight interact with each other in predicting EA. A better understanding of these associations could inform strategies of obesity prevention especially in children at risk. We assessed EA, understanding emotions and recognition of facial emotions in 73 obese versus 73 normal-weight mothers, and their children aged 6 to 47 months (M child age = 24.49 months; 80 females). Obese mothers showed lower EA and understanding emotions. Mothers' normal weight and their ability to understand emotions were positively associated with EA. The ability to recognize facial emotions was positively associated with EA in obese but not in normal-weight mothers. Maternal weight status indirectly influenced EA through its effect on understanding emotions. Maternal emotional competencies may play an important role for establishing high EA in interaction with the child. Children of obese mothers experience lower EA, which may contribute to overweight development. We suggest including elements that aim to improve maternal emotional competencies and mother-child EA in prevention or intervention programmes targeting childhood obesity. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. The own-age face recognition bias is task dependent.

    Science.gov (United States)

    Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J

    2015-08-01

    The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity. © 2014 The British Psychological Society.
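
    The identity-strength manipulation described here (morphing each face with an average face in 20% steps) reduces, in its simplest pixel-level form, to an alpha blend of aligned images. The sketch below shows only that blend on synthetic arrays; real face morphing also warps landmark geometry, and the images and values here are stand-ins.

        # Simplified sketch of identity-strength morphing via alpha blending.
        import numpy as np

        rng = np.random.default_rng(2)
        identity_face = rng.random((128, 128))  # stand-in aligned face image
        average_face = rng.random((128, 128))   # stand-in average face

        def morph(identity_img, avg_img, strength):
            """strength = 1.0 -> full identity; 0.0 -> pure average face."""
            return strength * identity_img + (1.0 - strength) * avg_img

        # 20% steps, as in the task described above.
        series = {s: morph(identity_face, average_face, s)
                  for s in (0.2, 0.4, 0.6, 0.8, 1.0)}
        print("identity strengths generated:", sorted(series))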

  9. The perception and identification of facial emotions in individuals with autism spectrum disorders using the Let's Face It! Emotion Skills Battery.

    Science.gov (United States)

    Tanaka, James W; Wolf, Julie M; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin S; South, Mikle; McPartland, James C; Kaiser, Martha D; Schultz, Robert T

    2012-12-01

    Although impaired social-emotional ability is a hallmark of autism spectrum disorder (ASD), the perceptual skills and mediating strategies contributing to the social deficits of autism are not well understood. A perceptual skill that is fundamental to effective social communication is the ability to accurately perceive and interpret facial emotions. To evaluate the expression processing of participants with ASD, we designed the Let's Face It! Emotion Skills Battery (LFI! Battery), a computer-based assessment composed of three subscales measuring verbal and perceptual skills implicated in the recognition of facial emotions. We administered the LFI! Battery to groups of participants with ASD and typically developing control (TDC) participants that were matched for age and IQ. On the Name Game labeling task, participants with ASD (N = 68) performed on par with TDC individuals (N = 66) in their ability to name the facial emotions of happy, sad, disgust and surprise and were only impaired in their ability to identify the angry expression. On the Matchmaker Expression task that measures the recognition of facial emotions across different facial identities, the ASD participants (N = 66) performed reliably worse than TDC participants (N = 67) on the emotions of happy, sad, disgust, frighten and angry. In the Parts-Wholes test of perceptual strategies of expression, the TDC participants (N = 67) displayed more holistic encoding for the eyes than the mouths in expressive faces whereas ASD participants (N = 66) exhibited the reverse pattern of holistic recognition for the mouth and analytic recognition of the eyes. In summary, findings from the LFI! Battery show that participants with ASD were able to label the basic facial emotions (with the exception of the angry expression) on par with age- and IQ-matched TDC participants. However, participants with ASD were impaired in their ability to generalize facial emotions across different identities and showed a tendency to recognize the mouth holistically and the eyes analytically.

  10. Monitoring of facial stress during space flight: Optical computer recognition combining discriminative and generative methods

    Science.gov (United States)

    Dinges, David F.; Venkataraman, Sundara; McGlinchey, Eleanor L.; Metaxas, Dimitris N.

    2007-02-01

    Astronauts are required to perform mission-critical tasks at a high level of functional capability throughout spaceflight. Stressors can compromise their ability to do so, making early objective detection of neurobehavioral problems in spaceflight a priority. Computer optical approaches offer a completely unobtrusive way to detect distress during critical operations in space flight. A methodology was developed and a study completed to determine whether optical computer recognition algorithms could be used to discriminate facial expressions during stress induced by performance demands. Stress recognition from a facial image sequence is a subject that has not received much attention although it is an important problem for many applications beyond space flight (security, human-computer interaction, etc.). This paper proposes a comprehensive method to detect stress from facial image sequences by using a model-based tracker. The image sequences were captured as subjects underwent a battery of psychological tests under high- and low-stress conditions. A cue integration-based tracking system accurately captured the rigid and non-rigid parameters of different parts of the face (eyebrows, lips). The labeled sequences were used to train the recognition system, which consisted of generative (hidden Markov model) and discriminative (support vector machine) parts that yield results superior to using either approach individually. The current optical algorithm methods performed at a 68% accuracy rate in an experimental study of 60 healthy adults undergoing periods of high-stress versus low-stress performance demands. Accuracy and practical feasibility of the technique are being improved further with automatic multi-resolution selection for the discretization of the mask, and automated face detection and mask initialization algorithms.
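
    The generative-plus-discriminative combination described above is commonly realized as score-level fusion. The sketch below illustrates only that general pattern: a Gaussian naive Bayes model stands in for the hidden Markov model (both are generative, though the HMM also captures temporal structure), the labels and features are synthetic, and a proper system would fit the fusion stage on held-out data rather than the training set.

        # Hedged sketch of generative + discriminative score fusion.
        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.svm import SVC
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(3)
        X = rng.normal(size=(300, 24))          # stand-in facial features
        y = rng.integers(0, 2, size=300)        # high- vs. low-stress labels

        gen = GaussianNB().fit(X, y)            # generative stand-in for HMM
        disc = SVC(probability=True).fit(X, y)  # discriminative part

        # Stack both models' class probabilities and learn a fusion rule.
        scores = np.hstack([gen.predict_proba(X), disc.predict_proba(X)])
        fusion = LogisticRegression().fit(scores, y)
        print("fused training accuracy:", fusion.score(scores, y))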

  11. Behavioral and Neuroimaging Evidence for Facial Emotion Recognition in Elderly Korean Adults with Mild Cognitive Impairment, Alzheimer’s Disease, and Frontotemporal Dementia

    Directory of Open Access Journals (Sweden)

    Soowon Park

    2017-11-01

    Full Text Available Background: Facial emotion recognition (FER) is impaired in individuals with frontotemporal dementia (FTD) and Alzheimer's disease (AD) when compared to healthy older adults. Since deficits in emotion recognition are closely related to caregiver burden or social interactions, researchers have a fundamental interest in FER performance in patients with dementia. Purpose: The purpose of this study was to identify the performance profiles of six facial emotions (i.e., fear, anger, disgust, sadness, surprise, and happiness) and neutral faces measured among Korean healthy controls (HC), and those with mild cognitive impairment (MCI), AD, and FTD. Additionally, the neuroanatomical correlates of facial emotions were investigated. Methods: A total of 110 (33 HC, 32 MCI, 32 AD, 13 FTD) older adult participants were recruited from two different medical centers in metropolitan areas of South Korea. These individuals underwent an FER test that was used to assess the recognition of emotions or absence of emotion (neutral) in 35 facial stimuli. Repeated measures two-way analyses of variance were used to examine the distinct profiles of emotional recognition among the four groups. We also performed brain imaging and voxel-based morphometry (VBM) on the participants to examine the associations between FER scores and gray matter volume. Results: The mean score of negative emotion recognition (i.e., fear, anger, disgust, and sadness) clearly discriminated FTD participants from individuals with MCI and AD and HC [F(3,106) = 10.829, p < 0.001, η2 = 0.235], whereas the mean score of positive emotion recognition (i.e., surprise and happiness) did not. A VBM analysis showed negative emotions were correlated with gray matter volume of anterior temporal regions, whereas positive emotions were related to gray matter volume of fronto-parietal regions. Conclusion: Impairment of negative FER in patients with FTD is cross-cultural. The discrete neural correlates of FER indicate that emotional recognition processing is a multi-modal system in the brain.

  12. Behavioral and Neuroimaging Evidence for Facial Emotion Recognition in Elderly Korean Adults with Mild Cognitive Impairment, Alzheimer’s Disease, and Frontotemporal Dementia

    Science.gov (United States)

    Park, Soowon; Kim, Taehoon; Shin, Seong A; Kim, Yu Kyeong; Sohn, Bo Kyung; Park, Hyeon-Ju; Youn, Jung-Hae; Lee, Jun-Young

    2017-01-01

    Background: Facial emotion recognition (FER) is impaired in individuals with frontotemporal dementia (FTD) and Alzheimer's disease (AD) when compared to healthy older adults. Since deficits in emotion recognition are closely related to caregiver burden or social interactions, researchers have a fundamental interest in FER performance in patients with dementia. Purpose: The purpose of this study was to identify the performance profiles of six facial emotions (i.e., fear, anger, disgust, sadness, surprise, and happiness) and neutral faces measured among Korean healthy controls (HC), and those with mild cognitive impairment (MCI), AD, and FTD. Additionally, the neuroanatomical correlates of facial emotions were investigated. Methods: A total of 110 (33 HC, 32 MCI, 32 AD, 13 FTD) older adult participants were recruited from two different medical centers in metropolitan areas of South Korea. These individuals underwent an FER test that was used to assess the recognition of emotions or absence of emotion (neutral) in 35 facial stimuli. Repeated measures two-way analyses of variance were used to examine the distinct profiles of emotional recognition among the four groups. We also performed brain imaging and voxel-based morphometry (VBM) on the participants to examine the associations between FER scores and gray matter volume. Results: The mean score of negative emotion recognition (i.e., fear, anger, disgust, and sadness) clearly discriminated FTD participants from individuals with MCI and AD and HC [F(3,106) = 10.829, p < 0.001, η2 = 0.235], whereas the mean score of positive emotion recognition (i.e., surprise and happiness) did not. A VBM analysis showed negative emotions were correlated with gray matter volume of anterior temporal regions, whereas positive emotions were related to gray matter volume of fronto-parietal regions. Conclusion: Impairment of negative FER in patients with FTD is cross-cultural. The discrete neural correlates of FER indicate that emotional recognition processing is a multi-modal system in the brain.

  13. Gender differences in facial emotion recognition in persons with chronic schizophrenia.

    Science.gov (United States)

    Weiss, Elisabeth M; Kohler, Christian G; Brensinger, Colleen M; Bilker, Warren B; Loughead, James; Delazer, Margarete; Nolan, Karen A

    2007-03-01

    The aim of the present study was to investigate possible sex differences in the recognition of facial expressions of emotion and to investigate the pattern of classification errors in schizophrenic males and females. Such an approach provides an opportunity to inspect the degree to which males and females differ in perceiving and interpreting the different emotions displayed to them and to analyze which emotions are most susceptible to recognition errors. Fifty-six chronically hospitalized schizophrenic patients (38 men and 18 women) completed the Penn Emotion Recognition Test (ER40), a computerized emotion discrimination test presenting 40 color photographs of evoked happy, sad, angry, fearful and neutral expressions balanced for poser gender and ethnicity. We found a significant sex difference in the patterns of error rates in the Penn Emotion Recognition Test. Neutral faces were more commonly mistaken as angry in schizophrenic men, whereas schizophrenic women misinterpreted neutral faces more frequently as sad. Moreover, female faces were better recognized overall, but fear was better recognized in same-gender photographs, whereas anger was better recognized in different-gender photographs. The findings of the present study lend support to the notion that sex differences in aggressive behavior could be related to a cognitive style characterized by hostile attributions to neutral faces in schizophrenic men.

  14. What is the relationship between the recognition of emotions and core beliefs: Associations between the recognition of emotions in facial expressions and the maladaptive schemas in depressed patients.

    Science.gov (United States)

    Csukly, Gábor; Telek, Rita; Filipovits, Dóra; Takács, Barnabás; Unoka, Zsolt; Simon, Lajos

    2011-03-01

    Depressed patients are both characterized by social reality distorting maladaptive schemas and facial expression recognition impairments. The aim of the present study was to identify specific associations among symptom severity of depression, early maladaptive schemas and recognition patterns of facially expressed emotions. The subjects were inpatients, diagnosed with depression. We used 2 virtual humans for presenting the basic emotions to assess emotion recognition. The Symptom Check List 90 (SCL-90) was used as a self-report measure of psychiatric symptoms and the Beck Depression Inventory (BDI) was applied to assess symptoms of depression. The Young Schema Questionnaire Long Form (YSQ-L) was used to assess the presence of early maladaptive schemas. The recognition rate for happiness showed significant associations with both the BDI and the depression subscale of the SCL-90. After performing the second order factor analysis of the YSQ-L, we found statistically significant associations between the recognition indices of specific emotions and the main factors of the YSQ-L. In this study we found correlations between maladaptive schemas and emotion recognition impairments. While both domains likely contribute to the symptoms of depression, we believe that the results will help us to better understand the social cognitive deficits of depressed patients at the schema level and at the emotion recognition level. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Face puzzle—two new video-based tasks for measuring explicit and implicit aspects of facial emotion recognition

    Science.gov (United States)

    Kliemann, Dorit; Rosenblau, Gabriela; Bölte, Sven; Heekeren, Hauke R.; Dziobek, Isabel

    2013-01-01

    Recognizing others' emotional states is crucial for effective social interaction. While most facial emotion recognition tasks use explicit prompts that trigger consciously controlled processing, emotional faces are almost exclusively processed implicitly in real life. Recent attempts in social cognition suggest a dual process perspective, whereby explicit and implicit processes largely operate independently. However, due to differences in methodology the direct comparison of implicit and explicit social cognition has remained a challenge. Here, we introduce a new tool to comparably measure implicit and explicit processing aspects comprising basic and complex emotions in facial expressions. We developed two video-based tasks with similar answer formats to assess performance in respective facial emotion recognition processes: Face Puzzle, implicit and explicit. To assess the tasks' sensitivity to atypical social cognition and to infer interrelationship patterns between explicit and implicit processes in typical and atypical development, we included healthy adults (NT, n = 24) and adults with autism spectrum disorder (ASD, n = 24). Item analyses yielded good reliability of the new tasks. Group-specific results indicated sensitivity to subtle social impairments in high-functioning ASD. Correlation analyses with established implicit and explicit socio-cognitive measures were further in favor of the tasks' external validity. Between-group comparisons provide first hints of differential relations between implicit and explicit aspects of facial emotion recognition processes in healthy compared to ASD participants. In addition, an increased magnitude of between-group differences in the implicit task was found for a speed-accuracy composite measure. The new Face Puzzle tool thus provides two new tasks to separately assess explicit and implicit social functioning, for instance, to measure subtle impairments as well as potential improvements due to social cognitive training.

  16. The face of fear and anger: Facial width-to-height ratio biases recognition of angry and fearful expressions.

    Science.gov (United States)

    Deska, Jason C; Lloyd, E Paige; Hugenberg, Kurt

    2018-04-01

    The ability to rapidly and accurately decode facial expressions is adaptive for human sociality. Although judgments of emotion are primarily determined by musculature, static face structure can also impact emotion judgments. The current work investigates how facial width-to-height ratio (fWHR), a stable feature of all faces, influences perceivers' judgments of expressive displays of anger and fear (Studies 1a, 1b, & 2), and anger and happiness (Study 3). Across 4 studies, we provide evidence consistent with the hypothesis that perceivers more readily see anger on faces with high fWHR compared with those with low fWHR, which instead facilitates the recognition of fear and happiness. This bias emerges when participants are led to believe that targets displaying otherwise neutral faces are attempting to mask an emotion (Studies 1a & 1b), and is evident when faces display an emotion (Studies 2 & 3). Together, these studies suggest that target facial width-to-height ratio biases ascriptions of emotion with consequences for emotion recognition speed and accuracy. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
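
    Although conventions vary, fWHR is usually computed as bizygomatic width divided by upper-face height (roughly, from the brow to the upper lip). The sketch below shows that arithmetic on hypothetical landmark coordinates; the landmark names and pixel values are invented for illustration.

        # Sketch of a conventional fWHR computation from facial landmarks:
        # bizygomatic width / upper-face height.
        def fwhr(left_zygion, right_zygion, brow_midpoint, upper_lip):
            """Each argument is an (x, y) pixel coordinate; the image origin
            is top-left, with y increasing downward."""
            width = abs(right_zygion[0] - left_zygion[0])
            height = abs(upper_lip[1] - brow_midpoint[1])
            return width / height

        ratio = fwhr(left_zygion=(40, 210), right_zygion=(216, 210),
                     brow_midpoint=(128, 150), upper_lip=(128, 245))
        print(f"fWHR = {ratio:.2f}")  # 176 / 95, a relatively high ratio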

  17. Associations between facial emotion recognition, cognition and alexithymia in patients with schizophrenia: comparison of photographic and virtual reality presentations.

    Science.gov (United States)

    Gutiérrez-Maldonado, J; Rus-Calafell, M; Márquez-Rejón, S; Ribas-Sabaté, J

    2012-01-01

    Emotion recognition is known to be impaired in schizophrenia patients. Although cognitive deficits and symptomatology have been associated with this impairment there are other patient characteristics, such as alexithymia, which have not been widely explored. Emotion recognition is normally assessed by means of photographs, although they do not reproduce the dynamism of human expressions. Our group has designed and validated a virtual reality (VR) task to assess and subsequently train schizophrenia patients. The present study uses this VR task to evaluate the impaired recognition of facial affect in patients with schizophrenia and to examine its association with cognitive deficit and the patients' inability to express feelings. Thirty clinically stabilized outpatients with a well-established diagnosis of schizophrenia or schizoaffective disorder were assessed in neuropsychological, symptomatic and affective domains. They then performed the facial emotion recognition task. Statistical analyses revealed no significant differences between the two presentation conditions (photographs and VR) in terms of overall errors made. However, anger and fear were easier to recognize in VR than in photographs. Moreover, strong correlations were found between psychopathology and the errors made.

  18. The Mysterious Noh Mask: Contribution of Multiple Facial Parts to the Recognition of Emotional Expressions

    Science.gov (United States)

    Miyata, Hiromitsu; Nishimura, Ritsuko; Okanoya, Kazuo; Kawai, Nobuyuki

    2012-01-01

    Background A Noh mask worn by expert actors when performing in a Japanese traditional Noh drama is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. Methodology/Principal Findings In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. The facial images having the mouth of an upward/downward tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. Conclusions/Significance The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically-driven factors over the traditionally

  19. The telltale face: possible mechanisms behind defector and cooperator recognition revealed by emotional facial expression metrics.

    Science.gov (United States)

    Kovács-Bálint, Zsófia; Bereczkei, Tamás; Hernádi, István

    2013-11-01

    In this study, we investigated the role of facial cues in cooperator and defector recognition. First, a face image database was constructed from pairs of full face portraits of target subjects taken at the moment of decision-making in a prisoner's dilemma game (PDG) and in a preceding neutral task. Image pairs with no deficiencies (n = 67) were standardized for orientation and luminance. Then, confidence in defector and cooperator recognition was tested with image rating in a different group of lay judges (n = 62). Results indicate that (1) defectors were better recognized (58% vs. 47%), (2) defectors looked different from cooperators, (3) ratings were biased towards the cooperator category (p < .01), and (4) females were more confident in detecting defectors (p < .05). According to facial microexpression analysis, defection was strongly linked with depressed lower lips and less opened eyes. A significant correlation was found between the intensity of micromimics and the rating of images along the cooperator-defector dimension. In summary, facial expressions can be considered reliable indicators of momentary social dispositions in the PDG. Females may exhibit an evolutionary-based overestimation bias in detecting social visual cues of the defector face. © 2012 The British Psychological Society.

  20. Recognition of facial emotion and perceived parental bonding styles in healthy volunteers and personality disorder patients.

    Science.gov (United States)

    Zheng, Leilei; Chai, Hao; Chen, Wanzhen; Yu, Rongrong; He, Wei; Jiang, Zhengyan; Yu, Shaohua; Li, Huichun; Wang, Wei

    2011-12-01

    Early parental bonding experiences play a role in emotion recognition and expression in later adulthood, and patients with personality disorder frequently experience inappropriate parental bonding styles; therefore, the aim of the present study was to explore whether parental bonding style is correlated with recognition of facial emotion in personality disorder patients. The Parental Bonding Instrument (PBI) and the Matsumoto and Ekman Japanese and Caucasian Facial Expressions of Emotion (JACFEE) photo set tests were carried out in 289 participants. Patients scored lower on parental Care but higher on parental Freedom Control and Autonomy Denial subscales, and they displayed less accuracy when recognizing contempt, disgust and happiness than the healthy volunteers. In healthy volunteers, maternal Autonomy Denial significantly predicted accuracy when recognizing fear, and maternal Care predicted the accuracy of recognizing sadness. In patients, paternal Care negatively predicted the accuracy of recognizing anger, paternal Freedom Control predicted the perceived intensity of contempt, and maternal Care predicted the accuracy of recognizing sadness and the perceived intensity of disgust. Parental bonding styles have an impact on the decoding process and sensitivity when recognizing facial emotions, especially in personality disorder patients. © 2011 The Authors. Psychiatry and Clinical Neurosciences © 2011 Japanese Society of Psychiatry and Neurology.

  1. Reduced Recognition of Dynamic Facial Emotional Expressions and Emotion-Specific Response Bias in Children with an Autism Spectrum Disorder

    Science.gov (United States)

    Evers, Kris; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2015-01-01

    Emotion labelling was evaluated in two matched samples of 6-14-year old children with and without an autism spectrum disorder (ASD; N = 45 and N = 50, resp.), using six dynamic facial expressions. The Emotion Recognition Task proved to be valuable demonstrating subtle emotion recognition difficulties in ASD, as we showed a general poorer emotion…

  2. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    Science.gov (United States)

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…

  3. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind.

    Science.gov (United States)

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J; Sadato, Norihiro

    2013-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  4. Facial expression recognition under partial occlusion based on fusion of global and local features

    Science.gov (United States)

    Wang, Xiaohua; Xia, Chen; Hu, Min; Ren, Fuji

    2018-04-01

    Facial expression recognition under partial occlusion is a challenging research area. This paper proposes a novel framework for facial expression recognition under occlusion by fusing global and local features. In the global aspect, information entropy is first employed to locate the occluded region. Second, a Principal Component Analysis (PCA) method is adopted to reconstruct the occluded region of the image. After that, a replacement strategy is applied to reconstruct the image by replacing the occluded region with the corresponding region of the best-matched image in the training set, and a Pyramid Weber Local Descriptor (PWLD) feature is then extracted. At last, the outputs of the SVM are fitted to the probabilities of the target class by using a sigmoid function. For the local aspect, an overlapping block-based method is adopted to extract WLD features, each block is weighted adaptively by information entropy, and Chi-square distance and similar-block summation methods are then applied to obtain the probability of each emotion class. Finally, fusion at the decision level is employed to combine the global and local features based on Dempster-Shafer theory of evidence. Experimental results on the Cohn-Kanade and JAFFE databases demonstrate the effectiveness and fault tolerance of this method.
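
    Decision-level fusion with Dempster-Shafer theory, as invoked above, combines two sources' beliefs with Dempster's rule of combination. In the special case where each classifier assigns mass only to singleton emotion hypotheses, the rule reduces to a normalized elementwise product, which is what the sketch below implements; the mass vectors are invented, and this is not the paper's implementation.

        # Minimal Dempster-Shafer fusion sketch for singleton-only masses:
        # Dempster's rule then reduces to a normalized elementwise product.
        import numpy as np

        EMOTIONS = ["anger", "disgust", "fear", "happy", "sad", "surprise"]

        def dempster_combine(m1, m2):
            joint = m1 * m2               # agreement mass on each singleton
            conflict = 1.0 - joint.sum()  # mass lost to conflicting pairs
            if np.isclose(conflict, 1.0):
                raise ValueError("total conflict: sources cannot be combined")
            return joint / (1.0 - conflict)

        m_global = np.array([0.05, 0.05, 0.10, 0.55, 0.15, 0.10])  # global SVM
        m_local = np.array([0.10, 0.05, 0.05, 0.45, 0.25, 0.10])   # local WLD

        fused = dempster_combine(m_global, m_local)
        print(EMOTIONS[int(np.argmax(fused))], fused.round(3))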

  5. RGB-D-T based Face Recognition

    DEFF Research Database (Denmark)

    Nikisins, Olegs; Nasrollahi, Kamal; Greitans, Modris

    2014-01-01

    Facial images are of critical importance in many real-world applications from gaming to surveillance. The current literature on facial image analysis, from face detection to face and facial expression recognition, mainly works in either RGB, Depth (D), or both of these modalities. However, such analyses have rarely included the Thermal (T) modality. This paper paves the way for performing such facial analyses using synchronized RGB-D-T facial images by introducing a database of 51 persons including facial images under different rotations, illuminations, and expressions. Furthermore, a face recognition algorithm has been developed to use these images. The experimental results show that face recognition using these three modalities provides better results than face recognition in any single modality in most cases.
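
    The multimodal advantage reported here is most simply realized as score-level fusion of per-modality matchers. The sketch below fuses synthetic RGB, depth, and thermal similarity scores with assumed weights; the scores, weights, and fusion rule are illustrative stand-ins, not the paper's own recognition algorithm.

        # Hedged sketch of score-level fusion for RGB-D-T face recognition.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 51  # matches the database size above; the scores are synthetic
        boost = {"rgb": 0.6, "depth": 0.4, "thermal": 0.5}
        scores = {}
        for m in ("rgb", "depth", "thermal"):
            scores[m] = rng.random((n, n))
            scores[m][np.diag_indices(n)] += boost[m]  # favor true mates

        weights = {"rgb": 0.4, "depth": 0.3, "thermal": 0.3}  # assumed
        fused = sum(w * scores[m] for m, w in weights.items())

        def rank1(S):
            """Fraction of probes whose top match is the true mate."""
            return np.mean(np.argmax(S, axis=1) == np.arange(len(S)))

        for m in scores:
            print(f"rank-1 with {m} only: {rank1(scores[m]):.2f}")
        print(f"rank-1 fused: {rank1(fused):.2f}")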

  6. Borrowed beauty? Understanding identity in Asian facial cosmetic surgery.

    Science.gov (United States)

    Aquino, Yves Saint James; Steinkamp, Norbert

    2016-09-01

    This review aims to identify (1) the sources of knowledge and (2) the important themes of the ethical debate related to surgical alteration of facial features in East Asians. This article integrates narrative and systematic review methods. In March 2014, we searched databases including PubMed, Philosopher's Index, Web of Science, Sociological Abstracts, and Communication Abstracts using the key terms "cosmetic surgery," "ethnic*," "ethics," "Asia*," and "Western*." The study included all types of papers written in English that discuss the debate on rhinoplasty and blepharoplasty in East Asians. No limit was put on date of publication. Combining both narrative and systematic review methods, a total of 31 articles were critically appraised on their contribution to ethical reflection on the debates regarding the surgical alteration of Asian features. Sources of knowledge were drawn from four main disciplines: the humanities, medicine or surgery, communications, and economics. Focusing on cosmetic surgery perceived as a westernising practice, the key debate themes included authenticity of identity, interpersonal relationships, and socio-economic utility in the context of Asian culture. The study shows how cosmetic surgery of ethnic features plays an important role in understanding female identity in the Asian context. Based on the debate themes of authenticity of identity, interpersonal relationships, and socio-economic utility, this article argues that identity should be understood as less individualistic and more relational and transformational in the Asian context. In addition, this article proposes to consider cosmetic surgery of Asian features as an interplay of cultural imperialism and cultural nationalism, both of which can be a source of social pressure to modify one's appearance.

  7. Deficits in Facial Expression Recognition in Male Adolescents with Early-Onset or Adolescence-Onset Conduct Disorder

    Science.gov (United States)

    Fairchild, Graeme; Van Goozen, Stephanie H. M.; Calder, Andrew J.; Stollery, Sarah J.; Goodyer, Ian M.

    2009-01-01

    Background: We examined whether conduct disorder (CD) is associated with deficits in facial expression recognition and, if so, whether these deficits are specific to the early-onset form of CD, which emerges in childhood. The findings could potentially inform the developmental taxonomic theory of antisocial behaviour, which suggests that…

  8. Externalizing and Internalizing Symptoms Moderate Longitudinal Patterns of Facial Emotion Recognition in Autism Spectrum Disorder

    Science.gov (United States)

    Rosen, Tamara E.; Lerner, Matthew D.

    2016-01-01

    Facial emotion recognition (FER) is thought to be a key deficit domain in autism spectrum disorder (ASD). However, the extant literature is based solely on cross-sectional studies; thus, little is known about even short-term intra-individual dynamics of FER in ASD over time. The present study sought to examine trajectories of FER in ASD youth over…

  9. Tolerance to spatial-relational transformations in unfamiliar faces: A further challenge to a configural processing account of identity recognition.

    Science.gov (United States)

    Lorenzino, Martina; Caminati, Martina; Caudek, Corrado

    2018-05-25

    One of the most important questions in face perception research is to understand what information is extracted from a face in order to recognize its identity. Recognition of facial identity has been attributed to a special sensitivity to "configural" information. However, recent studies have challenged the configural account by showing that participants are poor at discriminating variations of metric distances among facial features, especially for familiar as opposed to unfamiliar faces, whereas a configural account predicts the opposite. We aimed to extend these previous results by examining classes of unfamiliar faces with which we have different levels of expertise. We hypothesized an inverse relation between sensitivity to configural information and expertise with a given class of faces, but only for neutral expressions. By first matching perceptual discriminability, we measured tolerance to subtle configural transformations with same-race (SR) versus other-race (OR) faces, and with upright versus upside-down faces. Consistent with our predictions, we found a lower sensitivity to at-threshold configural changes for SR compared to OR faces. We also found that, for our stimuli, the face inversion effect disappeared for neutral but not for emotional faces - a result that can also be attributed to a lower sensitivity to configural transformations for faces presented in a more familiar orientation. The present findings question a purely configural account of face processing and suggest that the role of spatial-relational information in face processing varies according to the functional demands of the task and to the characteristics of the stimuli. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. A novel hybrid biometric electronic voting system: integrating finger print and face recognition

    International Nuclear Information System (INIS)

    Najam, S.S.; Shaikh, A.Z.; Naqvi, S.

    2018-01-01

    A novel hybrid-design electronic voting system is proposed, implemented, and analyzed. The proposed system uses two voter verification techniques to give better results than single-identification-based systems. Fingerprint and facial recognition methods are used for voter identification, since cross verification of a voter during an election process provides better accuracy than single-parameter identification. The facial recognition system uses the Viola-Jones algorithm along with rectangular Haar feature selection to detect faces and extract features, both to develop a biometric template and for feature extraction during the voting process. Cascaded machine-learning classifiers, using GPCA (Generalized Principal Component Analysis) and K-NN (K-Nearest Neighbor), compare features for identity verification; this is accomplished by comparing the eigenvectors of the extracted features with the biometric template pre-stored in the election regulatory body's database. The results show that the proposed cascaded design performs better than systems using other classifiers or separate schemes, i.e., facial-only or fingerprint-only schemes. The proposed system is highly suitable for real-time applications because it achieves 91% facial recognition accuracy under nominal lighting. (author)
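
    The facial half of this pipeline can be approximated with standard tools. The sketch below uses OpenCV's stock Viola-Jones Haar cascade for detection and substitutes plain PCA plus a 1-nearest-neighbour match for the paper's GPCA and cascaded classifiers; the enrolment data are random placeholders, so treat this as a structural sketch rather than the authors' implementation.

        import cv2
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier

        # Viola-Jones detector with the stock Haar cascade shipped with OpenCV.
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def face_vector(gray, size=(64, 64)):
            """Detect the largest face in a grayscale image, crop and flatten it."""
            boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(boxes) == 0:
                return None
            x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
            return cv2.resize(gray[y:y + h, x:x + w], size).astype(np.float32).ravel()

        # Stand-in enrolment data: 20 voters x 5 images each. Random vectors here;
        # in a real system these would come from face_vector on enrolment photos.
        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(100, 64 * 64))
        y_train = np.repeat(np.arange(20), 5)

        pca = PCA(n_components=50)                 # project onto leading eigenvectors
        knn = KNeighborsClassifier(n_neighbors=1)  # nearest-template match
        knn.fit(pca.fit_transform(X_train), y_train)

        probe = X_train[7] + rng.normal(scale=0.1, size=64 * 64)  # noisy re-capture
        predicted_voter = knn.predict(pca.transform(probe[None, :]))[0]
        print("matched voter id:", predicted_voter)

    In the voting scenario, the predicted identity would then be cross-checked against the fingerprint match before the ballot is accepted.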

  11. Emotion recognition training using composite faces generalises across identities but not all emotions.

    Science.gov (United States)

    Dalili, Michael N; Schofield-Toloza, Lawrence; Munafò, Marcus R; Penton-Voak, Ian S

    2017-08-01

    Many cognitive bias modification (CBM) tasks use facial expressions of emotion as stimuli. Some tasks use unique facial stimuli, while others use composite stimuli, given evidence that emotion is encoded prototypically. However, CBM using composite stimuli may be identity- or emotion-specific, and may not generalise to other stimuli. We investigated the generalisability of effects obtained with composite faces in two experiments. Healthy adults in each study were randomised to one of four training conditions: two stimulus-congruent conditions, where the same faces were used during all phases of the task, and two stimulus-incongruent conditions, where faces of the opposite sex (Experiment 1) or faces depicting another emotion (Experiment 2) were used after the modification phase. Our results suggested that training effects generalised across identities but showed only partial generalisation across emotions. These findings suggest that effects obtained using composite stimuli may extend beyond the stimuli used in the task but remain emotion-specific.

  12. Facial Action Unit Recognition under Incomplete Data Based on Multi-label Learning with Missing Labels

    KAUST Repository

    Li, Yongqiang

    2016-07-07

    Facial action unit (AU) recognition has been applied in a wide range of fields and has attracted great attention in the past two decades. Most existing works on AU recognition assume that a complete label assignment for each training image is available, which is often not the case in practice. Labeling AUs is an expensive and time-consuming process. Moreover, due to AU ambiguity and subjective differences, some AUs are difficult to label reliably and confidently. Many AU recognition works train a classifier for each AU independently, which is computationally costly and ignores the dependencies among different AUs. In this work, we formulate AU recognition under incomplete data as a multi-label learning with missing labels (MLML) problem. Most existing MLML methods employ the same features for all classes. However, we find this setting unreasonable for AU recognition, as different AUs produce changes in skin-surface displacement or facial appearance in different face regions. If shared features are used for all AUs, much noise is introduced by the occurrence of other AUs; consequently, the changes caused by the specific AU cannot be clearly highlighted, degrading performance. Instead, we propose to extract the most discriminative features for each AU individually, learned in a supervised manner. The learned features are further embedded into the instance-level label smoothness term of our model, which also includes label consistency and class-level label smoothness terms. Both a global solution using st-cut and an approximate solution using conjugate gradient (CG) descent are provided. Experiments on both posed and spontaneous facial expression databases demonstrate the superiority of the proposed method in comparison with several state-of-the-art works.
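
    The missing-label aspect of the MLML formulation can be made concrete with a small sketch: a per-entry mask confines the loss to observed AU labels, so unlabeled entries contribute no training signal. This masked cross-entropy is a minimal stand-in for illustration only; the paper's label consistency and smoothness terms are not reproduced here.

        import numpy as np

        def masked_bce(scores, labels):
            """Binary cross-entropy over AUs, ignoring missing labels.

            scores : (n_samples, n_aus) predicted probabilities in (0, 1)
            labels : (n_samples, n_aus) with 1 (present), 0 (absent), -1 (missing)
            """
            mask = labels >= 0                      # observed entries only
            y = labels.clip(min=0).astype(float)
            p = np.clip(scores, 1e-7, 1 - 1e-7)
            loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
            return (loss * mask).sum() / mask.sum()

        scores = np.array([[0.9, 0.2, 0.6],
                           [0.1, 0.8, 0.5]])
        labels = np.array([[1, 0, -1],              # third AU unlabeled
                           [0, -1, 1]])
        print(round(masked_bce(scores, labels), 4))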

  13. Facial Action Unit Recognition under Incomplete Data Based on Multi-label Learning with Missing Labels

    KAUST Repository

    Li, Yongqiang; Wu, Baoyuan; Ghanem, Bernard; Zhao, Yongping; Yao, Hongxun; Ji, Qiang

    2016-01-01

    Facial action unit (AU) recognition has been applied in a wide range of fields and has attracted great attention in the past two decades. Most existing works on AU recognition assume that a complete label assignment for each training image is available, which is often not the case in practice. Labeling AUs is an expensive and time-consuming process. Moreover, due to AU ambiguity and subjective differences, some AUs are difficult to label reliably and confidently. Many AU recognition works train a classifier for each AU independently, which is computationally costly and ignores the dependencies among different AUs. In this work, we formulate AU recognition under incomplete data as a multi-label learning with missing labels (MLML) problem. Most existing MLML methods employ the same features for all classes. However, we find this setting unreasonable for AU recognition, as different AUs produce changes in skin-surface displacement or facial appearance in different face regions. If shared features are used for all AUs, much noise is introduced by the occurrence of other AUs; consequently, the changes caused by the specific AU cannot be clearly highlighted, degrading performance. Instead, we propose to extract the most discriminative features for each AU individually, learned in a supervised manner. The learned features are further embedded into the instance-level label smoothness term of our model, which also includes label consistency and class-level label smoothness terms. Both a global solution using st-cut and an approximate solution using conjugate gradient (CG) descent are provided. Experiments on both posed and spontaneous facial expression databases demonstrate the superiority of the proposed method in comparison with several state-of-the-art works.

  14. Differences in Facial Emotion Recognition between First Episode Psychosis, Borderline Personality Disorder and Healthy Controls.

    Directory of Open Access Journals (Sweden)

    Ana Catalan

    Facial emotion recognition (FER) is essential to guide social functioning and behaviour for interpersonal communication. FER may be altered in severe mental illness, such as in psychosis and in borderline personality disorder, but it is unclear whether these FER alterations are specifically related to psychosis. Awareness of FER alterations may be useful in clinical settings to improve treatment strategies. The aim of our study was to examine FER in patients with severe mental disorder and its relation with psychotic symptomatology. Socio-demographic and clinical variables were collected. Alterations in emotion recognition were assessed in three groups: patients with first-episode psychosis (FEP; n = 64), borderline personality disorder patients (BPD; n = 37), and healthy controls (n = 137), using the Degraded Facial Affect Recognition Task. The Positive and Negative Syndrome Scale, the Structured Interview for Schizotypy Revised, and the Community Assessment of Psychic Experiences scales were used to assess positive psychotic symptoms, and WAIS III subtests were used to assess IQ. Kruskal-Wallis analysis showed a significant difference in FER scores for neutral faces between FEP patients, BPD patients, and controls, and between FEP patients and controls in angry-face recognition. No significant differences were found between groups in the fear or happy conditions. There was a significant difference between groups in the attribution of negative emotion to happy faces: the BPD and FEP groups had a much higher tendency to recognize happy faces as negative. There was no association with the different symptom domains in either group. FEP and BPD patients have problems recognizing neutral faces more frequently than controls. Moreover, patients tend to over-report negative emotions in the recognition of happy faces. Although no relation between psychotic symptoms and FER alterations was found, these deficits could contribute to patients' misinterpretations in daily life.

  15. Mapping face recognition information use across cultures

    Directory of Open Access Journals (Sweden)

    Sébastien eMiellet

    2013-02-01

    Face recognition is not rooted in a universal eye-movement information-gathering strategy. Western observers favor a local facial feature sampling strategy, whereas Eastern observers prefer to sample face information from a global, central fixation strategy. Yet the precise qualitative (the diagnostic) and quantitative (the amount) information underlying these cultural perceptual biases in face recognition remains undetermined. To this end, we monitored the eye movements of Western and Eastern observers during a face recognition task with a novel gaze-contingent technique: the Expanding Spotlight. We used 2° Gaussian apertures centered on the observers' fixations, expanding dynamically at a rate of 1° every 25 ms at each fixation - the longer the fixation duration, the larger the aperture size. Identity-specific face information was displayed only within the Gaussian aperture; outside the aperture, an average face template was displayed to facilitate saccade planning. Thus, the Expanding Spotlight simultaneously maps out the facial information span at each fixation location. Data obtained with the Expanding Spotlight technique confirmed that Westerners extract more information from the eye region, whereas Easterners extract more information from the nose region. Interestingly, this quantitative difference was paired with a qualitative disparity. Retinal filters based on spatial-frequency decomposition built from the fixation maps revealed that Westerners used local high-spatial-frequency information sampling, covering all the features critical for effective face recognition (the eyes and the mouth). In contrast, Easterners achieved a similar result by using global low-spatial-frequency information from those facial features. Our data show that the face system flexibly engages local or global eye-movement strategies across cultures, relying on a distinct facial information span and culturally tuned, spatially filtered information.
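
    The gaze-contingent display can be sketched as a weight map that blends identity-specific face pixels with the average template. The sketch below uses the stated 2° base aperture and 1° growth per 25 ms, but treats the aperture size as the Gaussian sigma and assumes a hypothetical degrees-per-pixel conversion; both mappings are assumptions of this illustration, not specifications from the study.

        import numpy as np

        def expanding_spotlight(shape, fix_xy, fix_ms, deg_per_px=0.05,
                                base_deg=2.0, growth_deg_per_25ms=1.0):
            """Gaussian aperture centred on the current fixation, growing with
            fixation duration (2 deg base, +1 deg per 25 ms, per the abstract).

            Returns a weight map in [0, 1]: 1 = identity-specific face pixels,
            0 = average face template shown to support saccade planning.
            """
            sigma_deg = base_deg + growth_deg_per_25ms * (fix_ms / 25.0)
            sigma_px = sigma_deg / deg_per_px          # assumed px conversion
            ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
            d2 = (xs - fix_xy[0]) ** 2 + (ys - fix_xy[1]) ** 2
            return np.exp(-d2 / (2.0 * sigma_px ** 2))

        w = expanding_spotlight((256, 256), fix_xy=(128, 110), fix_ms=150)
        # blended = w * identity_face + (1 - w) * average_face_template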

  16. Facial emotion recognition deficits following moderate-severe Traumatic Brain Injury (TBI): re-examining the valence effect and the role of emotion intensity.

    Science.gov (United States)

    Rosenberg, Hannah; McDonald, Skye; Dethier, Marie; Kessels, Roy P C; Westbrook, R Frederick

    2014-11-01

    Many individuals who sustain moderate-severe traumatic brain injuries (TBI) are poor at recognizing emotional expressions, with a greater impairment in recognizing negative (e.g., fear, disgust, sadness, and anger) than positive emotions (e.g., happiness and surprise). It has been questioned whether this "valence effect" might be an artifact of the wide use of static facial emotion stimuli (usually full-blown expressions) which differ in difficulty rather than a real consequence of brain impairment. This study aimed to investigate the valence effect in TBI, while examining emotion recognition across different intensities (low, medium, and high). Twenty-seven individuals with TBI and 28 matched control participants were tested on the Emotion Recognition Task (ERT). The TBI group was more impaired in overall emotion recognition, and less accurate recognizing negative emotions. However, examining the performance across the different intensities indicated that this difference was driven by some emotions (e.g., happiness) being much easier to recognize than others (e.g., fear and surprise). Our findings indicate that individuals with TBI have an overall deficit in facial emotion recognition, and that both people with TBI and control participants found some emotions more difficult than others. These results suggest that conventional measures of facial affect recognition that do not examine variance in the difficulty of emotions may produce erroneous conclusions about differential impairment. They also cast doubt on the notion that dissociable neural pathways underlie the recognition of positive and negative emotions, which are differentially affected by TBI and potentially other neurological or psychiatric disorders.

  17. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind

    Science.gov (United States)

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T.; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J.; Sadato, Norihiro

    2012-01-01

    Face perception is critical for social communication. Given its fundamental importance over the course of evolution, innate neural mechanisms may anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of the neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions haptically, and surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in sighted subjects are involved in both haptic and visual recognition of facial expressions. Here, we conducted psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within the brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system underlying the recognition of basic facial expressions develops supramodally, even in the absence of early visual experience. PMID:23372547

  18. Exploring Children's Face-Space: A Multidimensional Scaling Analysis of the Mental Representation of Facial Identity

    Science.gov (United States)

    Nishimura, Mayu; Maurer, Daphne; Gao, Xiaoqing

    2009-01-01

    We explored differences in the mental representation of facial identity between 8-year-olds and adults. The 8-year-olds and adults made similarity judgments of a homogeneous set of faces (individual hair cues removed) using an "odd-man-out" paradigm. Multidimensional scaling (MDS) analyses were performed to represent perceived similarity of faces…

  19. Delineation of facial archetypes by 3d averaging.

    Science.gov (United States)

    Shaweesh, Ashraf I; Thomas, C David L; Bankier, Agnes; Clement, John G

    2004-10-01

    The objective of this study was to investigate the feasibility of creating archetypal 3D faces through computerized 3D facial averaging. A Fiore 3D surface scanner and its software were used to acquire the 3D scans of the faces, while 3D Rugle3 and locally developed software generated the holistic facial averages. 3D facial averages were created from two ethnic groups, European and Japanese, and from children with three genetic disorders (Williams syndrome, achondroplasia, and Sotos syndrome) as well as from a normal control group. The method averaged the corresponding depth (z) coordinates of the 3D facial scans. Compared with other face-averaging techniques, there was no warping or filling-in of spaces by interpolation; however, the facial average lacked colour information. The results showed that as few as 14 faces were sufficient to create an archetypal facial average. In turn, this would make it practical to use face averaging as an identification tool in cases where it would be difficult to recruit a larger number of participants. In generating the average, correcting for size differences among faces was shown to adjust the average outlines of the facial features. It is assumed that 3D facial averaging would help in the identification of the ethnic status of persons whose identity is not known with certainty. In clinical medicine, it would have great potential for the diagnosis of syndromes with distinctive facial features. The system would also assist in the education of clinicians in the recognition and identification of such syndromes.
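
    The z-coordinate averaging described above amounts to a per-position mean over scans that have already been registered to a common (x, y) grid. A minimal sketch, assuming aligned depth maps of equal size, with NaN marking missing data, and with the size-correction step omitted:

        import numpy as np

        def average_faces(depth_maps):
            """Average co-registered 3D face scans by averaging the depth (z)
            value at every (x, y) grid position; no warping or interpolation.

            depth_maps : array of shape (n_faces, h, w); NaN marks holes.
            """
            stack = np.asarray(depth_maps, dtype=float)
            return np.nanmean(stack, axis=0)    # per-position mean

        # Toy demo: 14 synthetic "scans" of a bumpy surface with small noise,
        # echoing the finding that 14 faces suffice for a stable archetype.
        rng = np.random.default_rng(1)
        base = np.fromfunction(lambda y, x: np.sin(x / 20) + np.cos(y / 25),
                               (120, 100))
        scans = base + rng.normal(scale=0.05, size=(14, 120, 100))
        archetype = average_faces(scans)
        print(archetype.shape)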

  20. Improved RGB-D-T based Face Recognition

    DEFF Research Database (Denmark)

    Oliu Simon, Marc; Corneanu, Ciprian; Nasrollahi, Kamal

    2016-01-01

    Reliable facial recognition systems are of crucial importance in various applications from entertainment to security. Thanks to the deep-learning concepts introduced in the field, a significant improvement in the performance of unimodal facial recognition systems has been observed in recent years. At the same time, multimodal facial recognition is a promising approach. This paper combines the latest successes in both directions by applying deep learning Convolutional Neural Networks (CNN) to the multimodal RGB-D-T based facial recognition problem, outperforming previously published results.
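
    A minimal late-fusion architecture in the spirit of this combination (one small convolutional branch per modality, concatenated before the identity classifier) might look like the PyTorch sketch below; the layer sizes and fusion point are illustrative choices, not the network from the paper.

        import torch
        import torch.nn as nn

        class Stream(nn.Sequential):
            """Small convolutional branch for one modality (RGB, depth, thermal)."""
            def __init__(self, in_ch):
                super().__init__(
                    nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Flatten())

        class RgbDtNet(nn.Module):
            """Late-fusion CNN: one branch per modality, features concatenated."""
            def __init__(self, n_ids, size=64):
                super().__init__()
                self.rgb, self.depth, self.thermal = Stream(3), Stream(1), Stream(1)
                feat = 32 * (size // 4) ** 2
                self.head = nn.Linear(3 * feat, n_ids)

            def forward(self, rgb, depth, thermal):
                z = torch.cat([self.rgb(rgb), self.depth(depth),
                               self.thermal(thermal)], dim=1)
                return self.head(z)

        net = RgbDtNet(n_ids=51)                 # 51 persons, as in the database
        logits = net(torch.randn(2, 3, 64, 64),  # RGB batch
                     torch.randn(2, 1, 64, 64),  # depth batch
                     torch.randn(2, 1, 64, 64))  # thermal batch
        print(logits.shape)                      # torch.Size([2, 51])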

  1. A Novel Hybrid Biometric Electronic Voting System: Integrating Finger Print and Face Recognition

    Directory of Open Access Journals (Sweden)

    Shahram Najam

    2018-01-01

    A novel hybrid-design electronic voting system is proposed, implemented, and analyzed. The proposed system uses two voter verification techniques to give better results than single-identification-based systems. Fingerprint and facial recognition methods are used for voter identification, since cross verification of a voter during an election process provides better accuracy than single-parameter identification. The facial recognition system uses the Viola-Jones algorithm along with rectangular Haar feature selection to detect faces and extract features, both to develop a biometric template and for feature extraction during the voting process. Cascaded machine-learning classifiers, using GPCA (Generalized Principal Component Analysis) and K-NN (K-Nearest Neighbor), compare features for identity verification; this is accomplished by comparing the eigenvectors of the extracted features with the biometric template pre-stored in the election regulatory body's database. The results show that the proposed cascaded design performs better than systems using other classifiers or separate schemes, i.e., facial-only or fingerprint-only schemes. The proposed system is highly suitable for real-time applications because it achieves 91% facial recognition accuracy under nominal lighting.

  2. Recognition of facial expressions is moderated by Islamic cues.

    Science.gov (United States)

    Kret, Mariska E; Fischer, Agneta H

    2018-05-01

    Recognising emotions from faces that are partly covered is more difficult than from fully visible faces. The focus of the present study is on the role of an Islamic versus non-Islamic context, i.e. Islamic versus non-Islamic headdress in perceiving emotions. We report an experiment that investigates whether briefly presented (40 ms) facial expressions of anger, fear, happiness and sadness are perceived differently when covered by a niqāb or turban, compared to a cap and shawl. In addition, we examined whether oxytocin, a neuropeptide regulating affection, bonding and cooperation between ingroup members and fostering outgroup vigilance and derogation, would differentially impact on emotion recognition from wearers of Islamic versus non-Islamic headdresses. The results first of all show that the recognition of happiness was more accurate when the face was covered by a Western compared to Islamic headdress. Second, participants more often incorrectly assigned sadness to a face covered by an Islamic headdress compared to a cap and shawl. Third, when correctly recognising sadness, they did so faster when the face was covered by an Islamic compared to Western headdress. Fourth, oxytocin did not modulate any of these effects. Implications for theorising about the role of group membership on emotion perception are discussed.

  3. Are event-related potentials to dynamic facial expressions of emotion related to individual differences in the accuracy of processing facial expressions and identity?

    Science.gov (United States)

    Recio, Guillermo; Wilhelm, Oliver; Sommer, Werner; Hildebrandt, Andrea

    2017-04-01

    Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

  4. Children's Representations of Facial Expression and Identity: Identity-Contingent Expression Aftereffects

    Science.gov (United States)

    Vida, Mark D.; Mondloch, Catherine J.

    2009-01-01

    This investigation used adaptation aftereffects to examine developmental changes in the perception of facial expressions. Previous studies have shown that adults' perceptions of ambiguous facial expressions are biased following adaptation to intense expressions. These expression aftereffects are strong when the adapting and probe expressions share…

  5. Facial Emotion Recognition Impairment in Patients with Parkinson's Disease and Isolated Apathy

    Directory of Open Access Journals (Sweden)

    Mercè Martínez-Corral

    2010-01-01

    Apathy is a frequent feature of Parkinson's disease (PD), usually related to executive dysfunction. However, in a subgroup of PD patients, apathy may represent the only or the predominant neuropsychiatric feature. To understand the mechanisms underlying apathy in PD, we investigated emotional processing in PD patients with and without apathy and in healthy controls (HC), assessed by a facial emotion recognition task (FERT). We excluded PD patients with cognitive impairment, depression, other affective disturbances, and previous surgery for PD. PD patients with apathy scored significantly worse on the FERT, performing worse in fear, anger, and sadness recognition. No differences, however, were found between nonapathetic PD patients and HC. These findings suggest a disruption of emotional-affective processing in cognitively preserved PD patients with apathy. The identification of specific dysfunction of limbic structures in PD patients with isolated apathy may have therapeutic and prognostic implications.

  6. Advances in face detection and facial image analysis

    CERN Document Server

    Celebi, M; Smolka, Bogdan

    2016-01-01

    This book presents the state-of-the-art in face detection and analysis. It outlines new research directions, including in particular psychology-based facial dynamics recognition, aimed at various applications such as behavior analysis, deception detection, and diagnosis of various psychological disorders. Topics of interest include face and facial landmark detection, face recognition, facial expression and emotion analysis, facial dynamics analysis, face classification, identification, and clustering, and gaze direction and head pose estimation, as well as applications of face analysis.

  7. Does facial resemblance enhance cooperation?

    Directory of Open Access Journals (Sweden)

    Trang Giang

    Facial self-resemblance has been proposed to serve as a kinship cue that facilitates cooperation between kin. In the present study, facial resemblance was manipulated by morphing stimulus faces with the participants' own faces or with control faces (resulting in self-resemblant or other-resemblant composite faces). A norming study showed that the perceived degree of kinship between the participants and the self-resemblant composite faces was higher than that for actual first-degree relatives. Effects of facial self-resemblance on trust and cooperation were tested in a paradigm that has proven to be sensitive to facial trustworthiness, facial likability, and facial expression. First, participants played a cooperation game in which the composite faces were shown. Then, likability ratings were assessed. In a source memory test, participants were required to identify old and new faces, and were asked to remember whether the faces belonged to cooperators or cheaters in the cooperation game. Old-new recognition was enhanced for self-resemblant faces in comparison to other-resemblant faces. However, facial self-resemblance had no effect on the degree of cooperation in the cooperation game, on the emotional evaluation of the faces as reflected in the likability judgments, or on the expectation that a face belonged to a cooperator rather than to a cheater. Therefore, the present results are clearly inconsistent with the assumption of an evolved kin-recognition module built into the human face recognition system.

  8. Dazzles, decoys, and deities : the Janus face of anti-facial recognition masks

    NARCIS (Netherlands)

    de Vries, P.B.

    2017-01-01

    Over the past few years, a growing number of artists have critiqued the ubiquity of identity recognition technologies. Specifically, the use of these technologies by state security programs, tech giants, and multinational corporations has met with opposition and controversy.

  9. A small-world network model of facial emotion recognition.

    Science.gov (United States)

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks: one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from that of those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
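
    The small-world claim can be checked with standard graph metrics: high clustering and short average path length relative to a random graph of equal size and density. A minimal sketch with networkx, using a Watts-Strogatz graph as a stand-in for the similarity-derived 81-node emotion network (which is not reproduced here):

        import networkx as nx

        def largest_component(G):
            """Return the subgraph on the largest connected component."""
            return G.subgraph(max(nx.connected_components(G), key=len))

        def small_world_sigma(G, seed=0):
            """sigma = (C / C_rand) / (L / L_rand); values well above 1
            indicate small-world structure (high clustering, short paths)."""
            R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(),
                                    seed=seed)
            C, Cr = nx.average_clustering(G), nx.average_clustering(R)
            L = nx.average_shortest_path_length(largest_component(G))
            Lr = nx.average_shortest_path_length(largest_component(R))
            return (C / Cr) / (L / Lr)

        # Stand-in for the 81-node facial emotion similarity network.
        G = nx.connected_watts_strogatz_graph(n=81, k=6, p=0.1, seed=0)
        print(round(small_world_sigma(G), 2))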

  10. Facial emotion recognition in male antisocial personality disorders with or without adult attention deficit hyperactivity disorder.

    Science.gov (United States)

    Bagcioglu, Erman; Isikli, Hasmet; Demirel, Husrev; Sahin, Esat; Kandemir, Eyup; Dursun, Pinar; Yuksek, Erhan; Emul, Murat

    2014-07-01

    We aimed to investigate facial emotion recognition abilities in violent individuals with antisocial personality disorder (ASPD), with or without comorbid attention deficit hyperactivity disorder (ADHD). Photos of happy, surprised, fearful, sad, angry, disgusted, and neutral facial expressions and the Wender Utah Rating Scale were administered to all groups. The mean ages were as follows: antisocial personality disorder with ADHD, 22.0 ± 1.59; pure antisocial individuals, 21.90 ± 1.80; and controls, 22.97 ± 2.85 (p > 0.05). The mean Wender Utah Rating Scale score differed significantly between groups (p < 0.05). Recognition accuracy did not differ significantly between groups (p > 0.05), excluding disgust faces, for which recognition was significantly impaired in the ASPD+ADHD and pure ASPD groups. Antisocial individuals with attention deficit and hyperactivity spent significantly more time on each facial emotion than healthy controls (p < 0.05), and pure antisocial individuals needed more time to recognize disgusted and neutral faces than healthy controls (p < 0.05), with no significant differences between pure antisocial individuals and antisocial individuals with ADHD. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Local binary pattern variants-based adaptive texture features analysis for posed and nonposed facial expression recognition

    Science.gov (United States)

    Sultana, Maryam; Bhatti, Naeem; Javed, Sajid; Jung, Soon Ki

    2017-09-01

    Facial expression recognition (FER) is an important task for various computer vision applications. The task becomes challenging when it requires the detection and encoding of macro- and micro-patterns of facial expressions. We present a two-stage texture feature extraction framework based on local binary pattern (LBP) variants and evaluate its significance in recognizing posed and non-posed facial expressions. We focus on the parametric limitations of the LBP variants and investigate their effects on optimal FER. The size of the local neighborhood is an important parameter of the LBP technique for feature extraction from images. To make the LBP adaptive, we exploit the granulometric information of the facial images to find the local neighborhood size for the extraction of center-symmetric LBP (CS-LBP) features. Our two-stage texture representations consist of an LBP variant and the adaptive CS-LBP features. Among the presented two-stage texture feature extractions, the binarized statistical image features and adaptive CS-LBP features were found to give high FER rates. Evaluation of the adaptive texture features shows performance competitive with the non-adaptive features and higher than other state-of-the-art approaches.
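
    For reference, the center-symmetric LBP compares the four opposite pixel pairs in an 8-neighbourhood (rather than each neighbour against the centre, as plain LBP does), yielding a 4-bit code and a 16-bin histogram per region. A minimal fixed-radius sketch follows; the paper's adaptive, granulometry-driven choice of neighbourhood size is not reproduced, and the threshold value is an illustrative assumption for grayscale intensities in [0, 1].

        import numpy as np

        def cs_lbp(img, threshold=0.01):
            """CS-LBP with 8 neighbours at radius 1: compare the 4 opposite
            pixel pairs around each pixel, giving a 4-bit code (16 bins)."""
            img = np.asarray(img, dtype=float)
            # The 4 centre-symmetric neighbour pairs, as (row, col) offsets
            # into a 3x3 window around each pixel.
            pairs = [((0, 1), (2, 1)),   # N  vs S
                     ((0, 2), (2, 0)),   # NE vs SW
                     ((1, 2), (1, 0)),   # E  vs W
                     ((2, 2), (0, 0))]   # SE vs NW
            h, w = img.shape
            code = np.zeros((h - 2, w - 2), dtype=int)
            for bit, ((y1, x1), (y2, x2)) in enumerate(pairs):
                a = img[y1:y1 + h - 2, x1:x1 + w - 2]
                b = img[y2:y2 + h - 2, x2:x2 + w - 2]
                code |= ((a - b) > threshold).astype(int) << bit
            hist = np.bincount(code.ravel(), minlength=16)
            return hist / hist.sum()

        rng = np.random.default_rng(0)
        patch = rng.random((32, 32))     # stand-in for one face block
        print(cs_lbp(patch).round(3))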

  12. Social and attention-to-detail subclusters of autistic traits differentially predict looking at eyes and face identity recognition ability.

    Science.gov (United States)

    Davis, Joshua; McKone, Elinor; Zirnsak, Marc; Moore, Tirin; O'Kearney, Richard; Apthorp, Deborah; Palermo, Romina

    2017-02-01

    This study distinguished between different subclusters of autistic traits in the general population and examined the relationships between these subclusters, looking at the eyes of faces, and the ability to recognize facial identity. Using the Autism Spectrum Quotient (AQ) measure in a university-recruited sample, we separate the social aspects of autistic traits (i.e., those related to communication and social interaction; AQ-Social) from the non-social aspects, particularly attention-to-detail (AQ-Attention). We provide the first evidence that these social and non-social aspects are associated differentially with looking at eyes: While AQ-Social showed the commonly assumed tendency towards reduced looking at eyes, AQ-Attention was associated with increased looking at eyes. We also report that higher attention-to-detail (AQ-Attention) was then indirectly related to improved face recognition, mediated by increased number of fixations to the eyes during face learning. Higher levels of socially relevant autistic traits (AQ-Social) trended in the opposite direction towards being related to poorer face recognition (significantly so in females on the Cambridge Face Memory Test). There was no evidence of any mediated relationship between AQ-Social and face recognition via reduced looking at the eyes. These different effects of AQ-Attention and AQ-Social suggest face-processing studies in Autism Spectrum Disorder might similarly benefit from considering symptom subclusters. Additionally, concerning mechanisms of face recognition, our results support the view that more looking at eyes predicts better face memory. © 2016 The British Psychological Society.

  13. The change of expression configuration affects identity-dependent expression aftereffect but not identity-independent expression aftereffect

    Directory of Open Access Journals (Sweden)

    Miao eSong

    2015-12-01

    The present study examined the influence of expression configuration on the cross-identity expression aftereffect. The expression configuration refers to the spatial arrangement of facial features in a face conveying an emotion, e.g., an open-mouth smile versus a closed-mouth smile. In the first of two experiments, the expression aftereffect was measured using a cross-identity/cross-expression-configuration factorial design. The facial identities of the test faces were the same as or different from the adaptor, while, orthogonally, the expression configurations of those facial identities were also the same or different. The results show that the change of expression configuration impaired the expression aftereffect when the facial identities of the adaptor and tests were the same; however, this impairment disappeared when the facial identities were different, indicating that the identity-independent expression representation is more robust to changes in expression configuration than the identity-dependent expression representation. In the second experiment, we used schematic line faces as adaptors and real faces as tests to minimize the similarity between the adaptor and the tests, which is expected to exclude the contribution of the identity-dependent expression representation to the expression aftereffect. The second experiment yielded a result similar to the identity-independent expression aftereffect observed in Experiment 1. The findings indicate different neural sensitivities to expression configuration for the identity-dependent and identity-independent expression systems.

  14. Psilocybin biases facial recognition, goal-directed behavior, and mood state toward positive relative to negative emotions through different serotonergic subreceptors.

    Science.gov (United States)

    Kometer, Michael; Schmidt, André; Bachmann, Rosilla; Studerus, Erich; Seifritz, Erich; Vollenweider, Franz X

    2012-12-01

    Serotonin (5-HT) 1A and 2A receptors have been associated with dysfunctional emotional processing biases in mood disorders. These receptors further predominantly mediate the subjective and behavioral effects of psilocybin and might be important for its recently suggested antidepressive effects. However, the effect of psilocybin on emotional processing biases and the specific contribution of 5-HT2A receptors across different emotional domains is unknown. In a randomized, double-blind study, 17 healthy human subjects received on 4 separate days placebo, psilocybin (215 μg/kg), the preferential 5-HT2A antagonist ketanserin (50 mg), or psilocybin plus ketanserin. Mood states were assessed by self-report ratings, and behavioral and event-related potential measurements were used to quantify facial emotion recognition and goal-directed behavior toward emotional cues. Psilocybin enhanced positive mood and attenuated recognition of negative facial expressions. Furthermore, psilocybin increased goal-directed behavior toward positive compared with negative cues, facilitated positive but inhibited negative sequential emotional effects, and valence-dependently attenuated the P300 component. Ketanserin alone had no effects but blocked the psilocybin-induced mood enhancement and the decreased recognition of negative facial expressions. This study shows that psilocybin shifts the emotional bias across various psychological domains and that activation of 5-HT2A receptors is central in mood regulation and emotional face recognition in healthy subjects. These findings may not only have implications for the pathophysiology of dysfunctional emotional biases but may also provide a framework to delineate the mechanisms underlying psilocybin's putative antidepressant effects. Copyright © 2012 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  15. Formations of Femininity: Science and Aesthetics in Facial Feminization Surgery.

    Science.gov (United States)

    Plemons, Eric

    2017-10-01

    Facial feminization surgery (FFS) is a set of bone and soft tissue reconstructive surgical procedures intended to feminize the faces of trans- women in order to make their identities as women recognizable to others. In this article, I explore how the identification of facial femininity was negotiated in two FFS surgeons' practices. One was committed to the metrics of normal skeletal form, the other to an aspirational aesthetics of individual optimization; I argue that the surgeons' competing clinical approaches illustrate a constitutive tension in the proliferating therapeutic logics of trans- medicine. The growing popularity of surgical practices like FFS demonstrates a shift in American trans- therapeutics away from a singular focus on the genitalia as the location of bodily sex and toward understandings of sex as a product of social recognition.

  16. Facial biometrics of Yorubas of Nigeria using Akinlolu-Raji image-processing algorithm

    Directory of Open Access Journals (Sweden)

    Adelaja Abdulazeez Akinlolu

    2016-01-01

    Background: Forensic anthropology deals with the establishment of human identity using genetics, biometrics, and face recognition technology. This study aims to compute facial biometrics of Yorubas of Osun State of Nigeria using a novel Akinlolu-Raji image-processing algorithm. Materials and Methods: Three hundred Yorubas of Osun State (150 males and 150 females), aged 15–33 years, were selected as subjects for the study with informed consent, their Yoruba descent established through parents and grandparents. Height, body weight, and facial biometrics (evaluated on three-dimensional [3D] facial photographs) were measured for all subjects. The novel Akinlolu-Raji image-processing algorithm for forensic face recognition was developed using the modified row method of computer programming. Facial width, total face height, short forehead height, long forehead height, upper face height, nasal bridge length, nose height, morphological face height, and lower face height computed from readings of the Akinlolu-Raji image-processing algorithm were analyzed using the z-test (P ≤ 0.05) of the 2010 Microsoft Excel statistical software. Results: Statistical analyses of facial measurements showed nonsignificantly higher mean values (P > 0.05) in Yoruba males compared to females. Yoruba males and females have the leptoprosopic face type based on classifications of face types from facial indices. Conclusions: The Akinlolu-Raji image-processing algorithm can be employed for computing anthropometric, forensic, diagnostic, or any other measurements on 2D and 3D images, and data computed from its readings can be converted to actual or life sizes as obtained in 1D measurements. Furthermore, Yoruba males and females have the leptoprosopic face type.

  17. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin

    2015-07-29

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors to achieve efficiency and robustness. First, a large set of fiducial facial landmarks on the 2D face images, along with their 3D face scans, are localized using a novel algorithm, namely incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) local image descriptor, in conjunction with the widely used first-order gradient-based SIFT descriptor, is used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed from first-order and second-order surface differential geometry quantities, i.e., the Histogram of mesh Gradients (meshHOG) and the Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature level and score level to further improve the accuracy. Comprehensive experimental results demonstrate impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to state-of-the-art ones. Our multimodal feature-based approach outperforms the others, achieving an average recognition accuracy of 86.32%. Moreover, good generalization ability is shown on the Bosphorus database.
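
    Score-level fusion of the per-descriptor SVM outputs can be illustrated with a weighted average of class posteriors. The sketch below trains two SVMs on random stand-in features for the 2D and 3D channels; the equal weights and toy data are assumptions of this illustration, and the feature-level half of the paper's fusion is omitted.

        import numpy as np
        from sklearn.svm import SVC

        def score_level_fusion(probas, weights=None):
            """Fuse per-descriptor posterior matrices (n_samples x n_classes)
            by a weighted average, then pick the arg-max class."""
            P = np.average(np.stack(probas), axis=0, weights=weights)
            return P.argmax(axis=1)

        # Toy stand-ins for a 2D texture channel and a 3D shape channel.
        rng = np.random.default_rng(0)
        X2d, X3d = rng.normal(size=(60, 10)), rng.normal(size=(60, 8))
        y = rng.integers(0, 6, size=60)            # 6 expression classes

        svm2d = SVC(kernel="rbf", probability=True).fit(X2d, y)
        svm3d = SVC(kernel="rbf", probability=True).fit(X3d, y)
        pred = score_level_fusion([svm2d.predict_proba(X2d),
                                   svm3d.predict_proba(X3d)])
        print(pred[:10])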

  18. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin; Ding, Huaxiong; Huang, Di; Wang, Yunhong; Zhao, Xi; Morvan, Jean-Marie; Chen, Liming

    2015-01-01

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors to achieve efficiency and robustness. First, a large set of fiducial facial landmarks on the 2D face images, along with their 3D face scans, are localized using a novel algorithm, namely incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) local image descriptor, in conjunction with the widely used first-order gradient-based SIFT descriptor, is used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed from first-order and second-order surface differential geometry quantities, i.e., the Histogram of mesh Gradients (meshHOG) and the Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature level and score level to further improve the accuracy. Comprehensive experimental results demonstrate impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to state-of-the-art ones. Our multimodal feature-based approach outperforms the others, achieving an average recognition accuracy of 86.32%. Moreover, good generalization ability is shown on the Bosphorus database.

  19. Effects of Early Neglect Experience on Recognition and Processing of Facial Expressions: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Victoria Doretto

    2018-01-01

    Background: Child neglect is highly prevalent and associated with a series of biological and social consequences. Early neglect may alter the recognition of emotional faces, but its precise impact remains unclear. We aim to review and analyze data from the recent literature on the recognition and processing of facial expressions in individuals with a history of childhood neglect. Methods: We conducted a systematic review using the PubMed, PsycINFO, SciELO, and EMBASE databases to search for studies from the past 10 years. Results: In total, 14 studies were selected and critically reviewed. Heterogeneity was detected across methods and sample frames, and results were mixed across studies. Different forms of alteration to the perception of facial expressions were found across 12 studies. There were alterations to the recognition and processing of both positive and negative emotions, but for emotional face processing the alterations were predominantly toward negative emotions. Conclusions: This is the first review to examine specifically the effects of early neglect as a prevalent form of child maltreatment. The results of this review are inconclusive due to methodological diversity, the use of distinct instruments, and differences in the composition of the samples. Despite these limitations, some studies support our hypothesis that individuals with a history of early neglect may present alterations in the ability to perceive facial expressions of emotion. The article brings relevant information that can help in the development of more effective therapeutic strategies to reduce the impact of neglect on the cognitive and emotional development of the child.

  20. Forensic Facial Reconstruction: The Final Frontier.

    Science.gov (United States)

    Gupta, Sonia; Gupta, Vineeta; Vij, Hitesh; Vij, Ruchieka; Tyagi, Nutan

    2015-09-01

    Forensic facial reconstruction can be used to identify unknown human remains when other techniques fail. In this article, we review the different methods of facial reconstruction reported in the literature. There are several techniques, varying from two-dimensional drawings to three-dimensional clay models. With advances in 3D technology, a rapid, efficient, and cost-effective computerized 3D forensic facial reconstruction method has been developed, which has reduced the degree of error previously encountered. Among the several methods of manual facial reconstruction, the combination Manchester method has been reported to be the best and most accurate for the positive recognition of an individual. Recognition allows the involved government agencies to compile a list of suspected victims. This list can then be narrowed down, and a positive identification may be made by the more conventional methods of forensic medicine. Facial reconstruction makes visual identification by the individual's family and associates easier and more definite.

  1. Action identity: evidence from self-recognition, prediction, and coordination.

    Science.gov (United States)

    Knoblich, Günther; Flach, Rüdiger

    2003-12-01

    Prior research suggests that the action system is responsible for creating an immediate sense of self by determining whether certain sensations and perceptions are the result of one's own actions. In addition, it is assumed that declarative, episodic, or autobiographical memories create a temporally extended sense of self or some form of identity. In the present article, we review recent evidence suggesting that action (procedural) knowledge also forms part of a person's identity, an action identity, so to speak. Experiments that addressed self-recognition of past actions, prediction, and coordination provide ample evidence for this assumption. The phenomena observed in these experiments can be explained by the assumption that observing an action results in the activation of action representations, the more so, when the action observed corresponds to the way in which the observer would produce it.

  2. Social appraisal influences recognition of emotions.

    Science.gov (United States)

    Mumenthaler, Christian; Sander, David

    2012-06-01

    The notion of social appraisal emphasizes the importance of a social dimension in appraisal theories of emotion by proposing that the way an individual appraises an event is influenced by the way other individuals appraise and feel about the same event. This study directly tested this proposal by asking participants to recognize dynamic facial expressions of emotion (fear, happiness, or anger in Experiment 1; fear, happiness, anger, or neutral in Experiment 2) in a target face presented at the center of a screen while a contextual face, which appeared simultaneously in the periphery of the screen, expressed an emotion (fear, happiness, anger) or not (neutral) and either looked at the target face or not. We manipulated gaze direction to be able to distinguish between a mere contextual effect (gaze away from both the target face and the participant) and a specific social appraisal effect (gaze toward the target face). Results of both experiments provided evidence for a social appraisal effect in emotion recognition, which differed from the mere effect of contextual information: Whereas facial expressions were identical in both conditions, the direction of the gaze of the contextual face influenced emotion recognition. Social appraisal facilitated the recognition of anger, happiness, and fear when the contextual face expressed the same emotion. This facilitation was stronger than the mere contextual effect. Social appraisal also allowed better recognition of fear when the contextual face expressed anger and better recognition of anger when the contextual face expressed fear. 2012 APA, all rights reserved

  3. Effect of positive emotion on consolidation of memory for faces: the modulation of facial valence and facial gender.

    Science.gov (United States)

    Wang, Bo

    2013-01-01

    Studies have shown that emotion elicited after learning enhances memory consolidation. However, no prior studies have used facial photos as stimuli. This study examined the effect of post-learning positive emotion on the consolidation of memory for faces. During the learning phase, participants viewed neutral, positive, or negative faces. They were then assigned to a condition in which they watched either a 9-minute positive video clip or a 9-minute neutral video. Thirty minutes after learning, participants took a surprise memory test in which they made "remember", "know", and "new" judgements. The findings were: (1) positive emotion enhanced consolidation of recognition for negative male faces but impaired consolidation of recognition for negative female faces; (2) for male faces, recognition of negative faces was equivalent to that of positive faces, whereas for female faces, recognition of negative faces was better than that of positive faces. Our study provides important evidence that the effect of post-learning emotion on memory consolidation extends to facial stimuli and that this effect is modulated by facial valence and facial gender. The findings may shed light on establishing models concerning the influence of emotion on memory consolidation.

  4. A recurrent dynamic model for correspondence-based face recognition.

    Science.gov (United States)

    Wolfrum, Philipp; Wolff, Christian; Lücke, Jörg; von der Malsburg, Christoph

    2008-12-29

    Our aim here is to create a fully neural, functionally competitive, and correspondence-based model for invariant face recognition. By recurrently integrating information about feature similarities, spatial feature relations, and facial structure stored in memory, the system evaluates face identity ("what"-information) and face position ("where"-information) using explicit representations for both. The network consists of three functional layers of processing, (1) an input layer for image representation, (2) a middle layer for recurrent information integration, and (3) a gallery layer for memory storage. Each layer consists of cortical columns as functional building blocks that are modeled in accordance with recent experimental findings. In numerical simulations we apply the system to standard benchmark databases for face recognition. We find that recognition rates of our biologically inspired approach lie in the same range as recognition rates of recent and purely functionally motivated systems.
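
    The core idea of correspondence-based recognition, matching local features at corresponding facial landmarks against stored gallery representations, can be illustrated with a minimal sketch. This is not the authors' recurrent column model but a simplified, feed-forward stand-in: each face is reduced to one feature vector ("jet") per landmark, and identity ("what"-information) is read off the best aggregate similarity. All names and dimensions are invented for illustration.

        import numpy as np

        def jet_similarity(j1, j2):
            # Cosine similarity between two local feature vectors ("jets").
            return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-12))

        def match_gallery(probe_jets, gallery):
            # Score each gallery identity by the mean similarity of its jets
            # to the probe's jets at corresponding landmarks; the recurrent
            # integration of spatial relations is omitted in this sketch.
            scores = {identity: np.mean([jet_similarity(p, g)
                                         for p, g in zip(probe_jets, jets)])
                      for identity, jets in gallery.items()}
            return max(scores, key=scores.get), scores

        # Toy usage: 5 landmarks, 40-dimensional jets, 3 gallery identities.
        rng = np.random.default_rng(0)
        gallery = {name: rng.normal(size=(5, 40)) for name in ("A", "B", "C")}
        probe = gallery["B"] + 0.1 * rng.normal(size=(5, 40))  # noisy view of B
        print(match_gallery(probe, gallery)[0])  # -> "B"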

  5. Measuring facial expression of emotion.

    Science.gov (United States)

    Wolf, Karsten

    2015-12-01

    Research into emotions has increased in recent decades, especially on the subject of recognition of emotions. However, studies of the facial expressions of emotion were long compromised by technical problems with visible video analysis and electromyography in experimental settings, which have only recently been overcome. New developments in the field of automated computerized facial recognition allow real-time identification of facial expressions in social environments. This review addresses three approaches to measuring facial expression of emotion and describes their specific contributions to understanding emotion in the healthy population and in persons with mental illness. Despite recent progress, studies on human emotions have been hindered by the lack of consensus on an emotion theory suited to examining the dynamic aspects of emotion and its expression. Studying the expression of emotion in patients with mental health conditions for diagnostic and therapeutic purposes will profit from theoretical and methodological progress.

  6. Context Effects on Facial Affect Recognition in Schizophrenia and Autism: Behavioral and Eye-Tracking Evidence.

    Science.gov (United States)

    Sasson, Noah J; Pinkham, Amy E; Weittenhiller, Lauren P; Faso, Daniel J; Simpson, Claire

    2016-05-01

    Although Schizophrenia (SCZ) and Autism Spectrum Disorder (ASD) share impairments in emotion recognition, the mechanisms underlying these impairments may differ. The current study used the novel "Emotions in Context" task to examine how the interpretation and visual inspection of facial affect is modulated by congruent and incongruent emotional contexts in SCZ and ASD. Both adults with SCZ (n = 44) and those with ASD (n = 21) exhibited reduced affect recognition relative to typically-developing (TD) controls (n = 39) when faces were integrated within broader emotional scenes but not when they were presented in isolation, underscoring the importance of using stimuli that better approximate real-world contexts. Additionally, viewing faces within congruent emotional scenes improved accuracy and visual attention to the face for controls more so than the clinical groups, suggesting that individuals with SCZ and ASD may not benefit from the presence of complementary emotional information as readily as controls. Despite these similarities, important distinctions between SCZ and ASD were found. In every condition, IQ was related to emotion-recognition accuracy for the SCZ group but not for the ASD or TD groups. Further, only the ASD group failed to increase their visual attention to faces in incongruent emotional scenes, suggesting a lower reliance on facial information within ambiguous emotional contexts relative to congruent ones. Collectively, these findings highlight both shared and distinct social cognitive processes in SCZ and ASD that may contribute to their characteristic social disabilities.

  7. The effects of alcohol on the recognition of facial expressions and microexpressions of emotion: enhanced recognition of disgust and contempt.

    Science.gov (United States)

    Felisberti, Fatima; Terry, Philip

    2015-09-01

    The study compared alcohol's effects on the recognition of briefly displayed facial expressions of emotion (so-called microexpressions) with expressions presented for a longer period. Using a repeated-measures design, we tested 18 participants three times (counterbalanced), after (i) a placebo drink, (ii) a low-to-moderate dose of alcohol (0.17 g/kg women; 0.20 g/kg men) and (iii) a moderate-to-high dose of alcohol (0.52 g/kg women; 0.60 g/kg men). In each session, participants were presented with stimuli representing six emotions (happiness, sadness, anger, fear, disgust and contempt) overlaid on a generic avatar in a six-alternative forced-choice paradigm. A neutral expression (1 s) preceded and followed a target expression presented for 200 ms (microexpressions) or 400 ms. Participants mouse-clicked the correct answer. The recognition of disgust was significantly better after the high dose of alcohol than after the low dose or placebo drinks at both durations of stimulus presentation. A similar profile of effects was found for the recognition of contempt. There were no effects on response latencies. Alcohol can increase sensitivity to expressions of disgust and contempt. Such effects are not dependent on stimulus duration up to 400 ms and may reflect contextual modulation of alcohol's effects on emotion recognition.

  8. Using Computerized Games to Teach Face Recognition Skills to Children with Autism Spectrum Disorder: The "Let's Face It!" Program

    Science.gov (United States)

    Tanaka, James W.; Wolf, Julie M.; Klaiman, Cheryl; Koenig, Kathleen; Cockburn, Jeffrey; Herlihy, Lauren; Brown, Carla; Stahl, Sherin; Kaiser, Martha D.; Schultz, Robert T.

    2010-01-01

    Background: An emerging body of evidence indicates that relative to typically developing children, children with autism are selectively impaired in their ability to recognize facial identity. A critical question is whether face recognition skills can be enhanced through a direct training intervention. Methods: In a randomized clinical trial,…

  9. "Now I see it, now I don't": Determining Threshold Levels of Facial Emotion Recognition for Use in Patient Populations.

    Science.gov (United States)

    Chiu, Isabelle; Gfrörer, Regina I; Piguet, Olivier; Berres, Manfred; Monsch, Andreas U; Sollberger, Marc

    2015-08-01

    The importance of including measures of emotion processing, such as tests of facial emotion recognition (FER), as part of a comprehensive neuropsychological assessment is being increasingly recognized. In clinical settings, FER tests need to be sensitive, short, and easy to administer, given the limited time available and patient limitations. Current tests, however, commonly use stimuli that either display prototypical emotions, bearing the risk of ceiling effects and unequal task difficulty, or are cognitively too demanding and time-consuming. To overcome these limitations in FER testing in patient populations, we aimed to define FER threshold levels for the six basic emotions in healthy individuals. Forty-nine healthy individuals between 52 and 79 years of age were asked to identify the six basic emotions at different intensity levels (25%, 50%, 75%, 100%, and 125% of the prototypical emotion). Analyses uncovered differing threshold levels across emotions and the sex of facial stimuli, ranging from 50% to 100% intensity. Using these findings as "healthy population benchmarks", we propose to apply these threshold levels to clinical populations either as facial emotion recognition tasks or as intensity rating tasks. As part of any comprehensive social cognition test battery, this approach should allow for a rapid and sensitive assessment of potential FER deficits.
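
    The graded intensities used in threshold studies like this one (25% up to 125% of the prototypical expression) are typically produced by morphing. A minimal sketch of the idea, assuming pre-aligned grayscale images with values in [0, 1] and plain pixel-wise interpolation (real stimuli would use landmark-based warping); intensities above 100% are obtained by extrapolating past the prototype:

        import numpy as np

        def graded_expression(neutral, prototypical, intensity):
            # Interpolate between an aligned neutral image and a prototypical
            # expression image; intensity > 1.0 extrapolates beyond the
            # prototype (e.g., 1.25 for the 125% level).
            morph = neutral + intensity * (prototypical - neutral)
            return np.clip(morph, 0.0, 1.0)  # keep pixel values in range

        # Toy usage with random "images"; real stimuli would be face photos.
        rng = np.random.default_rng(1)
        neutral = rng.random((64, 64))
        happy = np.clip(neutral + 0.2, 0.0, 1.0)
        levels = [0.25, 0.50, 0.75, 1.00, 1.25]
        stimuli = [graded_expression(neutral, happy, a) for a in levels]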

  10. Mother's Happiness with Cognitive - Executive Functions and Facial Emotional Recognition in School Children with Down Syndrome.

    OpenAIRE

    Maryam Malmir; Maryam Seifenaraghi; Dariush D Farhud; G Ali Afrooz; Mohammad Khanahmadi

    2015-01-01

    Background: Given the mother's key role in bringing up the emotional and cognitive abilities of mentally retarded children, and with respect to positive psychology in recent decades, this research was administered to assess the relation between the mother's happiness level and both cognitive-executive functions (i.e., attention, working memory, inhibition and planning) and facial emotional recognition ability, as two factors in learning and adjustment skills in mentally retarded children with Down syndrome...

  11. A Brief Review of Facial Emotion Recognition Based on Visual Information.

    Science.gov (United States)

    Ko, Byoung Chul

    2018-01-30

    Facial emotion recognition (FER) is an important topic in the fields of computer vision and artificial intelligence owing to its significant academic and commercial potential. Although FER can be conducted using multiple sensors, this review focuses on studies that exclusively use facial images, because visual expressions are one of the main information channels in interpersonal communication. This paper provides a brief review of research in the field of FER conducted over the past decades. First, conventional FER approaches are described along with a summary of the representative categories of FER systems and their main algorithms. Deep-learning-based FER approaches using deep networks enabling "end-to-end" learning are then presented. This review also focuses on an up-to-date hybrid deep-learning approach combining a convolutional neural network (CNN) for the spatial features of an individual frame and long short-term memory (LSTM) for the temporal features of consecutive frames. In the later part of this paper, a brief review of publicly available evaluation metrics is given, and a comparison with benchmark results, which serve as a standard for quantitative comparison of FER research, is described. This review can serve as a brief guidebook for newcomers to the field of FER, providing basic knowledge and a general understanding of the latest state-of-the-art studies, as well as for experienced researchers looking for productive directions for future work.
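
    A minimal sketch of the hybrid CNN+LSTM idea described above, written in PyTorch: a small CNN encodes the spatial features of each frame, and an LSTM integrates those features over the frames of a clip. The layer sizes and the seven-class output are arbitrary illustrative choices, not the architecture of any specific surveyed system.

        import torch
        import torch.nn as nn

        class CnnLstmFER(nn.Module):
            # A small CNN encodes each frame's spatial features; an LSTM
            # integrates them over time; a linear head classifies the clip.
            def __init__(self, num_emotions=7, feat_dim=128):
                super().__init__()
                self.cnn = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
                    nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
                )
                self.lstm = nn.LSTM(feat_dim, 64, batch_first=True)
                self.head = nn.Linear(64, num_emotions)

            def forward(self, clips):               # clips: (B, T, 1, H, W)
                b, t = clips.shape[:2]
                feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
                out, _ = self.lstm(feats)           # (B, T, 64)
                return self.head(out[:, -1])        # logits from last step

        # Toy usage: 2 clips of 8 grayscale 48x48 frames each.
        logits = CnnLstmFER()(torch.randn(2, 8, 1, 48, 48))
        print(logits.shape)                          # torch.Size([2, 7])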

  12. Italian normative data and validation of two neuropsychological tests of face recognition: Benton Facial Recognition Test and Cambridge Face Memory Test.

    Science.gov (United States)

    Albonico, Andrea; Malaspina, Manuela; Daini, Roberta

    2017-09-01

    The Benton Facial Recognition Test (BFRT) and Cambridge Face Memory Test (CFMT) are two of the most common tests used to assess face discrimination and recognition abilities and to identify individuals with prosopagnosia. However, recent studies highlighted that participant-stimulus match ethnicity, as much as gender, has to be taken into account in interpreting results from these tests. Here, in order to obtain more appropriate normative data for an Italian sample, the CFMT and BFRT were administered to a large cohort of young adults. We found that scores from the BFRT are not affected by participants' gender and are only slightly affected by participant-stimulus ethnicity match, whereas both these factors seem to influence the scores of the CFMT. Moreover, the inclusion of a sample of individuals with suspected face recognition impairment allowed us to show that the use of more appropriate normative data can increase the BFRT efficacy in identifying individuals with face discrimination impairments; by contrast, the efficacy of the CFMT in classifying individuals with a face recognition deficit was confirmed. Finally, our data show that the lack of inversion effect (the difference between the total score of the upright and inverted versions of the CFMT) could be used as further index to assess congenital prosopagnosia. Overall, our results confirm the importance of having norms derived from controls with a similar experience of faces as the "potential" prosopagnosic individuals when assessing face recognition abilities.

  13. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms.

  14. Control de accesos mediante reconocimiento facial [Access control by means of facial recognition]

    OpenAIRE

    Rodríguez Rodríguez, Bruno

    2011-01-01

    This paper outlines the work carried out in an attempt to create a facial recognition system.

  15. Emotion recognition in Chinese people with schizophrenia.

    Science.gov (United States)

    Chan, Chetwyn C H; Wong, Raymond; Wang, Kai; Lee, Tatia M C

    2008-01-15

    This study examined whether people with paranoid or nonparanoid schizophrenia would show emotion-recognition deficits, both facial and prosodic. It also examined the neuropsychological predictors of emotion-recognition ability in people with schizophrenia. Participants comprised 86 people: 43 diagnosed with schizophrenia and 43 controls. The clinical participants were placed in either the paranoid group (n = 19) or the nonparanoid group (n = 24). Each participant was administered the Facial Emotion Recognition task and the Prosodic Recognition task, together with other neuropsychological measures of attention and visual perception. People with nonparanoid schizophrenia were found to have deficits in both facial and prosodic emotion recognition, after correction for the differences in intelligence and depression scores between the two groups. Furthermore, spatial perception was observed to be the best predictor of facial emotion identification in individuals with nonparanoid schizophrenia, whereas attentional processing control predicted both prosodic emotion identification and discrimination in nonparanoid schizophrenia patients. Our findings suggest that patients with schizophrenia in remission may still suffer from impairment of certain aspects of emotion recognition.

  16. The impact of limbic system morphology on facial emotion recognition in bipolar I disorder and healthy controls

    Directory of Open Access Journals (Sweden)

    Bio DS

    2013-05-01

    Introduction: Impairments in facial emotion recognition (FER) have been reported in bipolar disorder (BD) subjects during all mood states. This study aims to investigate the impact of limbic system morphology on FER scores in BD subjects and healthy controls. Material and methods: Thirty-nine euthymic BD type I subjects and 40 healthy controls were given a battery of FER tests and examined with 3D structural imaging of the amygdala and hippocampus. Results: The volumes of these structures showed a differential pattern of influence on FER scores in BD subjects and controls. In the control sample, larger left and right amygdalae were associated with poorer recognition of sad faces. In the BD group, amygdala volume had no impact on FER, but left hippocampus volume had a negative impact, and right hippocampus volume a positive impact, on the recognition of happiness. Conclusion: Our results indicate that amygdala and hippocampus volumes have distinct effects on FER in BD subjects compared to controls. Knowledge of the neurobiological basis of the illness may help provide further insights into the role of treatments and psychosocial interventions for BD. Further studies should explore how these effects of amygdala and hippocampus volumes on FER are associated with social networks and social network functioning. Keywords: bipolar disorder, social cognition, facial emotion recognition

  17. Developmental Differences in Holistic Interference of Facial Part Recognition

    Science.gov (United States)

    Nakabayashi, Kazuyo; Liu, Chang Hong

    2013-01-01

    Research has shown that adults' recognition of a facial part can be disrupted if the part is learnt without a face context but tested in a whole face. This has been interpreted as the holistic interference effect. The present study investigated whether children aged 6 and 9–10 years would show a similar effect. Participants were asked to judge whether a probe part was the same as or different from a test part, where the part was presented either in isolation or in a whole face. The results showed that while all the groups were susceptible to holistic interference, the youngest group was the most severely affected. Contrary to the view that piecemeal processing precedes holistic processing in cognitive development, our findings demonstrate that holistic processing is already present at 6 years of age. It is the ability to inhibit the influence of holistic information on piecemeal processing that seems to require a longer period of development, extending into older childhood and adulthood.

  18. Unobtrusive multimodal emotion detection in adaptive interfaces: speech and facial expressions

    NARCIS (Netherlands)

    Truong, K.P.; Leeuwen, D.A. van; Neerincx, M.A.

    2007-01-01

    Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, an overview is given of emotion recognition studies based on a combination of speech and facial expressions. We will identify difficulties concerning data collection, data fusion, system…

  19. Convolutional neural networks with balanced batches for facial expressions recognition

    Science.gov (United States)

    Battini Sönmez, Elena; Cangelosi, Angelo

    2017-03-01

    This paper considers the issue of fully automatic emotion classification on 2D faces. In spite of the great effort made in recent years, traditional machine learning approaches based on hand-crafted feature extraction followed by a classification stage have failed to produce a real-time automatic facial expression recognition system. The proposed architecture uses Convolutional Neural Networks (CNN), which are built as a collection of interconnected processing elements loosely modelled on the brain of human beings. The basic idea of CNNs is to learn a hierarchical representation of the input data, which results in better classification performance. In this work we present a block-based CNN algorithm, which uses noise as a data augmentation technique and builds batches with a balanced number of samples per class. The proposed architecture is a simple yet powerful CNN that can yield state-of-the-art accuracy on the very competitive Extended Cohn-Kanade database benchmark.
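
    A minimal sketch of the batch-construction idea, assuming feature vectors already extracted from face images: each batch contains the same number of samples per emotion class, and Gaussian noise is added as a simple augmentation. The sampler and the noise scale are illustrative stand-ins, not the paper's exact procedure.

        import numpy as np

        def balanced_batches(features, labels, per_class, rng):
            # Yield batches with an equal number of samples per class, each
            # perturbed with Gaussian noise as a simple augmentation. Within
            # a batch, sampling is without replacement; across batches,
            # samples may recur (a simplification).
            classes = np.unique(labels)
            by_class = {c: np.flatnonzero(labels == c) for c in classes}
            steps = min(len(i) for i in by_class.values()) // per_class
            for _ in range(steps):
                idx = np.concatenate([rng.choice(by_class[c], per_class, replace=False)
                                      for c in classes])
                rng.shuffle(idx)
                noisy = features[idx] + rng.normal(scale=0.05, size=features[idx].shape)
                yield noisy, labels[idx]

        # Toy usage: six emotion classes with unequal counts.
        rng = np.random.default_rng(2)
        labels = np.repeat(np.arange(6), [30, 80, 55, 40, 90, 60])
        features = rng.random((labels.size, 48 * 48))
        for x, y in balanced_batches(features, labels, per_class=4, rng=rng):
            assert all((y == c).sum() == 4 for c in range(6))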

  1. Unsupervised learning of facial emotion decoding skills.

    Science.gov (United States)

    Huelle, Jan O; Sack, Benjamin; Broer, Katja; Komlewa, Irina; Anders, Silke

    2014-01-01

    Research on the mechanisms underlying human facial emotion recognition has long focussed on genetically determined neural algorithms and often neglected the question of how these algorithms might be tuned by social learning. Here we show that facial emotion decoding skills can be significantly and sustainably improved by practice without an external teaching signal. Participants saw video clips of dynamic facial expressions of five different women and were asked to decide which of four possible emotions (anger, disgust, fear, and sadness) was shown in each clip. Although no external information about the correctness of the participant's response or the sender's true affective state was provided, participants showed a significant increase of facial emotion recognition accuracy both within and across two training sessions two days to several weeks apart. We discuss several similarities and differences between the unsupervised improvement of facial decoding skills observed in the current study, unsupervised perceptual learning of simple stimuli described in previous studies and practice effects often observed in cognitive tasks.

  2. Evolution of facial color pattern complexity in lemurs.

    Science.gov (United States)

    Rakotonirina, Hanitriniaina; Kappeler, Peter M; Fichtel, Claudia

    2017-11-09

    Interspecific variation in facial color patterns across New and Old World primates has been linked to species recognition and group size. Because group size has opposite effects on interspecific variation in facial color patterns in these two radiations, a study of the third large primate radiation may shed light on convergences and divergences in this context. We therefore compiled published social and ecological data and analyzed facial photographs of 65 lemur species to categorize variation in hair length, hair and skin coloration as well as color brightness. Phylogenetically controlled analyses revealed that group size and the number of sympatric species did not influence the evolution of facial color complexity in lemurs. Climatic factors, however, influenced facial color complexity, pigmentation and hair length in a few facial regions. Hair length in two facial regions was also correlated with group size and may facilitate individual recognition. Since phylogenetic signals were moderate to high for most models, genetic drift may have also played a role in the evolution of facial color patterns of lemurs. In conclusion, social factors seem to have played only a subordinate role in the evolution of facial color complexity in lemurs, and, more generally, group size appears to have no systematic functional effect on facial color complexity across all primates.

  3. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-signals, speech and facial expression being two of them. Both are regarded as emotional information that plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system was developed and is presented in this paper. The system is designed for real-time operation, which is supported by several integrated modules: speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier. Rough set-based feature selection is a good method for dimension reduction, so 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected once speech and video are synchronized and fused. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also show that multimodule fused recognition will become the trend of emotion recognition in the future.
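
    A minimal sketch of feature-level audiovisual fusion under the stated feature counts (13 of 37 speech features, 10 of 33 facial features). The class-separability score used here is an illustrative stand-in for the rough set-based reduction in the paper, and all data are synthetic.

        import numpy as np

        def select_features(X, y, k):
            # Rank features by a crude class-separability score (a stand-in
            # for rough set-based reduction) and keep the top k.
            overall = X.mean(axis=0)
            score = sum(np.abs(X[y == c].mean(axis=0) - overall)
                        for c in np.unique(y))
            return np.argsort(score)[-k:]

        def fuse(speech, video, y, k_speech=13, k_video=10):
            # Feature-level fusion: select per-modality features, then
            # concatenate them into one audiovisual vector per sample.
            s_idx = select_features(speech, y, k_speech)
            v_idx = select_features(video, y, k_video)
            return np.hstack([speech[:, s_idx], video[:, v_idx]])

        # Toy usage: 37 speech and 33 facial features, six emotion classes.
        rng = np.random.default_rng(3)
        y = rng.integers(0, 6, size=200)
        speech, video = rng.random((200, 37)), rng.random((200, 33))
        print(fuse(speech, video, y).shape)          # (200, 23)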

  4. Developing a Natural User Interface and Facial Recognition System With OpenCV and the Microsoft Kinect

    Science.gov (United States)

    Gutensohn, Michael

    2018-01-01

    The task for this project was to design, develop, test, and deploy a facial recognition system for the Kennedy Space Center Augmented/Virtual Reality Lab. This system will serve as a means of user authentication as part of the NUI of the lab. The overarching goal is to create a seamless user interface that will allow the user to initiate and interact with AR and VR experiences without ever needing to use a mouse or keyboard at any step in the process.
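
    A minimal sketch of how such an authentication step might be assembled with OpenCV, using a Haar cascade for detection and the LBPH recognizer (from the opencv-contrib package) for identification. The Kinect integration, the enrollment source, and the distance threshold are assumptions for illustration; the report does not specify its pipeline.

        import cv2
        import numpy as np

        # Haar cascade for detection; LBPH recognizer for identification.
        # cv2.face ships with the opencv-contrib-python package.
        detector = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        recognizer = cv2.face.LBPHFaceRecognizer_create()

        def crop_face(gray):
            # Return the first detected face region, resized for the model.
            faces = detector.detectMultiScale(gray, scaleFactor=1.1,
                                              minNeighbors=5)
            if len(faces) == 0:
                return None
            x, y, w, h = faces[0]
            return cv2.resize(gray[y:y + h, x:x + w], (100, 100))

        def enroll(labelled_images):
            # labelled_images: (grayscale uint8 image, integer user id) pairs,
            # e.g. frames captured from the Kinect's RGB stream (hypothetical).
            samples, ids = [], []
            for img, user_id in labelled_images:
                face = crop_face(img)
                if face is not None:
                    samples.append(face)
                    ids.append(user_id)
            recognizer.train(samples, np.array(ids))

        def authenticate(img, threshold=70.0):
            # Lower LBPH distance means a closer match; the threshold is an
            # assumption to be tuned on enrollment data.
            face = crop_face(img)
            if face is None:
                return None
            user_id, distance = recognizer.predict(face)
            return user_id if distance < threshold else None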

  5. Fully Automatic Recognition of the Temporal Phases of Facial Actions

    NARCIS (Netherlands)

    Valstar, M.F.; Pantic, Maja

    Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)]…

  6. Altering sensorimotor feedback disrupts visual discrimination of facial expressions.

    Science.gov (United States)

    Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula

    2016-08-01

    Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual, and not just conceptual, processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

  7. Web-based Visualisation of Head Pose and Facial Expressions Changes

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2016-01-01

    Despite significant recent advances in the field of head pose estimation and facial expression recognition, raising the cognitive level when analysing human activity presents serious challenges to current concepts. Motivated by the need to generate comprehensible visual representations from… and accurately estimate head pose changes in an unconstrained environment. In order to complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. After that, a lightweight data…

  8. Face Detection and Recognition

    National Research Council Canada - National Science Library

    Jain, Anil K

    2004-01-01

    This report describes research efforts towards developing algorithms for a robust face recognition system to overcome many of the limitations found in existing two-dimensional facial recognition systems...

  9. Wanting it Too Much: An Inverse Relation Between Social Motivation and Facial Emotion Recognition in Autism Spectrum Disorder

    OpenAIRE

    Garman, Heather D.; Spaulding, Christine J.; Webb, Sara Jane; Mikami, Amori Yee; Morris, James P.; Lerner, Matthew D.

    2016-01-01

    This study examined social motivation and early-stage face perception as frameworks for understanding impairments in facial emotion recognition (FER) in a well-characterized sample of youth with autism spectrum disorders (ASD). Early-stage face perception (N170 event-related potential latency) was recorded while participants completed a standardized FER task, while social motivation was obtained via parent report. Participants with greater social motivation exhibited poorer FER, while those w...

  10. Identity, recognition and redistribution: a critical analysis of Charles Taylor, Axel Honneth and Nancy Fraser’s theories

    Directory of Open Access Journals (Sweden)

    Javier Amadeo

    2017-06-01

    http://dx.doi.org/10.5007/2175-7984.2017v16n35p242. The politics of identity and the idea of recognition have become dominant issues in contemporary political theory. Recognition, as a concept, means that an individual or a social group claims the right to have their identity recognized, directly or through the mediation of a set of institutions. The theories that have evaluated these questions address both important theoretical issues and central political subjects, such as the definition of minority rights, national self-determination claims, and the challenges posed by our increasingly multicultural societies. The main objective of this paper is to discuss the central arguments presented by Charles Taylor, Axel Honneth and Nancy Fraser, emphasizing the discussion around the relationship between recognition and redistribution. A more specific purpose is to analyze the relation between the question of injustice based on the demand for identity and the problem of economic inequality. Finally, we try to understand some of the theoretical and political implications of the idea of difference and recognition theory in a broader conceptual perspective.

  11. Misinterpretation of facial expression: a cross-cultural study.

    Science.gov (United States)

    Shioiri, T; Someya, T; Helmeste, D; Tang, S W

    1999-02-01

    Accurately recognizing facial emotional expressions is important in psychiatrist-patient interactions. This might be difficult when the physician and patients are from different cultures. More than two decades of research on facial expressions have documented the universality of the emotions of anger, contempt, disgust, fear, happiness, sadness, and surprise. In contrast, some research data supported the concept that there are significant cultural differences in the judgment of emotion. In this pilot study, the recognition of emotional facial expressions by 123 Japanese subjects was evaluated using the Japanese and Caucasian Facial Expression of Emotion (JACFEE) photos. The results indicated that Japanese subjects experienced difficulties in recognizing some emotional facial expressions and misunderstood others as depicted by the posers, when compared to previous studies using American subjects. Interestingly, the sex and cultural background of the poser did not appear to influence the accuracy of recognition. The data suggest that in this young Japanese sample, judgment of certain emotional facial expressions differed significantly from that of the Americans. Further exploration in this area is warranted due to its importance in cross-cultural clinician-patient interactions.

  12. The role of recognition and interest in physics identity development

    Science.gov (United States)

    Lock, Robynne

    2016-03-01

    While the number of students earning bachelor's degrees in physics has increased in recent years, this number has only recently surpassed the peak value of the 1960s. Additionally, the percentage of women earning bachelor's degrees in physics has stagnated for the past 10 years and may even be declining. We use a physics identity framework consisting of three dimensions to understand how students make their initial career decisions at the end of high school and the beginning of college. The three dimensions consist of recognition (perception that teachers, parents, and peers see the student as a "physics person"), interest (desire to learn more about physics), and performance/competence (perception of abilities to complete physics-related tasks and to understand physics). Using data from the Sustainability and Gender in Engineering survey administered to a nationally representative sample of college students, we built a regression model to determine which identity dimensions have the largest effect on physics career choice and a structural equation model to understand how the identity dimensions are related. Additionally, we used regression models to identify teaching strategies that predict each identity dimension.
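
    The kind of regression described here can be sketched with a simple logistic model relating the three identity dimensions to a binary career-choice indicator. The data below are synthetic and the coefficients are invented; this only illustrates the modeling step, not the survey's actual results.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Hypothetical survey data: each identity dimension scored 1-5 and a
        # binary indicator of choosing a physics career; the coefficients
        # below are invented purely to generate illustrative data.
        rng = np.random.default_rng(4)
        n = 500
        df = pd.DataFrame({
            "recognition": rng.integers(1, 6, n),
            "interest": rng.integers(1, 6, n),
            "performance": rng.integers(1, 6, n),
        })
        logit_p = (-6 + 0.9 * df["recognition"] + 0.6 * df["interest"]
                   + 0.3 * df["performance"])
        df["physics_career"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

        # Logistic regression: which dimension best predicts career choice?
        X = sm.add_constant(df[["recognition", "interest", "performance"]])
        result = sm.Logit(df["physics_career"], X).fit(disp=0)
        print(result.params)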

  13. You are that smiling guy I met at the party! Socially positive signals foster memory for identities and contexts.

    Science.gov (United States)

    Righi, Stefania; Gronchi, Giorgio; Marzi, Tessa; Rebai, Mohamed; Viggiano, Maria Pia

    2015-07-01

    The emotional influence of facial expressions on memory is well known, whereas the influence of emotional contextual information on memory for emotional faces has yet to be extensively explored. This study investigated the interplay between facial expression and the emotional surrounding context in affecting both memory for identities (item memory) and memory for associative backgrounds (source memory). At encoding, fearful and happy faces were presented embedded in fearful or happy scenes (i.e., fearful faces in fear-scenes, happy faces in happy-scenes, fearful faces in happy-scenes, and happy faces in fear-scenes), and participants were asked to judge the emotional congruency of the face-scene compounds (fearful faces in fear-scenes and happy faces in happy-scenes were congruent compounds). In the recognition phase, the old faces were intermixed with new ones; all faces were presented in isolation with a neutral expression. Participants were asked to indicate whether each face had been previously presented (item memory). Then, for each old face, memory for the scene originally compounded with the face was tested with a three-alternative forced-choice recognition task (source memory). The results showed that face identity memory is differentially modulated by valence in congruent face-context compounds, with better identity recognition (item memory) for happy faces encoded in happy scenarios. Moreover, memory for the surrounding context (source memory) also benefits from the association with a smiling face. Our findings highlight that socially positive signals conveyed by smiling faces may prompt memory for identity and context.

  14. Predictive Coding Strategies for Invariant Object Recognition and Volitional Motion Control in Neuromorphic Agents

    Science.gov (United States)

    2015-09-02

    A model for scene understanding based on deep convolutional neural networks was proposed to improve recognition accuracy. A deep-learning-based model for facial expression recognition was also formulated; it could recognize the emotional status of people regardless of…

  15. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    Science.gov (United States)

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  16. Cross-cultural evaluations of avatar facial expressions designed by Western and Japanese Designers

    DEFF Research Database (Denmark)

    Koda, Tomoko; Rehm, Matthias; André, Elisabeth

    2008-01-01

    The goal of the study is to investigate cultural differences in avatar expression evaluation and apply findings from psychological studies of human facial expression recognition. Our previous study using Japanese-designed avatars showed that there are cultural differences in interpreting avatar facial expressions, and that the psychological theory suggesting physical proximity affects facial expression recognition accuracy is also applicable to avatar facial expressions. This paper summarizes the early results of a successive experiment that uses Western-designed avatars. We observed tendencies of cultural…

  17. Mapping the impairment in decoding static facial expressions of emotion in prosopagnosia.

    Science.gov (United States)

    Fiset, Daniel; Blais, Caroline; Royer, Jessica; Richoz, Anne-Raphaëlle; Dugas, Gabrielle; Caldara, Roberto

    2017-08-01

    Acquired prosopagnosia is characterized by a deficit in face recognition due to diverse brain lesions, but interestingly most prosopagnosic patients suffering from posterior lesions use the mouth instead of the eyes for face identification. Whether this bias is present for the recognition of facial expressions of emotion has not yet been addressed. We tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions dedicated for facial expression recognition. PS used mostly the mouth to recognize facial expressions even when the eye area was the most diagnostic. Moreover, PS directed most of her fixations towards the mouth. Her impairment was still largely present when she was instructed to look at the eyes, or when she was forced to look at them. Control participants showed a performance comparable to PS when only the lower part of the face was available. These observations suggest that the deficits observed in PS with static images are not solely attentional, but are rooted at the level of facial information use. This study corroborates neuroimaging findings suggesting that the Occipital Face Area might play a critical role in extracting facial features that are integrated for both face identification and facial expression recognition in static images.

  1. Ventromedial prefrontal cortex mediates visual attention during facial emotion recognition.

    Science.gov (United States)

    Wolf, Richard C; Philippi, Carissa L; Motzkin, Julian C; Baskaya, Mustafa K; Koenigs, Michael

    2014-06-01

    The ventromedial prefrontal cortex is known to play a crucial role in regulating human social and emotional behaviour, yet the precise mechanisms by which it subserves this broad function remain unclear. Whereas previous neuropsychological studies have largely focused on the role of the ventromedial prefrontal cortex in higher-order deliberative processes related to valuation and decision-making, here we test whether ventromedial prefrontal cortex may also be critical for more basic aspects of orienting attention to socially and emotionally meaningful stimuli. Using eye tracking during a test of facial emotion recognition in a sample of lesion patients, we show that bilateral ventromedial prefrontal cortex damage impairs visual attention to the eye regions of faces, particularly for fearful faces. This finding demonstrates a heretofore unrecognized function of the ventromedial prefrontal cortex: the basic attentional process of controlling eye movements to faces expressing emotion.

  2. Vision-Based Navigation and Recognition

    National Research Council Canada - National Science Library

    Rosenfeld, Azriel

    1998-01-01

    .... (4) Invariants: both geometric and other types. (5) Human faces: Analysis of images of human faces, including feature extraction, face recognition, compression, and recognition of facial expressions...

  3. Vision-Based Navigation and Recognition

    National Research Council Canada - National Science Library

    Rosenfeld, Azriel

    1996-01-01

    .... (4) Invariants -- both geometric and other types. (5) Human faces: Analysis of images of human faces, including feature extraction, face recognition, compression, and recognition of facial expressions...

  4. Emotional Faces in Context: Age Differences in Recognition Accuracy and Scanning Patterns

    Science.gov (United States)

    Noh, Soo Rim; Isaacowitz, Derek M.

    2014-01-01

    While age-related declines in facial expression recognition are well documented, previous research relied mostly on isolated faces devoid of context. We investigated the effects of context on age differences in recognition of facial emotions and in visual scanning patterns of emotional faces. While their eye movements were monitored, younger and older participants viewed facial expressions (i.e., anger, disgust) in contexts that were emotionally congruent, incongruent, or neutral to the facial expression to be identified. Both age groups had highest recognition rates of facial expressions in the congruent context, followed by the neutral context, and recognition rates in the incongruent context were worst. These context effects were more pronounced for older adults. Compared to younger adults, older adults exhibited a greater benefit from congruent contextual information, regardless of facial expression. Context also influenced the pattern of visual scanning characteristics of emotional faces in a similar manner across age groups. In addition, older adults initially attended more to context overall. Our data highlight the importance of considering the role of context in understanding emotion recognition in adulthood.

  5. Hybrid generative-discriminative approach to age-invariant face recognition

    Science.gov (United States)

    Sajid, Muhammad; Shafique, Tamoor

    2018-03-01

    Age-invariant face recognition is still a challenging research problem due to the complex aging process involving types of facial tissues, skin, fat, muscles, and bones. Most of the related studies that have addressed the aging problem focus on generative representation (aging simulation) or discriminative representation (feature-based approaches). Designing an appropriate hybrid approach taking into account both the generative and discriminative representations for age-invariant face recognition remains an open problem. We perform a hybrid matching to achieve robustness to aging variations. This approach automatically segments the eyes, nose-bridge, and mouth regions, which are relatively less sensitive to aging variations compared with the rest of the facial regions, which are age-sensitive. The aging variations of age-sensitive facial parts are compensated using a demographic-aware generative model based on a bridged denoising autoencoder. The age-insensitive facial parts are represented by pixel average vector-based local binary patterns. Deep convolutional neural networks are used to extract relative features of age-sensitive and age-insensitive facial parts. Finally, the feature vectors of age-sensitive and age-insensitive facial parts are fused to achieve the recognition results. Extensive experimental results on the morphological face database II (MORPH II), the face and gesture recognition network (FG-NET) database, and the verification subset of the cross-age celebrity dataset (CACD-VS) demonstrate the effectiveness of the proposed method for age-invariant face recognition.
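
    A minimal sketch of the descriptor-fusion step: local binary pattern histograms are computed for the age-insensitive parts and concatenated with an embedding standing in for the autoencoder/CNN output on the age-compensated parts, and identities are compared with cosine similarity. The patch sizes, the 128-dimensional embedding, and the plain (rather than pixel-average-vector) LBP are simplifications.

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_histogram(region, points=8, radius=1):
            # Uniform LBP histogram of a grayscale (uint8) facial region.
            lbp = local_binary_pattern(region, points, radius, method="uniform")
            hist, _ = np.histogram(lbp, bins=points + 2,
                                   range=(0, points + 2), density=True)
            return hist

        def hybrid_descriptor(age_insensitive_parts, compensated_embedding):
            # Fuse LBP features of the age-insensitive parts (eyes, nose
            # bridge, mouth) with an embedding standing in for the output of
            # the generative stage on the age-compensated parts.
            lbp_feats = np.concatenate([lbp_histogram(p) for p in age_insensitive_parts])
            return np.concatenate([lbp_feats, compensated_embedding])

        def cosine_score(d1, d2):
            return float(d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2) + 1e-12))

        # Toy usage: three random uint8 patches and a random 128-d embedding.
        rng = np.random.default_rng(5)
        parts = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(3)]
        probe = hybrid_descriptor(parts, rng.normal(size=128))
        candidate = hybrid_descriptor(parts, rng.normal(size=128))
        print(cosine_score(probe, candidate))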

  6. Deficits in Facial Emotion Recognition Indicate Behavioral Changes and Impaired Self-Awareness after Moderate to Severe Traumatic Brain Injury

    OpenAIRE

    Spikman, Jacoba M.; Milders, Maarten V.; Visser-Keizer, Annemarie C.; Westerhof-Evers, Herma J.; Herben-Dekker, Meike; van der Naalt, Joukje

    2013-01-01

    Traumatic brain injury (TBI) is a leading cause of disability, specifically among younger adults. Behavioral changes are common after moderate to severe TBI and have adverse consequences for social and vocational functioning. It is hypothesized that deficits in social cognition, including facial affect recognition, might underlie these behavioral changes. Measurement of behavioral deficits is complicated, because the rating scales used rely on subjective judgement, often lack specificity and ...

  7. Face memory and face recognition in children and adolescents with attention deficit hyperactivity disorder: A systematic review.

    Science.gov (United States)

    Romani, Maria; Vigliante, Miriam; Faedda, Noemi; Rossetti, Serena; Pezzuti, Lina; Guidetti, Vincenzo; Cardona, Francesco

    2018-06-01

    This review focuses on facial recognition abilities in children and adolescents with attention deficit hyperactivity disorder (ADHD). A systematic review, using PRISMA guidelines, was conducted to identify original articles published prior to May 2017 pertaining to memory, face recognition, affect recognition, facial expression recognition and recall of faces in children and adolescents with ADHD. The qualitative synthesis shows that research has focused mainly on facial affect recognition without paying similar attention to the structural encoding of faces. In this review, we further investigate facial recognition abilities in children and adolescents with ADHD, providing a synthesis of the results observed in the literature, while examining the face recognition tasks used to assess face processing abilities in ADHD and identifying aspects not yet explored.

  8. Serotonin transporter gene-linked polymorphism affects detection of facial expressions.

    Directory of Open Access Journals (Sweden)

    Ai Koizumi

    Previous studies have demonstrated that the serotonin transporter gene-linked polymorphic region (5-HTTLPR) affects the recognition of facial expressions and attention to them. However, the relationship between 5-HTTLPR and the perceptual detection of others' facial expressions, the process which takes place prior to emotional labeling (i.e., recognition), is not clear. To examine whether the perceptual detection of emotional facial expressions is influenced by the allelic variation (short/long) of 5-HTTLPR, happy and sad facial expressions were presented at weak and mid intensities (25% and 50%). Ninety-eight participants, genotyped for 5-HTTLPR, judged whether emotion in images of faces was present. Participants with short alleles showed higher sensitivity (d') to happy than to sad expressions, while participants with long allele(s) showed no such positivity advantage. This effect of 5-HTTLPR was found at different facial expression intensities among males and females. The results suggest that at the perceptual stage, a short allele enhances the processing of positive facial expressions rather than that of negative facial expressions.
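
    The sensitivity index d' reported in such detection studies is computed from hit and false-alarm rates as d' = z(H) - z(FA). A minimal sketch with made-up trial counts and a standard log-linear correction to keep both rates away from 0 and 1:

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            # d' = z(hit rate) - z(false-alarm rate), with a log-linear
            # correction keeping both rates away from 0 and 1.
            hit_rate = (hits + 0.5) / (hits + misses + 1)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # Made-up counts for detecting a happy expression at 25% intensity.
        print(round(d_prime(hits=30, misses=10,
                            false_alarms=8, correct_rejections=32), 2))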

  9. Brain Network Involved in the Recognition of Facial Expressions of Emotion in the Early Blind

    Directory of Open Access Journals (Sweden)

    Ryo Kitada

    2011-10-01

    Previous studies suggest that the brain network responsible for the recognition of facial expressions of emotion (FEEs) begins to emerge early in life. However, it has been unclear whether visual experience of faces is necessary for the development of this network. Here, we conducted both psychophysical and functional magnetic resonance imaging (fMRI) experiments to test the hypothesis that the brain network underlying the recognition of FEEs is not dependent on visual experience of faces. Early-blind, late-blind and sighted subjects participated in the psychophysical experiment. Regardless of group, subjects haptically identified basic FEEs at above-chance levels, without any feedback training. In the subsequent fMRI experiment, the early-blind and sighted subjects haptically identified facemasks portraying three different FEEs and casts of three different shoe types. The sighted subjects also completed a visual task that compared the same stimuli. Within the brain regions activated by the visually identified FEEs (relative to shoes), haptic identification of FEEs (relative to shoes) by the early-blind and sighted individuals activated the posterior middle temporal gyrus adjacent to the superior temporal sulcus, the inferior frontal gyrus, and the fusiform gyrus. Collectively, these results suggest that the brain network responsible for FEE recognition can develop without any visual experience of faces.

  10. Ethical aspects of face recognition systems in public places.

    NARCIS (Netherlands)

    Brey, Philip A.E.

    2004-01-01

    This essay examines ethical aspects of the use of facial recognition technology for surveillance purposes in public and semipublic areas, focusing particularly on the balance between security and privacy and civil liberties. As a case study, the FaceIt facial recognition engine of Identix

  11. Similarity measures for face recognition

    CERN Document Server

    Vezzetti, Enrico

    2015-01-01

    Face recognition has several applications, including security (authentication and identification of device users and criminal suspects) and medicine (corrective surgery and diagnosis). Facial recognition programs rely on algorithms that can compare and compute the similarity between two sets of images. This eBook explains some of the similarity measures used in facial recognition systems in a single volume. Readers will learn about various measures, including Minkowski distances, Mahalanobis distances, Hausdorff distances, and cosine-based distances, among other methods. The book also summarizes errors that may occur in face recognition methods. Computer scientists "facing face" and looking to select and test different methods of computing similarities will benefit from this book. The book is also a useful tool for students undertaking computer vision courses.
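
For concreteness, here is a minimal sketch of three of the measures the book covers, applied to hypothetical face feature vectors (stand-ins for whatever features a recognition pipeline extracts):

```python
# Three similarity/distance measures commonly used to compare face feature vectors.
# The feature vectors and gallery below are hypothetical placeholders.
import numpy as np
from scipy.spatial import distance

a = np.array([0.12, 0.80, 0.33, 0.51])  # features of face image A
b = np.array([0.10, 0.75, 0.40, 0.48])  # features of face image B

print(distance.minkowski(a, b, p=2))    # Minkowski distance (p=2 gives Euclidean)
print(distance.cosine(a, b))            # cosine distance = 1 - cosine similarity

# Mahalanobis distance needs the inverse covariance of the feature distribution,
# estimated here from a hypothetical gallery of feature vectors.
gallery = np.random.default_rng(0).normal(size=(100, 4))
inv_cov = np.linalg.inv(np.cov(gallery, rowvar=False))
print(distance.mahalanobis(a, b, inv_cov))
```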

  12. Individual Differences in the Speed of Facial Emotion Recognition Show Little Specificity but Are Strongly Related with General Mental Speed: Psychometric, Neural and Genetic Evidence

    Directory of Open Access Journals (Sweden)

    Xinyang Liu

    2017-08-01

    Full Text Available Facial identity and facial expression processing are crucial socio-emotional abilities but seem to show only limited psychometric uniqueness when the processing speed is considered in easy tasks. We applied a comprehensive measurement of processing speed and contrasted performance specificity in socio-emotional, social and non-social stimuli from an individual differences perspective. Performance in a multivariate task battery could be best modeled by a general speed factor and a first-order factor capturing some specific variance due to processing emotional facial expressions. We further tested equivalence of the relationships between speed factors and polymorphisms of dopamine and serotonin transporter genes. Results show that the speed factors are not only psychometrically equivalent but invariant in their relation with the Catechol-O-Methyl-Transferase (COMT) Val158Met polymorphism. However, the 5-HTTLPR/rs25531 serotonin polymorphism was related with the first-order factor of emotion perception speed, suggesting a specific genetic correlate of processing emotions. We further investigated the relationship between several components of event-related brain potentials with psychometric abilities, and tested emotion specific individual differences at the neurophysiological level. Results revealed swifter emotion perception abilities to go along with larger amplitudes of the P100 and the Early Posterior Negativity (EPN), when emotion processing was modeled on its own. However, after partialling out the shared variance of emotion perception speed with general processing speed-related abilities, brain-behavior relationships did not remain specific for emotion. Together, the present results suggest that speed abilities are strongly interrelated but show some specificity for emotion processing speed at the psychometric level. At both genetic and neurophysiological levels, emotion specificity depended on whether general cognition is taken into account.

  13. Individual Differences in the Speed of Facial Emotion Recognition Show Little Specificity but Are Strongly Related with General Mental Speed: Psychometric, Neural and Genetic Evidence

    Science.gov (United States)

    Liu, Xinyang; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Cai, Xinxia; Wilhelm, Oliver

    2017-01-01

    Facial identity and facial expression processing are crucial socio-emotional abilities but seem to show only limited psychometric uniqueness when the processing speed is considered in easy tasks. We applied a comprehensive measurement of processing speed and contrasted performance specificity in socio-emotional, social and non-social stimuli from an individual differences perspective. Performance in a multivariate task battery could be best modeled by a general speed factor and a first-order factor capturing some specific variance due to processing emotional facial expressions. We further tested equivalence of the relationships between speed factors and polymorphisms of dopamine and serotonin transporter genes. Results show that the speed factors are not only psychometrically equivalent but invariant in their relation with the Catechol-O-Methyl-Transferase (COMT) Val158Met polymorphism. However, the 5-HTTLPR/rs25531 serotonin polymorphism was related with the first-order factor of emotion perception speed, suggesting a specific genetic correlate of processing emotions. We further investigated the relationship between several components of event-related brain potentials with psychometric abilities, and tested emotion specific individual differences at the neurophysiological level. Results revealed swifter emotion perception abilities to go along with larger amplitudes of the P100 and the Early Posterior Negativity (EPN), when emotion processing was modeled on its own. However, after partialling out the shared variance of emotion perception speed with general processing speed-related abilities, brain-behavior relationships did not remain specific for emotion. Together, the present results suggest that speed abilities are strongly interrelated but show some specificity for emotion processing speed at the psychometric level. At both genetic and neurophysiological levels, emotion specificity depended on whether general cognition is taken into account or not.

  14. Perceived Parenting Mediates Serotonin Transporter Gene (5-HTTLPR) and Neural System Function during Facial Recognition: A Pilot Study

    Science.gov (United States)

    Nishikawa, Saori

    2015-01-01

    This study examined changes in prefrontal oxy-Hb levels measured by NIRS (Near-Infrared Spectroscopy) during a facial-emotion recognition task in healthy adults, testing a mediational/moderational model of these variables. Fifty-three healthy adults (male = 35, female = 18) aged 22 to 37 years (mean age = 24.05 years) provided saliva samples, completed an EMBU questionnaire (Swedish acronym for Egna Minnen Beträffande Uppfostran [My memories of upbringing]), and participated in a facial-emotion recognition task during NIRS recording. There was a main effect of maternal rejection on RoxH (right frontal activation during an ambiguous task), and a gene × environment (G×E) interaction on RoxH, suggesting that individuals who carry the SL or LL genotype and who endorse greater perceived maternal rejection show less right frontal activation than SL/LL carriers with lower perceived maternal rejection. Finally, perceived parenting style played a mediating role in right frontal activation via the 5-HTTLPR genotype. Early-perceived parenting might influence neural activity in an uncertain situation, i.e., rating ambiguous faces, among individuals with certain genotypes. This preliminary study makes a small contribution to mapping the influence of genes and behaviour on the neural system. More such attempts should be made in order to clarify the links. PMID:26418317

  15. Perceived Parenting Mediates Serotonin Transporter Gene (5-HTTLPR) and Neural System Function during Facial Recognition: A Pilot Study.

    Directory of Open Access Journals (Sweden)

    Saori Nishikawa

    Full Text Available This study examined changes in prefrontal oxy-Hb levels measured by NIRS (Near-Infrared Spectroscopy) during a facial-emotion recognition task in healthy adults, testing a mediational/moderational model of these variables. Fifty-three healthy adults (male = 35, female = 18) aged 22 to 37 years (mean age = 24.05 years) provided saliva samples, completed an EMBU questionnaire (Swedish acronym for Egna Minnen Beträffande Uppfostran [My memories of upbringing]), and participated in a facial-emotion recognition task during NIRS recording. There was a main effect of maternal rejection on RoxH (right frontal activation during an ambiguous task), and a gene × environment (G × E) interaction on RoxH, suggesting that individuals who carry the SL or LL genotype and who endorse greater perceived maternal rejection show less right frontal activation than SL/LL carriers with lower perceived maternal rejection. Finally, perceived parenting style played a mediating role in right frontal activation via the 5-HTTLPR genotype. Early-perceived parenting might influence neural activity in an uncertain situation, i.e., rating ambiguous faces, among individuals with certain genotypes. This preliminary study makes a small contribution to mapping the influence of genes and behaviour on the neural system. More such attempts should be made in order to clarify the links.

  16. Composite multilobe descriptors for cross-spectral recognition of full and partial face

    Science.gov (United States)

    Cao, Zhicheng; Schmid, Natalia A.; Bourlai, Thirimachos

    2016-08-01

    Cross-spectral image matching is a challenging research problem motivated by various applications, including surveillance, security, and identity management in general. An example of this problem is cross-spectral matching of active infrared (IR) or thermal IR face images against a dataset of visible light images. A summary of the authors' recent developments in the field of cross-spectral face recognition is presented. In particular, it describes the original form and two variants of a local operator named the composite multilobe descriptor (CMLD) for facial feature extraction, with the purpose of cross-spectral matching of near-IR, short-wave IR, mid-wave IR, and long-wave IR images to a gallery of visible light images. The experiments demonstrate that the variants of CMLD outperform the original CMLD and other recently developed composite operators used for comparison. In addition to different IR spectra, various standoff distances from close-up (1.5 m) to intermediate (50 m) and long (106 m) are also investigated. The performance of CMLD I to III is evaluated for each of the three distance cases. The newly developed operators, CMLD I to III, are further used in a study of cross-spectral partial face recognition, in which different facial regions are compared in terms of the amount of useful information they contain for cross-spectral face recognition. The experimental results show that, among the three facial regions considered in the experiments, the eye region is the most informative for all IR spectra at all standoff distances.

  17. Recognition of emotional facial expressions and broad autism phenotype in parents of children diagnosed with autistic spectrum disorder.

    Science.gov (United States)

    Kadak, Muhammed Tayyib; Demirel, Omer Faruk; Yavuz, Mesut; Demir, Türkay

    2014-07-01

    Research findings are in debate about the features of the broad autism phenotype. In this study, we tested whether parents of children with autism have problems recognizing emotional facial expressions and whether such an impairment contributes to the broad autism phenotype. Seventy-two parents of children with autistic spectrum disorder and 38 parents of control children participated in the study. Broad autism features were measured with the Autism Quotient (AQ). Recognition of emotional facial expressions was assessed with the Emotion Recognition Test, consisting of a set of photographs from Ekman and Friesen's series. In a two-tailed analysis of variance of AQ, there was a significant difference for social skills (F(1, 106)=6.095; p<.05). Analyses of variance revealed significant differences in the recognition of happy, surprised and neutral expressions (F(1, 106)=4.068, p=.046; F(1, 106)=4.068, p=.046; F(1, 106)=6.064, p=.016). According to our findings, social impairment could be considered a characteristic feature of the BAP. ASD parents had difficulty recognizing neutral expressions, suggesting that they may have impaired recognition of ambiguous expressions, as do autistic children. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Decoding facial expressions based on face-selective and motion-sensitive areas.

    Science.gov (United States)

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
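
As an illustration of the MVPA approach described above, a minimal sketch on simulated data: each trial's voxel activity pattern within a region of interest becomes a feature vector, and a cross-validated linear classifier is scored against the six expression labels. All sizes and signals are hypothetical:

```python
# Multi-voxel pattern analysis (MVPA) sketch: decode expression labels from
# per-trial voxel patterns in one region of interest. Data are simulated.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_trials, n_voxels = 180, 300
X = rng.normal(size=(n_trials, n_voxels))      # trial x voxel activity patterns
y = np.repeat(np.arange(6), n_trials // 6)     # six expression labels

# Inject a weak label-dependent signal so decoding is above chance.
X += 0.3 * y[:, None] * rng.normal(size=(1, n_voxels))

scores = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5)
print("mean decoding accuracy:", scores.mean())  # chance level = 1/6
```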

  19. Automatic recognition of emotions from facial expressions

    Science.gov (United States)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificially intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in the entertainment industry, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing accuracy. In addition, we have enhanced SVM to multiple dimensions while retaining the high accuracy rate of SVM. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).

  20. Sibling recognition and the development of identity: intersubjective consequences of sibling differentiation in the sister relationship.

    Science.gov (United States)

    Vivona, Jeanine M

    2013-01-01

    Identity is, among other things, a means to adapt to the others around whom one must fit. Psychoanalytic theory has highlighted ways in which the child fits in by emulating important others, especially through identification. Alternately, the child may fit into the family and around important others through differentiation, an unconscious process that involves developing or accentuating qualities and desires in oneself that are expressly different from the perceived qualities of another person and simultaneously suppressing qualities and desires that are perceived as similar. With two clinical vignettes centered on the sister relationship, the author demonstrates that recognition of identity differences that result from sibling differentiation carries special significance in the sibling relationship and simultaneously poses particular intersubjective challenges. To the extent that the spotlight of sibling recognition delimits the lateral space one may occupy, repeatedly frustrated desires for sibling recognition may have enduring consequences for one's sense of self-worth and expectations of relationships with peers and partners.

  1. Spoofing detection on facial images recognition using LBP and GLCM combination

    Science.gov (United States)

    Sthevanie, F.; Ramadhani, K. N.

    2018-03-01

    The challenge for facial-based security systems is detecting facial image falsification, such as facial image spoofing. Spoofing occurs when someone tries to pass as a registered user to obtain illegal access and gain advantage from the protected system. This research implements a facial image spoofing detection method based on image texture analysis. The proposed method combines the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods for texture analysis. The experimental results show that spoofing detection using the LBP and GLCM combination achieves a higher detection rate than using either the LBP feature or the GLCM feature alone.
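
A minimal sketch of the LBP-plus-GLCM feature combination using scikit-image; the parameter choices and the downstream classifier are placeholders, not the authors' settings:

```python
# Combine Local Binary Pattern (LBP) histograms with Gray Level Co-occurrence
# Matrix (GLCM) statistics into one texture feature vector for spoofing detection.
# Parameter choices here are illustrative, not the paper's settings.
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def texture_features(gray_face):
    """gray_face: 2D uint8 array of a cropped face image."""
    # LBP histogram (uniform patterns, 8 neighbours, radius 1 -> values 0..9).
    lbp = local_binary_pattern(gray_face, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    # GLCM statistics at one distance and four angles.
    glcm = graycomatrix(gray_face, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    stats = [graycoprops(glcm, prop).ravel()
             for prop in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([lbp_hist] + stats)

# The resulting vectors would be fed to a binary classifier (real vs. spoof).
```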

  2. Understanding the mechanisms of familiar voice-identity recognition in the human brain.

    Science.gov (United States)

    Maguinness, Corrina; Roswandowitz, Claudia; von Kriegstein, Katharina

    2018-03-31

    Humans have a remarkable skill for voice-identity recognition: most of us can remember many voices that surround us as 'unique'. In this review, we explore the computational and neural mechanisms which may support our ability to represent and recognise a unique voice-identity. We examine the functional architecture of voice-sensitive regions in the superior temporal gyrus/sulcus, and bring together findings on how these regions may interact with each other, and additional face-sensitive regions, to support voice-identity processing. We also contrast findings from studies on neurotypicals and clinical populations which have examined the processing of familiar and unfamiliar voices. Taken together, the findings suggest that representations of familiar and unfamiliar voices might dissociate in the human brain. Such an observation does not fit well with current models for voice-identity processing, which by-and-large assume a common sequential analysis of the incoming voice signal, regardless of voice familiarity. We provide a revised audio-visual integrative model of voice-identity processing which brings together traditional and prototype models of identity processing. This revised model includes a mechanism of how voice-identity representations are established and provides a novel framework for understanding and examining the potential differences in familiar and unfamiliar voice processing in the human brain. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. [Face recognition in patients with schizophrenia].

    Science.gov (United States)

    Doi, Hirokazu; Shinohara, Kazuyuki

    2012-07-01

    It is well known that patients with schizophrenia show severe deficiencies in social communication skills. These deficiencies are believed to be partly derived from abnormalities in face recognition. However, the exact nature of these abnormalities exhibited by schizophrenic patients with respect to face recognition has yet to be clarified. In the present paper, we review the main findings on face recognition deficiencies in patients with schizophrenia, particularly focusing on abnormalities in the recognition of facial expression and gaze direction, which are the primary sources of information of others' mental states. The existing studies reveal that the abnormal recognition of facial expression and gaze direction in schizophrenic patients is attributable to impairments in both perceptual processing of visual stimuli, and cognitive-emotional responses to social information. Furthermore, schizophrenic patients show malfunctions in distributed neural regions, ranging from the fusiform gyrus recruited in the structural encoding of facial stimuli, to the amygdala which plays a primary role in the detection of the emotional significance of stimuli. These findings were obtained from research in patient groups with heterogeneous characteristics. Because previous studies have indicated that impairments in face recognition in schizophrenic patients might vary according to the types of symptoms, it is of primary importance to compare the nature of face recognition deficiencies and the impairments of underlying neural functions across sub-groups of patients.

  4. Analysis of Facial Expression by Taste Stimulation

    Science.gov (United States)

    Tobitani, Kensuke; Kato, Kunihito; Yamamoto, Kazuhiko

    In this study, we focused on basic taste stimulation for the analysis of real facial expressions. We considered that the expressions caused by taste stimulation were unaffected by individuality or emotion; that is, such expressions were involuntary. We analyzed the movement of facial muscles under taste stimulation and compared real expressions with artificial expressions. From the results, we identified an obvious difference between real and artificial expressions. Thus, our method could be a new approach to facial expression recognition.

  5. Gender and the capacity to identify facial emotional expressions

    Directory of Open Access Journals (Sweden)

    Carolina Baptista Menezes

    Full Text Available Recognizing emotional expressions is enabled by a fundamental sociocognitive mechanism of human nature. This study compared 114 women and 104 men on the identification of basic emotions in a recognition task that used faces culturally adapted and validated for the Brazilian context. It was also investigated whether gender differences in emotion recognition would vary according to different exposure times. Women were generally better at detecting facial expressions, but an interaction suggested that the female superiority was particularly observed for anger, disgust, and surprise; results did not change according to age or exposure time. However, regardless of sex, total accuracy improved as presentation times increased, but only fear and anger significantly differed between the presentation times. Hence, in addition to supporting the evolutionary hypothesis of female superiority in detecting facial expressions of emotions, the results show that recognition of facial expressions also depends on the time available to correctly identify an expression.

  6. Evolution of facial color pattern complexity in lemurs

    OpenAIRE

    Rakotonirina, Hanitriniaina; Kappeler, Peter M.; Fichtel, Claudia

    2017-01-01

    Interspecific variation in facial color patterns across New and Old World primates has been linked to species recognition and group size. Because group size has opposite effects on interspecific variation in facial color patterns in these two radiations, a study of the third large primate radiation may shed light on convergences and divergences in this context. We therefore compiled published social and ecological data and analyzed facial photographs of 65 lemur species to categorize variatio...

  7. REAL-TIME FACE RECOGNITION BASED ON OPTICAL FLOW AND HISTOGRAM EQUALIZATION

    Directory of Open Access Journals (Sweden)

    D. Sathish Kumar

    2013-05-01

    Full Text Available Face recognition is one of the intensive areas of research in computer vision and pattern recognition, but much of it focuses on recognizing faces under varying facial expressions and pose variation. The constrained optical flow algorithm discussed in this paper recognizes facial images involving various expressions based on motion vector computation. We propose an optical flow computation algorithm that computes motion across frames of varying facial gestures and integrates it with a synthesized image in a probabilistic environment. A histogram equalization technique is used to counter the effect of illumination while capturing the input data with camera devices; it also enhances image contrast for better processing. The experimental results confirm that the proposed face recognition system is more robust and recognizes facial images under varying expressions and pose variations more accurately.
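
A minimal sketch of the two building blocks named above, histogram equalization followed by dense optical flow between consecutive frames, using OpenCV; the paper's constrained flow formulation and probabilistic matching stage are not reproduced, and the file names are placeholders:

```python
# Histogram equalization followed by dense optical flow between two frames.
# This illustrates the building blocks only; the paper's constrained flow and
# probabilistic recognition stages are not implemented here.
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Equalize contrast to reduce illumination effects.
prev_eq = cv2.equalizeHist(prev)
curr_eq = cv2.equalizeHist(curr)

# Farneback dense optical flow: one 2D motion vector per pixel.
flow = cv2.calcOpticalFlowFarneback(prev_eq, curr_eq, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
print(flow.shape)  # (H, W, 2) motion vectors usable as recognition features
```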

  8. 3D Facial Pattern Analysis for Autism

    Science.gov (United States)

    2010-07-01

    [Extraction residue: only fragments of this record survive. Recoverable content: citation fragments on Gabor wavelet network (GWN) approaches to facial feature detection (a two-level GWN detecting eight facial features; Bhuiyan et al., 2003, detecting six; Toyama, K., Krüger, V., 2001, "Hierarchical Wavelet Networks for Facial Feature Localization," ICCV'01 Workshop on Recognition, Analysis), and figure-caption residue describing pathological (red) versus normal (blue) structure with a signed distance map, where negative distance indicates the pathological shape lies inside.]

  9. System for face recognition under expression variations of neutral-sampled individuals using recognized expression warping and a virtual expression-face database

    Science.gov (United States)

    Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin

    2018-01-01

    The practical identification of individuals using facial recognition techniques requires matching faces bearing specific expressions against a database of neutral faces. We propose a method for face recognition under varied expressions against neutral face samples of individuals, via recognition of expression warping and the use of a virtual expression-face database. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid recognition, the virtual expression-face database is sorted into average facial-expression shapes and coarse- and fine-featured facial textures. Wrinkle information is also employed in classification, using a masking process to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU Multi-PIE, Cohn-Kanade, and AR expression-face databases, and we find that it provides significantly improved face recognition accuracy compared to conventional methods and is acceptable for face recognition under expression variation.

  10. Communal and agentic behaviour in response to facial emotion expressions

    NARCIS (Netherlands)

    aan het Rot, Marije; Hogenelst, Koen; Gesing, Christina M

    Facial emotions are important for human communication. Unfortunately, traditional facial emotion recognition tasks do not inform about how respondents might behave towards others expressing certain emotions. Approach-avoidance tasks do measure behaviour, but only on one dimension. In this study 81

  11. Facial Expression Emotion Detection for Real-Time Embedded Systems

    Directory of Open Access Journals (Sweden)

    Saeed Turabzadeh

    2018-01-01

    Full Text Available Recently, real-time facial expression recognition has attracted increasing research attention. In this study, an automatic real-time facial expression system was built and tested. Firstly, the system and model were designed and tested in a MATLAB environment, followed by a MATLAB Simulink environment capable of recognizing continuous facial expressions in real time at a rate of 1 frame per second, implemented on a desktop PC. They were evaluated on a public dataset, and the experimental results were promising. The dataset and labels used in this study were made from videos recorded twice from each of five participants while they watched a video. Secondly, in order to run in real time at a faster frame rate, the facial expression recognition system was built on a field-programmable gate array (FPGA). The camera sensor used in this work was a Digilent VmodCAM stereo camera module. The model was built on the Atlys™ Spartan-6 FPGA development board. It can continuously perform emotional state recognition in real time at a rate of 30 frames per second. A graphical user interface was designed to display the participant's video and the predicted two-dimensional emotion labels at the same time.

  12. Facial expression system on video using widrow hoff

    Science.gov (United States)

    Jannah, M.; Zarlis, M.; Mawengkang, H.

    2018-03-01

    Facial expression recognition is an interesting research area, connecting human feelings to computer applications such as human-computer interaction, data compression, facial animation, and face detection from video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system uses the Widrow-Hoff learning method for training and testing images, with an Adaptive Linear Neuron (ADALINE) approach. System performance is evaluated by two parameters: detection rate and false positive rate. The system's accuracy depends on good technique and on the face positions used in training and testing.
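
The Widrow-Hoff (least-mean-squares) rule adjusts an ADALINE's weights in proportion to the prediction error on each sample. A minimal sketch on synthetic data standing in for extracted image features:

```python
# Widrow-Hoff / LMS learning for an ADALINE unit: w <- w + lr * (t - w.x) * x.
# Inputs here are synthetic stand-ins for flattened image feature vectors.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))       # 200 samples, 16 features
true_w = rng.normal(size=16)
t = np.sign(X @ true_w)              # targets in {-1, +1}

w = np.zeros(16)
lr = 0.01
for epoch in range(20):
    for x_i, t_i in zip(X, t):
        y_i = w @ x_i                # linear output (before thresholding)
        w += lr * (t_i - y_i) * x_i  # delta-rule weight update

print("training accuracy:", np.mean(np.sign(X @ w) == t))
```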

  13. Robust 3D Face Recognition in the Presence of Realistic Occlusions

    NARCIS (Netherlands)

    Alyuz, Nese; Gökberk, B.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.; Akarun, Lale

    2012-01-01

    Facial occlusions pose significant problems for automatic face recognition systems. In this work, we propose a novel occlusion-resistant three-dimensional (3D) facial identification system. We show that, under extreme occlusions due to hair, hands, and eyeglasses, typical 3D face recognition systems

  14. Designing a Low-Resolution Face Recognition System for Long-Range Surveillance

    NARCIS (Netherlands)

    Peng, Y.; Spreeuwers, Lieuwe Jan; Veldhuis, Raymond N.J.

    2016-01-01

    Most face recognition systems deal well with high-resolution facial images, but perform much worse on low-resolution facial images. In low-resolution face recognition, there is a specific but realistic surveillance scenario: a surveillance camera monitoring a large area. In this scenario, usually

  15. The correlates of subjective perception of identity and expression in the face network: an fMRI adaptation study.

    Science.gov (United States)

    Fox, Christopher J; Moon, So Young; Iaria, Giuseppe; Barton, Jason J S

    2009-01-15

    The recognition of facial identity and expression are distinct tasks, with current models hypothesizing anatomic segregation of processing within a face-processing network. Using fMRI adaptation and a region-of-interest approach, we assessed how the perception of identity and expression changes in morphed stimuli affected the signal within this network, by contrasting (a) changes that crossed categorical boundaries of identity or expression with those that did not, and (b) changes that subjects perceived as causing identity or expression to change, versus changes that they perceived as not affecting the category of identity or expression. The occipital face area (OFA) was sensitive to any structural change in a face, whether it was identity or expression, but its signal did not correlate with whether subjects perceived a change or not. Both the fusiform face area (FFA) and the posterior superior temporal sulcus (pSTS) showed release from adaptation when subjects perceived a change in either identity or expression, although in the pSTS this effect only occurred when subjects were explicitly attending to expression. The middle superior temporal sulcus (mSTS) showed release from adaptation for expression only, and the precuneus for identity only. The data support models where the OFA is involved in the early perception of facial structure. However, evidence for a functional overlap in the FFA and pSTS, with both identity and expression signals in both areas, argues against a complete independence of identity and expression processing in these regions of the core face-processing network.

  16. Processing environmental stimuli in paranoid schizophrenia: recognizing facial emotions and performing executive functions.

    Science.gov (United States)

    Yu, Shao Hua; Zhu, Jun Peng; Xu, You; Zheng, Lei Lei; Chai, Hao; He, Wei; Liu, Wei Bo; Li, Hui Chun; Wang, Wei

    2012-12-01

    To study the contribution of executive function to abnormal recognition of facial expressions of emotion in schizophrenia patients, recognition of facial expressions of emotion was assessed using the Japanese and Caucasian Facial Expressions of Emotion (JACFEE) set, the Wisconsin Card Sorting Test (WCST), the Positive and Negative Symptom Scale, and the Hamilton Anxiety and Depression Scales in 88 paranoid schizophrenia patients and 75 healthy volunteers. Patients scored higher on the Positive and Negative Symptom Scale and the Hamilton Anxiety and Depression Scales, displayed lower JACFEE recognition accuracies, and performed more poorly on the WCST. In patients, JACFEE recognition accuracy for contempt and disgust was negatively correlated with the negative symptom scale score, while recognition accuracy for fear was positively correlated with the positive symptom scale score and recognition accuracy for surprise was negatively correlated with the general psychopathology score. Moreover, the WCST could predict the JACFEE recognition accuracy of contempt, disgust, and sadness in patients, and perseverative errors negatively predicted the recognition accuracy of sadness in healthy volunteers. The JACFEE recognition accuracy of sadness could predict the WCST categories in paranoid schizophrenia patients. Recognition accuracy of social/moral emotions, such as contempt, disgust and sadness, is related to executive function in paranoid schizophrenia patients, especially regarding sadness. Copyright © 2012 The Editorial Board of Biomedical and Environmental Sciences. Published by Elsevier B.V. All rights reserved.

  17. Differential effects of spaced vs. massed training in long-term object-identity and object-location recognition memory.

    Science.gov (United States)

    Bello-Medina, Paola C; Sánchez-Carrasco, Livia; González-Ornelas, Nadia R; Jeffery, Kathryn J; Ramírez-Amaya, Víctor

    2013-08-01

    Here we tested whether the well-known superiority of spaced training over massed training is equally evident in both object identity and object location recognition memory. We trained animals with objects placed in a variable or in a fixed location to produce a location-independent object identity memory or a location-dependent object representation. The training consisted of 5 trials that occurred either on one day (Massed) or over the course of 5 consecutive days (Spaced). The memory test was done in independent groups of animals either 24h or 7 days after the last training trial. In each test the animals were exposed to either a novel object, when trained with the objects in variable locations, or to a familiar object in a novel location, when trained with objects in fixed locations. The difference in time spent exploring the changed versus the familiar objects was used as a measure of recognition memory. For the object-identity-trained animals, spaced training produced clear evidence of recognition memory after both 24h and 7 days, but massed-training animals showed it only after 24h. In contrast, for the object-location-trained animals, recognition memory was evident after both retention intervals and with both training procedures. When objects were placed in variable locations for the two types of training and the test was done with a brand-new location, only the spaced-training animals showed recognition at 24h, but surprisingly, after 7 days, animals trained using both procedures were able to recognize the change, suggesting a post-training consolidation process. We suggest that the two training procedures trigger different neural mechanisms that may differ in the two segregated streams that process object information and that may consolidate differently. Copyright © 2013 Elsevier B.V. All rights reserved.

  18. Autistic traits are linked to reduced adaptive coding of face identity and selectively poorer face recognition in men but not women.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise

    2013-11-01

    Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females. © 2013 Elsevier Ltd. All rights reserved.

  19. Weighted Feature Gaussian Kernel SVM for Emotion Recognition.

    Science.gov (United States)

    Wei, Wei; Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expressions is a challenging research topic that has attracted great attention in the past few years. This paper presents a novel method that utilizes subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rate and weight for each. Then, we obtain a weighted-feature Gaussian kernel function and construct a classifier based on a Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel performs well in terms of emotion recognition accuracy. Experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods.
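
A minimal sketch of a feature-weighted Gaussian kernel plugged into scikit-learn's SVC as a custom kernel; the per-feature weights are arbitrary placeholders standing in for the subregion recognition rates the paper derives:

```python
# SVM with a feature-weighted Gaussian kernel:
#   K(x, y) = exp(-sum_i w_i * (x_i - y_i)^2 / (2 * sigma^2))
# The weights below are placeholders for the paper's subregion recognition rates.
import numpy as np
from sklearn.svm import SVC

sigma = 1.0
weights = np.array([1.0, 0.5, 2.0, 1.5])  # hypothetical per-feature weights

def weighted_rbf(A, B):
    # Gram matrix of weighted squared distances between rows of A and rows of B.
    diff = A[:, None, :] - B[None, :, :]
    d2 = np.sum(weights * diff ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))
y = (X[:, 2] > 0).astype(int)   # toy labels driven by a heavily weighted feature

clf = SVC(kernel=weighted_rbf).fit(X, y)
print(clf.score(X, y))
```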

  20. The Effect of Mozart Music on Child Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    王玲; 赵蕾; 卢英俊

    2012-01-01

    We studied 3-5-year-old children's recognition of facial expressions (happy, sad and neutral) in response to Mozart music as well as to music of different arousal levels and emotional types. The results showed that, compared with other music of high arousal and positive emotion, Mozart music, with its highly structured and cyclical features, actually interfered with children's facial expression recognition, whereas listening to music of low arousal and negative emotion helped children's brains reach an appropriate level of arousal and enter a suitable emotional state, thereby facilitating their facial expression recognition.

  1. A new look at emotion perception: Concepts speed and shape facial emotion recognition.

    Science.gov (United States)

    Nook, Erik C; Lindquist, Kristen A; Zaki, Jamil

    2015-10-01

    Decades ago, the "New Look" movement challenged how scientists thought about vision by suggesting that conceptual processes shape visual perceptions. Currently, affective scientists are likewise debating the role of concepts in emotion perception. Here, we utilized a repetition-priming paradigm in conjunction with signal detection and individual difference analyses to examine how providing emotion labels-which correspond to discrete emotion concepts-affects emotion recognition. In Study 1, pairing emotional faces with emotion labels (e.g., "sad") increased individuals' speed and sensitivity in recognizing emotions. Additionally, individuals with alexithymia-who have difficulty labeling their own emotions-struggled to recognize emotions based on visual cues alone, but not when emotion labels were provided. Study 2 replicated these findings and further demonstrated that emotion concepts can shape perceptions of facial expressions. Together, these results suggest that emotion perception involves conceptual processing. We discuss the implications of these findings for affective, social, and clinical psychology. (c) 2015 APA, all rights reserved.

  2. Forensic Face Recognition: A Survey

    NARCIS (Netherlands)

    Ali, Tauseef; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2010-01-01

    Besides a few papers that focus on the forensic aspects of automatic face recognition, there is not much published about it, in contrast to the literature on developing new techniques and methodologies for biometric face recognition. In this report, we review forensic facial identification, which is

  3. A Web-based Game for Teaching Facial Expressions to Schizophrenic Patients.

    Science.gov (United States)

    Gülkesen, Kemal Hakan; Isleyen, Filiz; Cinemre, Buket; Samur, Mehmet Kemal; Sen Kaya, Semiha; Zayim, Nese

    2017-07-12

    Recognizing facial expressions is an important social skill. In some psychological disorders, such as schizophrenia, loss of this skill may complicate the patient's daily life. Prior research has shown that information technology may help develop facial expression recognition skills through educational software and games. The aim was to examine whether a computer game designed for teaching facial expressions would improve the facial expression recognition skills of patients with schizophrenia. We developed a website composed of eight serious games. Thirty-two patients were given a pre-test composed of 21 facial expression photographs. Eighteen patients were in the study group while 14 were in the control group. Patients in the study group were asked to play the games on the website. After a period of one month, we performed a post-test for all patients. In the pre-test, the median number of correct answers (out of 21) was 17.5 in the control group and 16.5 in the study group. The median post-test score was 18 in the control group (p=0.052), whereas it was 20 in the study group (p…). Games may be used for the purpose of educating people who have difficulty in recognizing facial expressions.

  4. Multivariate Pattern Classification of Facial Expressions Based on Large-Scale Functional Connectivity.

    Science.gov (United States)

    Liang, Yin; Liu, Baolin; Li, Xianglin; Wang, Peiyuan

    2018-01-01

    How human beings achieve efficient recognition of others' facial expressions is an important question in cognitive neuroscience, and previous studies have identified specific cortical regions that show preferential activation to facial expressions. However, the potential contributions of connectivity patterns to the processing of facial expressions remain unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activity while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise). Both static and dynamic expression stimuli were included in our study. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment with classification accuracies and emotional intensities. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from them. Moreover, we identified expression-discriminative networks for static and dynamic facial expressions that span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns may also contain rich expression information with which to accurately decode facial expressions, suggesting a novel mechanism, based on general interactions between distributed brain regions, that contributes to human facial expression recognition.
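
A minimal sketch of the connectivity-based decoding idea on simulated data: each block's regional time series are reduced to a region-by-region correlation matrix, whose upper triangle becomes the feature vector for a cross-validated classifier. All sizes and couplings are hypothetical:

```python
# Functional-connectivity MVPA sketch: correlation-matrix features -> classifier.
# Time series are simulated; in practice they come from parcellated fMRI data.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_blocks, n_regions, n_timepoints = 120, 30, 50
labels = np.repeat(np.arange(6), n_blocks // 6)  # six facial expressions

features = []
for label in labels:
    ts = rng.normal(size=(n_timepoints, n_regions))
    ts[:, 0] += 0.4 * label * ts[:, 1]           # weak label-dependent coupling
    fc = np.corrcoef(ts, rowvar=False)           # region x region correlations
    iu = np.triu_indices(n_regions, k=1)         # upper triangle, no diagonal
    features.append(fc[iu])
X = np.array(features)

# Chance level is 1/6; the injected coupling should lift accuracy above it.
print(cross_val_score(LinearSVC(max_iter=5000), X, labels, cv=5).mean())
```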

  5. Four not six: Revealing culturally common facial expressions of emotion.

    Science.gov (United States)

    Jack, Rachael E; Sun, Wei; Delis, Ioannis; Garrod, Oliver G B; Schyns, Philippe G

    2016-06-01

    As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data questions the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Preserving Privacy by De-identifying Facial Images

    National Research Council Canada - National Science Library

    Newton, Elaine; Sweeney, Latanya; Malin, Bradley

    2003-01-01

    .... A trivial solution to de-identifying faces involves blacking out each face. This thwarts any possible face recognition, but because all facial details are obscured, the result is of limited use...
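
A minimal sketch of that trivial blacking-out approach, using OpenCV's stock Haar-cascade face detector; the file names are placeholders:

```python
# Naive de-identification: detect faces and black out each bounding box.
# As the abstract notes, this defeats recognition but destroys all facial detail.
import cv2

img = cv2.imread("group_photo.jpg")  # hypothetical input file
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    img[y:y + h, x:x + w] = 0        # black out the face region

cv2.imwrite("deidentified.jpg", img)
```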

  7. Visual Working Memory Capacity for Emotional Facial Expressions

    Directory of Open Access Journals (Sweden)

    Domagoj Švegar

    2011-12-01

    Full Text Available The capacity of visual working memory is limited to no more than four items. At the same time, it is limited not only by the number of objects but also by the total amount of information that needs to be memorized, and the relation between the information load per object and the number of objects that can be stored in visual working memory is inverse. The objective of the present experiment was to compute visual working memory capacity for emotional facial expressions, and change detection tasks were applied to do so. Pictures of human emotional facial expressions were presented to 24 participants in 1008 experimental trials, each of which began with the presentation of a fixation mark, followed by a short simultaneous presentation of six emotional facial expressions. After that, a blank screen was presented, and after this inter-stimulus interval, one facial expression was presented at one of the previously occupied locations. Participants had to answer whether the facial expression presented at test was different from or identical to the expression presented at that same location before the retention interval. Memory capacity was estimated from response accuracy using the formula constructed by Pashler (1988), adapted from signal detection theory. It was found that visual working memory capacity for emotional facial expressions equals 3.07, which is high compared to the capacity for facial identities and other visual stimuli. The obtained results were explained within the framework of evolutionary psychology.
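
For reference, Pashler's (1988) whole-display formula estimates capacity K from the hit rate H, the false-alarm rate F, and the set size N as K = N(H - F)/(1 - F). A minimal sketch with the set size used above and hypothetical rates:

```python
# Pashler's (1988) capacity estimate for whole-display change detection:
#   K = N * (H - F) / (1 - F)
# N = set size, H = hit rate, F = false-alarm rate. Rates below are hypothetical.
def pashler_k(n_items, hit_rate, fa_rate):
    return n_items * (hit_rate - fa_rate) / (1.0 - fa_rate)

print(pashler_k(n_items=6, hit_rate=0.60, fa_rate=0.12))  # ~3.3 items
```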

  8. Perceptual integration of kinematic components in the recognition of emotional facial expressions.

    Science.gov (United States)

    Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin

    2018-04-01

    According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning of a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions, simulated with only two primitives, are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.
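
The specific decomposition model behind these primitives is not given here; as a generic, hypothetical illustration of recovering a small number of movement primitives from facial kinematics, a PCA sketch on simulated trajectories driven by two underlying temporal components:

```python
# Generic illustration: learn low-dimensional movement primitives from facial
# kinematic data via PCA. Trajectories are simulated; the paper's actual
# decomposition model may differ from plain PCA.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_frames, n_markers = 200, 40                 # frames x kinematic channels
t = np.linspace(0, 2 * np.pi, n_frames)

# Build data driven by two underlying temporal primitives plus noise.
primitives = np.stack([np.sin(t), np.sin(2 * t + 0.5)])  # (2, n_frames)
mixing = rng.normal(size=(2, n_markers))                  # channel loadings
X = primitives.T @ mixing + 0.05 * rng.normal(size=(n_frames, n_markers))

pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_.sum())    # two components capture ~all variance
```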

  9. Perception of global facial geometry is modulated through experience

    Directory of Open Access Journals (Sweden)

    Meike Ramon

    2015-03-01

    Full Text Available Identification of personally familiar faces is highly efficient across various viewing conditions. While the presence of robust facial representations stored in memory is considered to aid this process, the mechanisms underlying invariant identification remain unclear. Two experiments tested the hypothesis that facial representations stored in memory are associated with differential perceptual processing of the overall facial geometry. Subjects who were personally familiar or unfamiliar with the identities presented discriminated between stimuli whose overall facial geometry had been manipulated to maintain or alter the original facial configuration (see Barton, Zhao, & Keenan, 2003). The results demonstrate that familiarity gives rise to more efficient processing of global facial geometry, and are interpreted in terms of increased holistic processing of facial information that is maintained across viewing distances.

  10. Frontal Mucocele following Previous Facial Trauma with Hardware Reconstruction

    Directory of Open Access Journals (Sweden)

    Megan EuDaly

    2016-01-01

    Full Text Available Mucoceles are cysts that can develop after facial bone fractures, especially those involving the frontal sinuses. Despite being rare, mucoceles can result in serious delayed sequelae. We present a case of a frontal mucocele that developed two years after extensive facial trauma following a motor vehicle crash (MVC) and review the emergency department (ED) evaluation and treatment of mucocele. Early recognition, appropriate imaging, and an interdisciplinary approach are essential for managing these rare sequelae of facial trauma.

  11. Facial recognition and laser surface scan: a pilot study

    DEFF Research Database (Denmark)

    Lynnerup, Niels; Clausen, Maja-Lisa; Kristoffersen, Agnethe May

    2009-01-01

    Surface scanning of the face of a suspect is presented as a way to better match the facial features with those of a perpetrator from CCTV footage. We performed a simple pilot study where we obtained facial surface scans of volunteers and then in blind trials tried to match these scans with 2D photographs of the faces of the volunteers. Fifteen male volunteers were surface scanned using a Polhemus FastSCAN Cobra Handheld Laser Scanner. Three photographs were taken of each volunteer's face in full frontal, profile and from above at an angle of 45 degrees and also 45 degrees laterally. Via special...

  12. Development and validation of an Argentine set of facial expressions of emotion.

    Science.gov (United States)

    Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro

    2017-02-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.

  13. Speed and accuracy of facial expression classification in avoidant personality disorder: a preliminary study.

    Science.gov (United States)

    Rosenthal, M Zachary; Kim, Kwanguk; Herr, Nathaniel R; Smoski, Moria J; Cheavens, Jennifer S; Lynch, Thomas R; Kosson, David S

    2011-10-01

    The aim of this preliminary study was to examine whether individuals with avoidant personality disorder (APD) could be characterized by deficits in the classification of dynamically presented facial emotional expressions. Using a community sample of adults with APD (n = 17) and non-APD controls (n = 16), speed and accuracy of facial emotional expression recognition was investigated in a task that morphs facial expressions from neutral to prototypical expressions (Multi-Morph Facial Affect Recognition Task; Blair, Colledge, Murray, & Mitchell, 2001). Results indicated that individuals with APD were significantly more likely than controls to make errors when classifying fully expressed fear. However, no differences were found between groups in the speed to correctly classify facial emotional expressions. The findings are some of the first to investigate facial emotional processing in a sample of individuals with APD and point to an underlying deficit in processing social cues that may be involved in the maintenance of APD.

  14. The different faces of one's self: an fMRI study into the recognition of current and past self-facial appearances.

    Science.gov (United States)

    Apps, Matthew A J; Tajadura-Jiménez, Ana; Turley, Grainne; Tsakiris, Manos

    2012-11-15

    Mirror self-recognition is often considered as an index of self-awareness. Neuroimaging studies have identified a neural circuit specialised for the recognition of one's own current facial appearance. However, faces change considerably over a lifespan, highlighting the necessity for representations of one's face to continually be updated. We used fMRI to investigate the different neural circuits involved in the recognition of the childhood and current, adult, faces of one's self. Participants viewed images of either their own face as it currently looks morphed with the face of a familiar other or their childhood face morphed with the childhood face of the familiar other. Activity in areas which have a generalised selectivity for faces, including the inferior occipital gyrus, the superior parietal lobule and the inferior temporal gyrus, varied with the amount of current self in an image. Activity in areas involved in memory encoding and retrieval, including the hippocampus and the posterior cingulate gyrus, and areas involved in creating a sense of body ownership, including the temporo-parietal junction and the inferior parietal lobule, varied with the amount of childhood self in an image. We suggest that the recognition of one's own past or present face is underpinned by different cognitive processes in distinct neural circuits. Current self-recognition engages areas involved in perceptual face processing, whereas childhood self-recognition recruits networks involved in body ownership and memory processing. Copyright © 2012 Elsevier Inc. All rights reserved.

  15. When a smile becomes a fist: the perception of facial and bodily expressions of emotion in violent offenders

    NARCIS (Netherlands)

    Kret, M.E.; de Gelder, B.

    2013-01-01

    Previous reports have suggested an enhancement of facial expression recognition in women as compared to men. It has also been suggested that men versus women have a greater attentional bias towards angry cues. Research has shown that facial expression recognition impairments and attentional biases

  16. Temporal neural mechanisms underlying conscious access to different levels of facial stimulus contents.

    Science.gov (United States)

    Hsu, Shen-Mou; Yang, Yu-Fang

    2018-04-01

    An important issue facing the empirical study of consciousness concerns how the contents of incoming stimuli gain access to conscious processing. According to classic theories, facial stimuli are processed in a hierarchical manner. However, it remains unclear how the brain determines which level of stimulus content is consciously accessible when facing an incoming facial stimulus. Accordingly, with a magnetoencephalography technique, this study aims to investigate the temporal dynamics of the neural mechanism mediating which level of stimulus content is consciously accessible. Participants were instructed to view masked target faces at threshold so that, according to behavioral responses, their perceptual awareness alternated from consciously accessing facial identity in some trials to being able to consciously access facial configuration features but not facial identity in other trials. Conscious access at these two levels of facial contents were associated with a series of differential neural events. Before target presentation, different patterns of phase angle adjustment were observed between the two types of conscious access. This effect was followed by stronger phase clustering for awareness of facial identity immediately during stimulus presentation. After target onset, conscious access to facial identity, as opposed to facial configural features, was able to elicit more robust late positivity. In conclusion, we suggest that the stages of neural events, ranging from prestimulus to stimulus-related activities, may operate in combination to determine which level of stimulus contents is consciously accessed. Conscious access may thus be better construed as comprising various forms that depend on the level of stimulus contents accessed. NEW & NOTEWORTHY The present study investigates how the brain determines which level of stimulus contents is consciously accessible when facing an incoming facial stimulus. Using magnetoencephalography, we show that prestimulus

  17. Brief Report: Representational Momentum for Dynamic Facial Expressions in Pervasive Developmental Disorder

    Science.gov (United States)

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-01-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of…

  18. The Influence of Music on Facial Emotion Recognition in Children with Autism Spectrum Disorder and Neurotypical Children.

    Science.gov (United States)

    Brown, Laura S

    2017-03-01

    Children with autism spectrum disorder (ASD) often struggle with social skills, including the ability to perceive emotions based on facial expressions. Research evidence suggests that many individuals with ASD can perceive emotion in music. Examining whether music can be used to enhance recognition of facial emotion by children with ASD would inform the development of music therapy interventions. The purpose of this study was to investigate the influence of music with a strong emotional valence (happy; sad) on the ability of children with ASD to label emotions depicted in facial photographs, and on their response time. Thirty neurotypical children and 20 children with high-functioning ASD rated expressions of happy, neutral, and sad in 30 photographs under two music listening conditions (sad music; happy music). During each music listening condition, participants rated the 30 images using a 7-point scale that ranged from very sad to very happy. Response time data were also collected across both conditions. A significant two-way interaction revealed that participants' ratings of happy and neutral faces were unaffected by music conditions, but sad faces were perceived to be sadder with sad music than with happy music. Across both conditions, neurotypical children rated the happy faces as happier and the sad faces as sadder than did participants with ASD. Response times of the neurotypical children were consistently shorter than response times of the children with ASD; both groups took longer to rate sad faces than happy faces. Response times of neurotypical children were generally unaffected by the valence of the music condition; however, children with ASD took longer to respond when listening to sad music. Music appears to affect perceptions of emotion in children with ASD, and perceptions of sad facial expressions seem to be more affected by emotionally congruent background music than are perceptions of happy or neutral faces. © the American Music Therapy Association 2016

  19. Stereotypes and prejudice affect the recognition of emotional body postures.

    Science.gov (United States)

    Bijlstra, Gijsbert; Holland, Rob W; Dotsch, Ron; Wigboldus, Daniel H J

    2018-03-26

    Most research on emotion recognition focuses on facial expressions. However, people communicate emotional information through bodily cues as well. Prior research on facial expressions has demonstrated that emotion recognition is modulated by top-down processes. Here, we tested whether this top-down modulation generalizes to the recognition of emotions from body postures. We report three studies demonstrating that stereotypes and prejudice about men and women may affect how fast people classify various emotional body postures. Our results suggest that gender cues activate gender associations, which affect the recognition of emotions from body postures in a top-down fashion. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. Middle ear osteoma causing progressive facial nerve weakness: a case report.

    Science.gov (United States)

    Curtis, Kate; Bance, Manohar; Carter, Michael; Hong, Paul

    2014-09-18

    Facial nerve weakness is most commonly due to Bell's palsy or cerebrovascular accidents. Rarely, a middle ear tumor presents with facial nerve dysfunction. We report a very unusual case of a middle ear osteoma in a 49-year-old Caucasian woman causing progressive facial nerve deficit. A subtle middle ear lesion was observed on otoscopy, and computed tomographic images demonstrated an osseous middle ear tumor. Complete surgical excision resulted in the partial recovery of facial nerve function. Facial nerve dysfunction is rarely caused by middle ear tumors. The weakness is typically due to a compressive effect on the middle ear portion of the facial nerve. Early recognition is crucial since removal of these lesions may lead to the recuperation of facial nerve function.

  1. Brain Structural Correlates of Emotion Recognition in Psychopaths.

    Directory of Open Access Journals (Sweden)

    Vanessa Pera-Guardiola

    Full Text Available Individuals with psychopathy present deficits in the recognition of facial emotional expressions. However, the nature and extent of these alterations are not fully understood. Furthermore, available data on the functional neural correlates of emotional face recognition deficits in adult psychopaths have provided mixed results. In this context, emotional face morphing tasks may be suitable for clarifying mild and emotion-specific impairments in psychopaths. Likewise, studies exploring corresponding anatomical correlates may be useful for disentangling available neurofunctional evidence based on the alleged neurodevelopmental roots of psychopathic traits. We used Voxel-Based Morphometry and a morphed emotional face expression recognition task to evaluate the relationship between regional gray matter (GM) volumes and facial emotion recognition deficits in male psychopaths. In comparison to male healthy controls, psychopaths showed deficits in the recognition of sad, happy and fear emotional expressions. In subsequent brain imaging analyses psychopaths with better recognition of facial emotional expressions showed higher volume in the prefrontal cortex (orbitofrontal, inferior frontal and dorsomedial prefrontal cortices), somatosensory cortex, anterior insula, cingulate cortex and the posterior lobe of the cerebellum. Amygdala and temporal lobe volumes contributed to better emotional face recognition in controls only. These findings provide evidence suggesting that variability in brain morphometry plays a role in accounting for psychopaths' impaired ability to recognize emotional face expressions, and may have implications for comprehensively characterizing the empathy and social cognition dysfunctions typically observed in this population of subjects.

  2. SEARCH FOR NATIONAL SOCIOLINGUISTIC IDENTITY RECOGNITION: A DISCUSSION ON VARIABLE PHENOMENA OF BRAZILIAN PORTUGUESE

    Directory of Open Access Journals (Sweden)

    Vinícius de Lacerda

    2012-06-01

    Full Text Available Brazilian Portuguese, the national language spoken and used in Brazil, has its socio-historical origins tied to European Portuguese. The establishment of a standard norm (grammar) took as its basis the manner of speaking and writing of the Portuguese. Although the differences between the two languages are clear and perceived by both peoples, Brazilians still learn, wrongly, rules related to the language spoken in Portugal, leaving aside features and marks that represent the national sociolinguistic identity. This research investigates, considering the attitudes of speakers toward the variable phenomena of the Portuguese language, aspects of the Brazilian spoken language that point to possible traces of a Brazilian sociolinguistic identity. The research was exploratory and quantitative, following the theoretical and methodological model of variationist Sociolinguistics. Linguistic recognition tests were used in order to promote the evaluation, recognition, and appreciation of language varieties in Brazil. It was found that the selected educated speakers showed an awareness of the essential question of recognizing this Brazilian sociolinguistic identity, evaluating and judging some variable phenomena of Brazilian Portuguese as the linguistic repertoire closest to them in less monitored speech situations. This contributes even more to an actual awareness of the existence and recognition of a language that might be Brazilian in the future.

  3. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions

    Directory of Open Access Journals (Sweden)

    Sabrina eBoll

    2014-07-01

    Full Text Available Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflects individual differences in attending to diagnostic features of facial expressions. For this purpose, visual exploration of emotional facial expressions was compared between a low (n=39) and a high (n=40) 5-HTT expressing group of healthy human volunteers in an eye tracking paradigm. Emotional faces were presented while manipulating the initial fixation such that saccadic changes towards the eyes and towards the mouth could be identified. We found that the low versus the high 5-HTT group demonstrated greater accuracy with regard to emotion classifications, particularly when faces were presented for a longer duration. No group differences in gaze orientation towards diagnostic facial features could be observed. However, participants in the low 5-HTT group exhibited more and faster fixation changes for certain emotions when faces were presented for a longer duration, and overall face fixation times were reduced for this genotype group. These results suggest that the 5-HTT gene influences social perception by modulating the general vigilance to social cues rather than selectively affecting the pre-attentive detection of diagnostic facial features.

  4. Towards multimodal emotion recognition in E-learning environments

    NARCIS (Netherlands)

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2014-01-01

    This paper presents a framework (FILTWAM (Framework for Improving Learning Through Webcams And Microphones)) for real-time emotion recognition in e-learning by using webcams. FILTWAM offers timely and relevant feedback based upon learner’s facial expressions and verbalizations. FILTWAM’s facial

  5. Deep learning the dynamic appearance and shape of facial action units

    OpenAIRE

    Jaiswal, Shashank; Valstar, Michel F.

    2016-01-01

    Spontaneous facial expression recognition under uncontrolled conditions is a hard task. It depends on multiple factors including shape, appearance and dynamics of the facial features, all of which are adversely affected by environmental noise and low intensity signals typical of such conditions. In this work, we present a novel approach to Facial Action Unit detection using a combination of Convolutional and Bi-directional Long Short-Term Memory Neural Networks (CNN-BLSTM), which jointly lear...
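
    The abstract describes the architecture only in outline. As a hedged sketch of the general CNN-plus-bidirectional-LSTM pattern it names (not the authors' implementation; the layer sizes, input resolution, and action-unit count below are assumptions), a per-frame detector could be wired up as follows:

```python
# Minimal sketch of a CNN + bidirectional LSTM (BLSTM) pipeline for
# per-frame facial Action Unit detection. All sizes are assumptions.
import torch
import torch.nn as nn

class CNNBLSTM(nn.Module):
    def __init__(self, num_aus=12, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-frame appearance features
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        self.blstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                             bidirectional=True)        # temporal dynamics, both directions
        self.head = nn.Linear(2 * hidden, num_aus)      # multi-label AU logits

    def forward(self, clips):                           # clips: (batch, time, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.blstm(feats)
        return self.head(out)                           # (batch, time, num_aus)

# Example: two 8-frame 64x64 grayscale clips.
logits = CNNBLSTM()(torch.randn(2, 8, 1, 64, 64))
```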

  6. Emotional face recognition deficits and medication effects in pre-manifest through stage-II Huntington's disease.

    Science.gov (United States)

    Labuschagne, Izelle; Jones, Rebecca; Callaghan, Jenny; Whitehead, Daisy; Dumas, Eve M; Say, Miranda J; Hart, Ellen P; Justo, Damian; Coleman, Allison; Dar Santos, Rachelle C; Frost, Chris; Craufurd, David; Tabrizi, Sarah J; Stout, Julie C

    2013-05-15

    Facial emotion recognition impairments have been reported in Huntington's disease (HD). However, the nature of the impairments across the spectrum of HD remains unclear. We report on emotion recognition data from 344 participants comprising premanifest HD (PreHD) and early HD patients, and controls. In a test of recognition of facial emotions, we examined responses to six basic emotional expressions and neutral expressions. In addition, and within the early HD sample, we tested for differences on emotion recognition performance between those 'on' vs. 'off' neuroleptic or selective serotonin reuptake inhibitor (SSRI) medications. The PreHD groups showed significantly impaired recognition, compared to controls, of fearful, angry and surprised faces; whereas the early HD groups were significantly impaired across all emotions including neutral expressions. In early HD, neuroleptic use was associated with worse facial emotion recognition, whereas SSRI use was associated with better facial emotion recognition. The findings suggest that emotion recognition impairments exist across the HD spectrum, but are relatively more widespread in manifest HD than in the premanifest period. Commonly prescribed medications to treat HD-related symptoms also appear to affect emotion recognition. These findings have important implications for interpersonal communication and medication usage in HD. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  7. Sex, Sexual Orientation, and Identification of Positive and Negative Facial Affect

    Science.gov (United States)

    Rahman, Qazi; Wilson, Glenn D.; Abrahams, Sharon

    2004-01-01

    Sex and sexual orientation related differences in processing of happy and sad facial emotions were examined using an experimental facial emotion recognition paradigm with a large sample (N=240). Analysis of covariance (controlling for age and IQ) revealed that women (irrespective of sexual orientation) had faster reaction times than men for…

  8. LGBT Family Lawyers and Same-Sex Marriage Recognition: How Legal Change Shapes Professional Identity and Practice.

    Science.gov (United States)

    Baumle, Amanda K

    2018-01-10

    Lawyers who practice family law for LGBT clients are key players in the tenuous and evolving legal environment surrounding same-sex marriage recognition. Building on prior research on factors shaping the professional identities of lawyers generally, and activist lawyers specifically, I examine how practice within a rapidly changing, patchwork legal environment shapes professional identity for this group of lawyers. I draw on interviews with 21 LGBT family lawyers to analyze how the unique features of LGBT family law shape their professional identities and practice, as well as their predictions about the development of the practice in a post-Obergefell world. Findings reveal that the professional identities and practice of LGBT family lawyers are shaped by uncertainty, characteristics of activist lawyering, community membership, and community service. Individual motivations and institutional forces work to generate a professional identity that is resilient and dynamic, characterized by skepticism and distrust coupled with flexibility and creativity. These features are likely to play a role in the evolution of the LGBT family lawyer professional identity post-marriage equality.

  9. Conceiving Human Interaction by Visualising Depth Data of Head Pose Changes and Emotion Recognition via Facial Expressions

    Directory of Open Access Journals (Sweden)

    Grigorios Kalliatakis

    2017-07-01

    Full Text Available Affective computing in general and human activity and intention analysis in particular comprise a rapidly-growing field of research. Head pose and emotion changes present serious challenges when applied to player's training and ludology experience in serious games, or analysis of customer satisfaction regarding broadcast and web services, or monitoring a driver's attention. Given the increasing prominence and utility of depth sensors, it is now feasible to perform large-scale collection of three-dimensional (3D) data for subsequent analysis. Discriminative random regression forests were selected in order to rapidly and accurately estimate head pose changes in an unconstrained environment. In order to complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. After that, a lightweight data exchange format, JavaScript Object Notation (JSON), is employed in order to manipulate the data extracted from the two aforementioned settings. Motivated by the need to generate comprehensible visual representations from different sets of data, in this paper, we introduce a system capable of monitoring human activity through head pose and emotion changes, utilising an affordable 3D sensing technology (Microsoft Kinect sensor).
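
    The abstract mentions JSON only as the glue between the head-pose and emotion-recognition stages; the record format itself is not given. As a purely illustrative sketch (all field names below are assumptions, not the authors' schema), a per-frame exchange record might look like this:

```python
# Hypothetical per-frame record passed between the head-pose estimator
# and the emotion-recognition module; field names are illustrative only.
import json

frame_record = {
    "timestamp_ms": 41733,
    "head_pose": {"yaw": 12.4, "pitch": -3.1, "roll": 0.8},  # degrees
    "emotion": {"label": "happiness", "confidence": 0.87},   # one of 4 classes
}
print(json.dumps(frame_record, indent=2))
```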

  10. Adaptive metric learning with deep neural networks for video-based facial expression recognition

    Science.gov (United States)

    Liu, Xiaofeng; Ge, Yubin; Yang, Chao; Jia, Ping

    2018-01-01

    Video-based facial expression recognition has become increasingly important for plenty of applications in the real world. Despite the numerous efforts made for single sequences, how to balance the complex distribution of intra- and interclass variations between sequences has remained a great difficulty in this area. We propose the adaptive (N+M)-tuplet clusters loss function and optimize it together with the softmax loss in the training phase. The variations introduced by personal attributes are alleviated using similarity measurements of multiple samples in the feature space, with far fewer comparisons than conventional deep metric learning approaches, which enables metric calculations for large data applications (e.g., videos). Both the spatial and temporal relations are well explored by a unified framework that consists of an Inception-ResNet network with long short-term memory and a two-branch fully connected layer structure. Our proposed method has been evaluated with three well-known databases, and the experimental results show that our method outperforms many state-of-the-art approaches.
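
    The paper defines its own adaptive (N+M)-tuplet clusters loss; the sketch below does not reproduce it. It only illustrates the general recipe the abstract describes, a batch-level metric term trained jointly with a softmax term, using a simple contrastive surrogate for the metric part (the margin, weight alpha, and embedding size are assumptions):

```python
# Hedged sketch: joint metric + softmax training objective. The metric
# term here is a plain contrastive surrogate, not the paper's loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointMetricSoftmaxLoss(nn.Module):
    def __init__(self, num_classes, embed_dim=128, margin=0.5, alpha=1.0):
        super().__init__()
        self.classifier = nn.Linear(embed_dim, num_classes)
        self.margin = margin        # assumed separation margin
        self.alpha = alpha          # assumed weight of the metric term

    def forward(self, embeddings, labels):
        # Softmax (cross-entropy) term on top of the embeddings.
        ce = F.cross_entropy(self.classifier(embeddings), labels)
        # All pairwise distances within the batch (many samples compared
        # at once, rather than separate per-pair forward passes).
        d = torch.cdist(embeddings, embeddings)
        same = labels.unsqueeze(0) == labels.unsqueeze(1)
        off_diag = ~torch.eye(len(labels), dtype=torch.bool, device=labels.device)
        pos = d[same & off_diag].pow(2).mean()              # pull same class together
        neg = F.relu(self.margin - d[~same]).pow(2).mean()  # push classes apart
        return ce + self.alpha * (pos + neg)

# Example: batch of 12 embeddings over 6 expression classes,
# two samples per class so positive pairs are guaranteed.
emb = torch.randn(12, 128, requires_grad=True)
labels = torch.arange(6).repeat_interleave(2)
loss = JointMetricSoftmaxLoss(num_classes=6)(emb, labels)
loss.backward()
```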

  11. The different faces of one’s self: an fMRI study into the recognition of current and past self-facial appearances

    Science.gov (United States)

    Apps, Matthew A. J.; Tajadura-Jiménez, Ana; Turley, Grainne; Tsakiris, Manos

    2013-01-01

    Mirror self-recognition is often considered as an index of self-awareness. Neuroimaging studies have identified a neural circuit specialised for the recognition of one’s own current facial appearance. However, faces change considerably over a lifespan, highlighting the necessity for representations of one’s face to continually be updated. We used fMRI to investigate the different neural circuits involved in the recognition of the childhood and current, adult, faces of one’s self. Participants viewed images of either their own face as it currently looks morphed with the face of a familiar other or their childhood face morphed with the childhood face of the familiar other. Activity in areas which have a generalised selectivity for faces, including the inferior occipital gyrus, the superior parietal lobule and the inferior temporal gyrus, varied with the amount of current self in an image. Activity in areas involved in memory encoding and retrieval, including the hippocampus and the posterior cingulate gyrus, and areas involved in creating a sense of body ownership, including the temporo-parietal junction and the inferior parietal lobule, varied with the amount of childhood self in an image. We suggest that the recognition of one’s own past or present face is underpinned by different cognitive processes in distinct neural circuits. Current self-recognition engages areas involved in perceptual face processing, whereas childhood self-recognition recruits networks involved in body ownership and memory processing. PMID:22940117

  12. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories.

    Science.gov (United States)

    Wang, Qiandong; Xiao, Naiqi G; Quinn, Paul C; Hu, Chao S; Qian, Miao; Fu, Genyue; Lee, Kang

    2015-02-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese, Caucasian, and racially ambiguous faces. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. Copyright © 2014 Elsevier Ltd. All rights reserved.
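
    The fixation-recognition link reported above is correlational; as a hedged illustration of the kind of analysis involved (simulated data and a plain Pearson correlation; the authors' exact statistics may differ), it can be computed as follows:

```python
# Hedged sketch: correlate per-participant fixation proportion on the
# nose with correct-recognition response time. Data are simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
nose_fixation_prop = rng.uniform(0.1, 0.6, size=30)            # per participant
rt_ms = 900 - 400 * nose_fixation_prop + rng.normal(0, 50, 30) # faster with more nose fixation
r, p = pearsonr(nose_fixation_prop, rt_ms)                     # expect a negative r
print(f"r = {r:.2f}, p = {p:.3f}")
```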

  13. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: Contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories

    Science.gov (United States)

    Wang, Qiandong; Xiao, Naiqi G.; Quinn, Paul C.; Hu, Chao S.; Qian, Miao; Fu, Genyue; Lee, Kang

    2014-01-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese faces, Caucasian faces, and racially ambiguous morphed face stimuli. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information of racial categories that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time. PMID:25497461

  14. Putting the face in context: Body expressions impact facial emotion processing in human infants

    Directory of Open Access Journals (Sweden)

    Purva Rajhans

    2016-06-01

    Full Text Available Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception.

  15. Schizotypy and impaired basic face recognition? Another non-confirmatory study.

    Science.gov (United States)

    Bell, Vaughan; Halligan, Peter

    2015-12-01

    Although schizotypy has been found to be reliably associated with a reduced recognition of facial affect, the few studies that have tested the association between basic face recognition abilities and schizotypy have found mixed results. This study formally tested the association in a large non-clinical sample with established neurological measures of face recognition. Two hundred and twenty-seven participants completed the Oxford-Liverpool Inventory of Feelings and Experiences schizotypy scale and completed the Famous Faces Test and the Cardiff Repeated Recognition Test for Faces. No association between any schizotypal dimension and performance on either of the facial recognition and learning tests was found. The null results can be accepted with a high degree of confidence. Further additional evidence is provided for a lack of association between schizotypy and basic face recognition deficits. © 2014 Wiley Publishing Asia Pty Ltd.

  16. Age-related differences in emotion recognition ability: a cross-sectional study.

    Science.gov (United States)

    Mill, Aire; Allik, Jüri; Realo, Anu; Valk, Raivo

    2009-10-01

    Experimental studies indicate that recognition of emotions, particularly negative emotions, decreases with age. However, there is no consensus at which age the decrease in emotion recognition begins, how selective this is to negative emotions, and whether this applies to both facial and vocal expression. In the current cross-sectional study, 607 participants ranging in age from 18 to 84 years (mean age = 32.6 +/- 14.9 years) were asked to recognize emotions expressed either facially or vocally. In general, older participants were found to be less accurate at recognizing emotions, with the most distinctive age difference pertaining to a certain group of negative emotions. Both modalities revealed an age-related decline in the recognition of sadness and -- to a lesser degree -- anger, starting at about 30 years of age. Although age-related differences in the recognition of expression of emotion were not mediated by personality traits, 2 of the Big 5 traits, openness and conscientiousness, made an independent contribution to emotion-recognition performance. Implications of age-related differences in facial and vocal emotion expression and early onset of the selective decrease in emotion recognition are discussed in terms of previous findings and relevant theoretical models.

  17. Adaptive evolution of facial colour patterns in Neotropical primates.

    Science.gov (United States)

    Santana, Sharlene E; Lynch Alfaro, Jessica; Alfaro, Michael E

    2012-06-07

    The rich diversity of primate faces has interested naturalists for over a century. Researchers have long proposed that social behaviours have shaped the evolution of primate facial diversity. However, the primate face constitutes a unique structure where the diverse and potentially competing functions of communication, ecology and physiology intersect, and the major determinants of facial diversity remain poorly understood. Here, we provide the first evidence for an adaptive role of facial colour patterns and pigmentation within Neotropical primates. Consistent with the hypothesis that facial patterns function in communication and species recognition, we find that species living in smaller groups and in sympatry with a higher number of congener species have evolved more complex patterns of facial colour. The evolution of facial pigmentation and hair length is linked to ecological factors, and ecogeographical rules related to UV radiation and thermoregulation are met by some facial regions. Our results demonstrate the interaction of behavioural and ecological factors in shaping one of the most outstanding facial diversities of any mammalian lineage.

  18. Effects of Orientation on Recognition of Facial Affect

    Science.gov (United States)

    Cohen, M. M.; Mealey, J. B.; Hargens, Alan R. (Technical Monitor)

    1997-01-01

    The ability to discriminate facial features is often degraded when the orientation of the face and/or the observer is altered. Previous studies have shown that gross distortions of facial features can go unrecognized when the image of the face is inverted, as exemplified by the 'Margaret Thatcher' effect. This study examines how quickly erect and supine observers can distinguish between smiling and frowning faces that are presented at various orientations. The effects of orientation are of particular interest in space, where astronauts frequently view one another in orientations other than the upright. Sixteen observers viewed individual facial images of six people on a computer screen; on a given trial, the image was either smiling or frowning. Each image was viewed when it was erect and when it was rotated (rolled) by 45 degrees, 90 degrees, 135 degrees, 180 degrees, 225 degrees and 270 degrees about the line of sight. The observers were required to respond as rapidly and accurately as possible to identify if the face presented was smiling or frowning. Measures of reaction time were obtained when the observers were both upright and supine. Analyses of variance revealed that mean reaction time, which increased with stimulus rotation (F=18.54, df 7/15, p < 0.001), was 22% longer when the faces were inverted than when they were erect, but that the orientation of the observer had no significant effect on reaction time (F=1.07, df 1/15, p > .30). These data strongly suggest that the orientation of the image of a face on the observer's retina, but not its orientation with respect to gravity, is important in identifying the expression on the face.

  19. Towards Multimodal Emotion Recognition in E-Learning Environments

    Science.gov (United States)

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2016-01-01

    This paper presents a framework (FILTWAM (Framework for Improving Learning Through Webcams And Microphones)) for real-time emotion recognition in e-learning by using webcams. FILTWAM offers timely and relevant feedback based upon learner's facial expressions and verbalizations. FILTWAM's facial expression software module has been developed and…

  20. Helping, mediating, and gaining recognition: The everyday identity work of Romanian health social workers.

    Science.gov (United States)

    Ciocănel, Alexandra; Lazăr, Florin; Munch, Shari; Harmon, Cara; Rentea, Georgiana-Cristina; Gaba, Daniela; Mihai, Anca

    2018-03-01

    Health social work is a field with challenges, opportunities, and ways of professing social work that may vary between different national contexts. In this article, we look at how Romanian health social workers construct their professional identity through their everyday identity work. Drawing on a qualitative study based on interviews with 21 health social workers working in various organizational contexts, we analyze what health social workers say they do and how this shapes their self-conception as professionals. Four main themes emerged from participants' descriptions: being a helping professional, being a mediator, gaining recognition, and contending with limits. Through these themes, participants articulated the everyday struggles and satisfactions specific to working as recently recognized professionals in Romanian health and welfare systems not always supportive of their work.

  1. Improving the Quality of Facial Composites Using a Holistic Cognitive Interview

    Science.gov (United States)

    Frowd, Charlie D.; Bruce, Vicki; Smith, Ashley J.; Hancock, Peter J. B.

    2008-01-01

    Witnesses to and victims of serious crime are normally asked to describe the appearance of a criminal suspect, using a Cognitive Interview (CI), and to construct a facial composite, a visual representation of the face. Research suggests that focusing on the global aspects of a face, as opposed to its facial features, facilitates recognition and…

  2. [Recognition of facial expressions of emotions by 3-year-olds depending on sleep and risk of depression].

    Science.gov (United States)

    Bat-Pitault, F; Da Fonseca, D; Flori, S; Porcher-Guinet, V; Stagnara, C; Patural, H; Franco, P; Deruelle, C

    2017-10-01

    The emotional process is characterized by a negative bias in depression, so it was legitimate to establish whether the same is true in very young at-risk children. Furthermore, sleep, also proposed as a marker of depression risk, is closely linked with emotions in adults and adolescents. We therefore wanted first to better describe the characteristics of emotion recognition in 3-year-olds and their links with sleep, and second to observe whether an emotion recognition pattern indicating a vulnerability to depression can be found at this young age. We studied, in 133 children aged 36 months from the AuBE cohort, the number of correct answers on a task of recognition of facial emotions (joy, anger and sadness). Cognitive functions were also assessed with the WPPSI III at 3 years of age, and the different sleep parameters (times of lights off and lights on, sleep durations, difficulty going to sleep, and number of parental awakenings per night) were described via questionnaires filled out by mothers at 6, 12, 18, 24 and 36 months after birth. Of these 133 children, 21 whose mothers had a history of depression (13 boys) formed the high-risk group, and 19 children (8 boys) born to women with no history of depression formed the low-risk (control) group. Overall, the children recognized happiness significantly better than the other emotions at 36 months (P=0.000), with global recognition higher in girls (M=8.8) than boys (M=7.8) (P=0.013) and a positive correlation between global recognition ability and verbal IQ (P=0.000). Children who had less daytime sleep at 18 months and those who slept less at 24 months showed better recognition of sadness (P=0.043 and P=0.042); those with difficulties at bedtime at 18 months recognized happiness less well (P=0.043), and those who awoke earlier at 24 months had better global recognition of emotions (P=0.015). Finally, the boys of the high-risk group recognized sadness better than boys in the control group (P=0

  3. Frontal Mucocele following Previous Facial Trauma with Hardware Reconstruction

    OpenAIRE

    EuDaly, Megan; Kraus, Chadd K.

    2016-01-01

    Mucoceles are cysts that can develop after facial bone fractures, especially those involving the frontal sinuses. Despite being rare, mucoceles can result in serious delayed sequelae. We present a case of a frontal mucocele that developed two years after extensive facial trauma following a motor vehicle crash (MVC) and review the emergency department (ED) evaluation and treatment of mucocele. Early recognition, appropriate imaging, and an interdisciplinary approach are essential for managing ...

  4. Global facial beauty: approaching a unified aesthetic ideal.

    Science.gov (United States)

    Sands, Noah B; Adamson, Peter A

    2014-04-01

    Recognition of facial beauty is both inborn and learned through social discourses and exposures. Demographic shifts across the globe, in addition to cross-cultural interactions that typify 21st century globalization in virtually all industries, comprise major active evolutionary forces that reshape our individual notions of facial beauty. This article highlights the changing perceptions of beauty, while defining and distinguishing natural beauty and artificial beauty.

  5. Not just fear and sadness: meta-analytic evidence of pervasive emotion recognition deficits for facial and vocal expressions in psychopathy.

    Science.gov (United States)

    Dawel, Amy; O'Kearney, Richard; McKone, Elinor; Palermo, Romina

    2012-11-01

    The present meta-analysis aimed to clarify whether deficits in emotion recognition in psychopathy are restricted to certain emotions and modalities or whether they are more pervasive. We also attempted to assess the influence of other important variables: age, and the affective factor of psychopathy. A systematic search of electronic databases and a subsequent manual search identified 26 studies that included 29 experiments (N = 1376) involving six emotion categories (anger, disgust, fear, happiness, sadness, surprise) across three modalities (facial, vocal, postural). Meta-analyses found evidence of pervasive impairments across modalities (facial and vocal) with significant deficits evident for several emotions (i.e., not only fear and sadness) in both adults and children/adolescents. These results are consistent with recent theorizing that the amygdala, which is believed to be dysfunctional in psychopathy, has a broad role in emotion processing. We discuss limitations of the available data that restrict the ability of meta-analysis to consider the influence of age and separate the sub-factors of psychopathy, highlighting important directions for future research. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Impaired Attribution of Emotion to Facial Expressions in Anxiety and Major Depression

    NARCIS (Netherlands)

    Demenescu, Liliana R.; Kortekaas, Rudie; den Boer, Johan A.; Aleman, Andre

    2010-01-01

    Background: Recognition of others' emotions is an important aspect of interpersonal communication. In major depression, a significant emotion recognition impairment has been reported. It remains unclear whether the ability to recognize emotion from facial expressions is also impaired in anxiety

  7. Deficits in facial emotion recognition indicate behavioral changes and impaired self-awareness after moderate to severe traumatic brain injury.

    Science.gov (United States)

    Spikman, Jacoba M; Milders, Maarten V; Visser-Keizer, Annemarie C; Westerhof-Evers, Herma J; Herben-Dekker, Meike; van der Naalt, Joukje

    2013-01-01

    Traumatic brain injury (TBI) is a leading cause of disability, specifically among younger adults. Behavioral changes are common after moderate to severe TBI and have adverse consequences for social and vocational functioning. It is hypothesized that deficits in social cognition, including facial affect recognition, might underlie these behavioral changes. Measurement of behavioral deficits is complicated, because the rating scales used rely on subjective judgement, often lack specificity and many patients provide unrealistically positive reports of their functioning due to impaired self-awareness. Accordingly, it is important to find performance based tests that allow objective and early identification of these problems. In the present study 51 moderate to severe TBI patients in the sub-acute and chronic stage were assessed with a test for emotion recognition (FEEST) and a questionnaire for behavioral problems (DEX) with a self and proxy rated version. Patients performed worse on the total score and on the negative emotion subscores of the FEEST than a matched group of 31 healthy controls. Patients also exhibited significantly more behavioral problems on both the DEX self and proxy rated version, but proxy ratings revealed more severe problems. No significant correlation was found between FEEST scores and DEX self ratings. However, impaired emotion recognition in the patients, and in particular of Sadness and Anger, was significantly correlated with behavioral problems as rated by proxies and with impaired self-awareness. This is the first study to find these associations, strengthening the proposed recognition of social signals as a condition for adequate social functioning. Hence, deficits in emotion recognition can be conceived as markers for behavioral problems and lack of insight in TBI patients. This finding is also of clinical importance since, unlike behavioral problems, emotion recognition can be objectively measured early after injury, allowing for early

  8. Deficits in facial emotion recognition indicate behavioral changes and impaired self-awareness after moderate to severe traumatic brain injury.

    Directory of Open Access Journals (Sweden)

    Jacoba M Spikman

    Full Text Available Traumatic brain injury (TBI) is a leading cause of disability, specifically among younger adults. Behavioral changes are common after moderate to severe TBI and have adverse consequences for social and vocational functioning. It is hypothesized that deficits in social cognition, including facial affect recognition, might underlie these behavioral changes. Measurement of behavioral deficits is complicated, because the rating scales used rely on subjective judgement, often lack specificity and many patients provide unrealistically positive reports of their functioning due to impaired self-awareness. Accordingly, it is important to find performance based tests that allow objective and early identification of these problems. In the present study 51 moderate to severe TBI patients in the sub-acute and chronic stage were assessed with a test for emotion recognition (FEEST) and a questionnaire for behavioral problems (DEX) with a self and proxy rated version. Patients performed worse on the total score and on the negative emotion subscores of the FEEST than a matched group of 31 healthy controls. Patients also exhibited significantly more behavioral problems on both the DEX self and proxy rated version, but proxy ratings revealed more severe problems. No significant correlation was found between FEEST scores and DEX self ratings. However, impaired emotion recognition in the patients, and in particular of Sadness and Anger, was significantly correlated with behavioral problems as rated by proxies and with impaired self-awareness. This is the first study to find these associations, strengthening the proposed recognition of social signals as a condition for adequate social functioning. Hence, deficits in emotion recognition can be conceived as markers for behavioral problems and lack of insight in TBI patients. This finding is also of clinical importance since, unlike behavioral problems, emotion recognition can be objectively measured early after injury

  9. Emotion Recognition in Face and Body Motion in Bulimia Nervosa.

    Science.gov (United States)

    Dapelo, Marcela Marin; Surguladze, Simon; Morris, Robin; Tchanturia, Kate

    2017-11-01

    Social cognition has been studied extensively in anorexia nervosa (AN), but there are few studies in bulimia nervosa (BN). This study investigated the ability of people with BN to recognise emotions in ambiguous facial expressions and in body movement. Participants were 26 women with BN, who were compared with 35 with AN, and 42 healthy controls. Participants completed an emotion recognition task by using faces portraying blended emotions, along with a body emotion recognition task by using videos of point-light walkers. The results indicated that BN participants exhibited difficulties recognising disgust in less-ambiguous facial expressions, and a tendency to interpret non-angry faces as anger, compared with healthy controls. These difficulties were similar to those found in AN. There were no significant differences amongst the groups in body motion emotion recognition. The findings suggest that difficulties with disgust and anger recognition in facial expressions may be shared transdiagnostically in people with eating disorders. Copyright © 2017 John Wiley & Sons, Ltd and Eating Disorders Association.

  10. Effects of sad mood on facial emotion recognition in Chinese people.

    Science.gov (United States)

    Lee, Tatia M C; Ng, Emily H H; Tang, S W; Chan, Chetwyn C H

    2008-05-30

    This study examined the influence of sad mood on the judgment of ambiguous facial emotion expressions among 47 healthy volunteers who had been induced to feel sad (n=13), neutral (n=15), or happy (n=19) emotions by watching video clips. The findings suggest that when the targets were ambiguous, participants who were in a sad mood tended to classify them in the negative emotional categories rather than the positive emotional categories. Also, this observation indicates that emotion-specific negative bias in the judgment of facial expressions is associated with a sad mood. The finding argues against a general impairment in decoding facial expressions. Furthermore, the observed mood-congruent negative bias was best predicted by spatial perception. The findings of this study provide insights into the cognitive processes underlying the interpersonal difficulties experienced by people in a sad mood, which may be predisposing factors in the development of clinical depression.

  11. Food-Induced Emotional Resonance Improves Emotion Recognition.

    Science.gov (United States)

    Pandolfi, Elisa; Sacripante, Riccardo; Cardini, Flavia

    2016-01-01

    The effect of food substances on emotional states has been widely investigated, showing, for example, that eating chocolate is able to reduce negative mood. Here, for the first time, we have shown that the consumption of specific food substances is not only able to induce particular emotional states, but more importantly, to facilitate recognition of corresponding emotional facial expressions in others. Participants were asked to perform an emotion recognition task before and after eating either a piece of chocolate or a small amount of fish sauce-which we expected to induce happiness or disgust, respectively. Our results showed that being in a specific emotional state improves recognition of the corresponding emotional facial expression. Indeed, eating chocolate improved recognition of happy faces, while disgusted expressions were more readily recognized after eating fish sauce. In line with the embodied account of emotion understanding, we suggest that people are better at inferring the emotional state of others when their own emotional state resonates with the observed one.

  12. Food-Induced Emotional Resonance Improves Emotion Recognition

    Science.gov (United States)

    Pandolfi, Elisa; Sacripante, Riccardo; Cardini, Flavia

    2016-01-01

    The effect of food substances on emotional states has been widely investigated, showing, for example, that eating chocolate is able to reduce negative mood. Here, for the first time, we have shown that the consumption of specific food substances is not only able to induce particular emotional states, but more importantly, to facilitate recognition of corresponding emotional facial expressions in others. Participants were asked to perform an emotion recognition task before and after eating either a piece of chocolate or a small amount of fish sauce—which we expected to induce happiness or disgust, respectively. Our results showed that being in a specific emotional state improves recognition of the corresponding emotional facial expression. Indeed, eating chocolate improved recognition of happy faces, while disgusted expressions were more readily recognized after eating fish sauce. In line with the embodied account of emotion understanding, we suggest that people are better at inferring the emotional state of others when their own emotional state resonates with the observed one. PMID:27973559

  13. Lonely adolescents exhibit heightened sensitivity for facial cues of emotion.

    Science.gov (United States)

    Vanhalst, Janne; Gibb, Brandon E; Prinstein, Mitchell J

    2017-02-01

    Contradicting evidence exists regarding the link between loneliness and sensitivity to facial cues of emotion, as loneliness has been related to better but also to worse performance on facial emotion recognition tasks. This study aims to contribute to this debate and extends previous work by (a) focusing on both accuracy and sensitivity to detecting positive and negative expressions, (b) controlling for depressive symptoms and social anxiety, and (c) using an advanced emotion recognition task with videos of neutral adolescent faces gradually morphing into full-intensity expressions. Participants were 170 adolescents (49% boys; M age  = 13.65 years) from rural, low-income schools. Results showed that loneliness was associated with increased sensitivity to happy, sad, and fear faces. When controlling for depressive symptoms and social anxiety, loneliness remained significantly associated with sensitivity to sad and fear faces. Together, these results suggest that lonely adolescents are vigilant to negative facial cues of emotion.

  14. Borrowed beauty? Understanding identity in Asian facial cosmetic surgery

    NARCIS (Netherlands)

    Aquino, Y.S.; Steinkamp, N.L.

    2016-01-01

    This review aims to identify (1) sources of knowledge and (2) important themes of the ethical debate related to surgical alteration of facial features in East Asians. This article integrates narrative and systematic review methods. In March 2014, we searched databases including PubMed, Philosopher's

  15. Investigating emotion recognition and empathy deficits in Conduct Disorder using behavioural and eye-tracking methods

    OpenAIRE

    Martin-Key, Nayra, Anna

    2017-01-01

    The aim of this thesis was to characterise the nature of the emotion recognition and empathy deficits observed in male and female adolescents with Conduct Disorder (CD) and varying levels of callous-unemotional (CU) traits. The first two experiments employed behavioural tasks with concurrent eye-tracking methods to explore the mechanisms underlying facial and body expression recognition deficits. Having CD and being male independently predicted poorer facial expression recognition across all ...

  16. Impaired Emotional Mirroring in Parkinson’s Disease—A Study on Brain Activation during Processing of Facial Expressions

    Directory of Open Access Journals (Sweden)

    Anna Pohl

    2017-12-01

    Full Text Available Background: Affective dysfunctions are common in patients with Parkinson’s disease, but the underlying neurobiological deviations have rarely been examined. Parkinson’s disease is characterized by a loss of dopamine neurons in the substantia nigra, resulting in impairment of motor and non-motor basal ganglia-cortical loops. Concerning emotional deficits, some studies provide evidence for altered brain processing in limbic- and lateral-orbitofrontal gating loops. In a second line of evidence, human premotor and inferior parietal homologs of mirror neuron areas were involved in processing and understanding of emotional facial expressions. We examined deviations in brain activation during processing of facial expressions in patients and related these to emotion recognition accuracy. Methods: 13 patients and 13 healthy controls underwent an emotion recognition task and a functional magnetic resonance imaging (fMRI) measurement. In the Emotion Hexagon test, participants were presented with blends of two emotions and had to indicate which emotion best described the presented picture. Blended pictures with three levels of difficulty were included. During fMRI scanning, participants observed video clips depicting emotional, non-emotional, and neutral facial expressions or were asked to produce these facial expressions themselves. Results: Patients performed slightly worse in the emotion recognition task, but only when judging the most ambiguous facial expressions. Both groups activated inferior frontal and anterior inferior parietal homologs of mirror neuron areas during observation and execution of the emotional facial expressions. During observation, responses in the pars opercularis of the right inferior frontal gyrus, in the bilateral inferior parietal lobule and in the bilateral supplementary motor cortex were decreased in patients. Furthermore, in patients, activation of the right anterior inferior parietal lobule was positively related to accuracy in

  17. The time course of individual face recognition: A pattern analysis of ERP signals.

    Science.gov (United States)

    Nemrodov, Dan; Niemeier, Matthias; Mok, Jenkin Ngo Yin; Nestor, Adrian

    2016-05-15

    An extensive body of work documents the time course of neural face processing in the human visual cortex. However, the majority of this work has focused on specific temporal landmarks, such as the N170 and N250 components, derived through univariate analyses of EEG data. Here, we take on a broader evaluation of ERP signals related to individual face recognition as we attempt to move beyond the leading theoretical and methodological framework through the application of pattern analysis to ERP data. Specifically, we investigate the spatiotemporal profile of identity recognition across variation in emotional expression. To this end, we apply pattern classification to ERP signals both in time, for any single electrode, and in space, across multiple electrodes. Our results confirm the significance of traditional ERP components in face processing. At the same time, though, they support the idea that the temporal profile of face recognition is incompletely described by such components. First, we show that signals associated with different facial identities can be discriminated from each other outside the scope of these components, as early as 70 ms following stimulus presentation. Next, electrodes associated with traditional ERP components as well as, critically, those not associated with such components are shown to contribute information to stimulus discriminability. Last, the levels of ERP-based pattern discrimination are found to correlate with recognition accuracy across subjects, confirming the relevance of these methods for bridging brain and behavior data. Altogether, the current results shed new light on the fine-grained time course of neural face processing and showcase the value of novel pattern-analysis methods for investigating fundamental aspects of visual recognition. Copyright © 2016 Elsevier Inc. All rights reserved.
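
    The core method, pattern classification applied to ERP signals at each time point, can be sketched as follows with simulated data; the array shapes, classifier, and cross-validation scheme are assumptions rather than the authors' exact pipeline.

```python
# Sketch of time-resolved multivariate decoding of face identity from
# ERP data, in the spirit of the pattern analyses described above.
# Data are simulated; shapes, classifier, and CV scheme are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_timepoints = 200, 64, 150
X = rng.standard_normal((n_trials, n_electrodes, n_timepoints))  # simulated ERPs
y = rng.integers(0, 2, n_trials)                                 # two facial identities

# Decode identity at every time point from the spatial pattern across
# electrodes; chance level is 0.5 for two identities. With real data,
# the accuracy time course traces when identity information emerges.
accuracy = np.empty(n_timepoints)
for t in range(n_timepoints):
    clf = LogisticRegression(max_iter=1000)
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

print(f"peak decoding accuracy: {accuracy.max():.2f} at sample {accuracy.argmax()}")
```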

  18. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    Science.gov (United States)

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscles, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females as they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)] and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were thus larger in males than in females, but involved a more limited set of markers. Expanding our understanding of facial expression requires morphological studies of the facial muscles together with studies of their complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. © 2015 Wiley Periodicals, Inc.
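
    The displacement analysis reduces to simple vector geometry on the marker trajectories. A minimal sketch, assuming trajectories are stored as a (markers × frames × xyz) array in millimetres (the data here are simulated):

```python
# Sketch: computing marker displacements from 3D motion-capture
# trajectories, analogous to the analysis described above.
# Trajectories are simulated; array shapes and units are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_markers, n_frames = 44, 300                 # 44 reflective markers, one expression
trajectories = rng.standard_normal((n_markers, n_frames, 3))   # x, y, z in mm

# Displacement of each marker from its neutral (first-frame) position.
neutral = trajectories[:, :1, :]              # shape (44, 1, 3), broadcasts over frames
displacement = np.linalg.norm(trajectories - neutral, axis=2)  # shape (44, 300)

max_disp = displacement.max(axis=1)           # peak displacement per marker
print(f"mean maximum displacement: {max_disp.mean():.2f} mm")
print("ten most mobile markers:", np.argsort(max_disp)[-10:][::-1])
```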

  19. Assessment of perception of morphed facial expressions using the Emotion Recognition Task: normative data from healthy participants aged 8-75.

    Science.gov (United States)

    Kessels, Roy P C; Montagne, Barbara; Hendriks, Angelique W; Perrett, David I; de Haan, Edward H F

    2014-03-01

    The ability to recognize and label emotional facial expressions is an important aspect of social cognition. However, existing paradigms to examine this ability present only static facial expressions, suffer from ceiling effects, or have limited or no norms. A computerized test, the Emotion Recognition Task (ERT), was developed to overcome these difficulties. In this study, we examined the effects of age, sex, and intellectual ability on emotion perception using the ERT. In this test, emotional facial expressions are presented as morphs gradually expressing one of the six basic emotions from neutral to four levels of intensity (40%, 60%, 80%, and 100%). The task was administered to 373 healthy participants aged 8-75. In children aged 8-17, only small developmental effects were found for the emotions anger and happiness, in contrast to adults, who showed age-related decline on anger, fear, happiness, and sadness. Sex differences were present predominantly in the adult participants. IQ only minimally affected the perception of disgust in the children, while years of education were correlated with all emotions but surprise and disgust in the adult participants. A regression-based approach was adopted to present age- and education- or IQ-adjusted normative data for use in clinical practice. Previous studies using the ERT have demonstrated selective impairments on specific emotions in a variety of psychiatric, neurologic, or neurodegenerative patient groups, making the ERT a valuable addition to existing paradigms for the assessment of emotion perception. © 2013 The British Psychological Society.
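
    The regression-based norming mentioned at the end can be sketched as follows: predict an expected score from age and education, then standardize the residual. All coefficients below are invented for illustration and are not the published ERT norms.

```python
# Sketch of regression-based norming: predict the expected test score
# from age and education, then express an individual's observed score
# as a standardized residual (z). Coefficients are hypothetical; the
# actual ERT norms use values fitted to the 373-participant sample.
def expected_score(age: float, education_years: float) -> float:
    # Hypothetical fitted model: score declines with age, rises with education.
    return 28.0 - 0.08 * age + 0.35 * education_years

RESIDUAL_SD = 3.2   # hypothetical residual standard deviation of the model

def norm_z(observed: float, age: float, education_years: float) -> float:
    """Age/education-adjusted z-score; markedly negative z suggests impairment."""
    return (observed - expected_score(age, education_years)) / RESIDUAL_SD

print(round(norm_z(observed=22.0, age=70, education_years=10), 2))
# expected = 28 - 5.6 + 3.5 = 25.9; z = (22 - 25.9) / 3.2 ≈ -1.22
```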

  20. Processing of emotional facial expressions in Korsakoff's syndrome.

    NARCIS (Netherlands)

    Montagne, B.; Kessels, R.P.C.; Wester, A.J.; Haan, E.H.F. de

    2006-01-01

    Interpersonal contacts depend to a large extent on understanding emotional facial expressions of others. Several neurological conditions may affect proficiency in emotional expression recognition. It has been shown that chronic alcoholics are impaired in labelling emotional expressions.