WorldWideScience

Sample records for facial identity recognition

  1. Facial Expression at Retrieval Affects Recognition of Facial Identity

    Directory of Open Access Journals (Sweden)

    Wenfeng Chen

    2015-06-01

    It is well known that memory can be modulated by emotional stimuli at the time of encoding and consolidation. For example, happy faces create better identity recognition than faces with certain other expressions. However, the influence of facial expression at the time of retrieval remains unknown in the literature. To separate the potential influence of expression at retrieval from its effects at earlier stages, we had participants learn neutral faces but manipulated facial expression at the time of memory retrieval in a standard old/new recognition task. The results showed a clear effect of facial expression, where happy test faces were identified more successfully than angry test faces. This effect is unlikely to be due to greater image similarity between the neutral learning face and the happy test face, because image analysis showed that the happy test faces are in fact less similar to the neutral learning faces relative to the angry test faces. In the second experiment, we investigated whether this emotional effect is influenced by the expression at the time of learning. We employed angry or happy faces as learning stimuli, and angry, happy, and neutral faces as test stimuli. The results showed that the emotional effect at retrieval is robust across different encoding conditions with happy or angry expressions. These findings indicate that emotional expressions affect the retrieval process in identity recognition, and that identity recognition does not rely on an emotional association between learning and test faces.

  2. Positive and negative facial emotional expressions: the effect on infants' and children's facial identity recognition

    OpenAIRE

    Brenna,

    2013-01-01

    The aim of the present study was to investigate the origin and development of the interdependence between identity recognition and facial emotional expression processing, suggested by recent models of face processing (Calder & Young, 2005) and supported by findings in adults (e.g. Baudouin, Gilibert, Sansone, & Tiberghien, 2000; Schweinberger & Soukup, 1998). In particular, the effect of facial emotional expressions on infants' and children's ability to recognize the identity of a face was explored...

  3. Facial expression influences face identity recognition during the attentional blink.

    Science.gov (United States)

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppressed memory access for competing objects, but only the angry facial expression enjoyed privileged memory access. This could imply that these two processes are relatively independent of one another.

  4. Facial identity recognition in the broader autism phenotype.

    Directory of Open Access Journals (Sweden)

    C Ellie Wilson

    BACKGROUND: The 'broader autism phenotype' (BAP) refers to the mild expression of autistic-like traits in the relatives of individuals with autism spectrum disorder (ASD). Establishing the presence of ASD traits provides insight into which traits are heritable in ASD. Here, the ability to recognise facial identity was tested in 33 parents of ASD children. METHODOLOGY AND RESULTS: In experiment 1, parents of ASD children completed the Cambridge Face Memory Test (CFMT), and a questionnaire assessing the presence of autistic personality traits. The parents, particularly the fathers, were impaired on the CFMT, but there were no associations between face recognition ability and autistic personality traits. In experiment 2, parents and probands completed equivalent versions of a simple test of face matching. On this task, the parents were not impaired relative to typically developing controls; however, the proband group was impaired. Crucially, the mothers' face matching scores correlated with the probands', even when performance on an equivalent test of matching non-face stimuli was controlled for. CONCLUSIONS AND SIGNIFICANCE: Components of face recognition ability are impaired in some relatives of ASD individuals. Results suggest that face recognition skills are heritable in ASD, and genetic and environmental factors accounting for the pattern of heritability are discussed. In general, results demonstrate the importance of assessing the skill level of the proband when investigating particular characteristics of the BAP.

  5. Discriminability effect on Garner interference: evidence from recognition of facial identity and expression

    Directory of Open Access Journals (Sweden)

    Yamin Wang

    2013-12-01

    Using Garner's speeded classification task, existing studies have demonstrated an asymmetric interference in the recognition of facial identity and facial expression: expression seems unable to interfere with identity recognition. However, the discriminability of identity and expression, a potential confounding variable, had not been carefully examined in existing studies. In the current work, we manipulated the discriminability of identity and expression by matching facial shape (long or round) in identity and matching the mouth (opened or closed) in facial expression. Garner interference was found either from identity to expression (Experiment 1) or from expression to identity (Experiment 2). Interference was also found in both directions (Experiment 3) or in neither direction (Experiment 4). The results support the view that Garner interference tends to occur under conditions of low discriminability of the relevant dimension, regardless of the facial property. Our findings indicate that Garner interference is not necessarily related to interdependent processing in the recognition of facial identity and expression. The findings also suggest that discriminability, as a mediating factor, should be carefully controlled in future research.

  6. Facial Recognition

    National Research Council Canada - National Science Library

    Mihalache Sergiu; Stoica Mihaela-Zoica

    2014-01-01

    .... From birth, faces are important in the individual's social interaction. Face perceptions are very complex as the recognition of facial expressions involves extensive and diverse areas in the brain...

  7. Visual scan paths and recognition of facial identity in autism spectrum disorder and typical development.

    Science.gov (United States)

    Wilson, C Ellie; Palermo, Romina; Brock, Jon

    2012-01-01

    Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: i) better facial identity recognition is associated with increased gaze time on the Eye region; ii) better facial identity recognition is associated with increased eye-movements around the face. Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the 'Dynamic Scanning Index'--which was incremented each time the participant saccaded into and out of one of the core-feature interest areas--was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. In support of the second hypothesis, results suggested that increased saccading between core-features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined.

  8. Visual scan paths and recognition of facial identity in autism spectrum disorder and typical development.

    Directory of Open Access Journals (Sweden)

    C Ellie Wilson

    BACKGROUND: Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: (i) better facial identity recognition is associated with increased gaze time on the Eye region; (ii) better facial identity recognition is associated with increased eye-movements around the face. METHODOLOGY AND PRINCIPAL FINDINGS: Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the 'Dynamic Scanning Index'--which was incremented each time the participant saccaded into and out of one of the core-feature interest areas--was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. CONCLUSIONS AND SIGNIFICANCE: In support of the second hypothesis, results suggested that increased saccading between core-features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined.
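
    To make the measure concrete, the following is a minimal Python sketch of how a transition count in the spirit of the 'Dynamic Scanning Index' could be computed; the interest-area labels and the exact increment rule are illustrative assumptions, not the authors' implementation.

        # Hypothetical sketch of a "Dynamic Scanning Index"-style count.
        # The label set and increment rule are illustrative assumptions.
        CORE_FEATURES = {"left_eye", "right_eye", "nose", "mouth"}

        def dynamic_scanning_index(fixation_labels):
            """fixation_labels: one interest-area label per fixation,
            with None for fixations outside all interest areas."""
            index = 0
            for prev, curr in zip(fixation_labels, fixation_labels[1:]):
                if prev == curr:
                    continue  # same area: no between-area saccade
                if prev in CORE_FEATURES or curr in CORE_FEATURES:
                    index += 1  # saccade into or out of a core feature
            return index

        # Example: eyes -> nose -> outside -> mouth yields 3 transitions
        print(dynamic_scanning_index(["left_eye", "nose", None, "mouth"]))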

  9. Reduced Reliance on Optimal Facial Information for Identity Recognition in Autism Spectrum Disorder

    Science.gov (United States)

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2013-01-01

    Previous research into face processing in autism spectrum disorder (ASD) has revealed atypical biases toward particular facial information during identity recognition. Specifically, a focus on features (or high spatial frequencies [HSFs]) has been reported for both face and nonface processing in ASD. The current study investigated the development…

  10. Attention to Social Stimuli and Facial Identity Recognition Skills in Autism Spectrum Disorder

    Science.gov (United States)

    Wilson, C. E.; Brock, J.; Palermo, R.

    2010-01-01

    Background: Previous research suggests that individuals with autism spectrum disorder (ASD) have a reduced preference for viewing social stimuli in the environment and impaired facial identity recognition. Methods: Here, we directly tested a link between these two phenomena in 13 ASD children and 13 age-matched typically developing (TD) controls.…

  11. [Prosopagnosia and facial expression recognition].

    Science.gov (United States)

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  12. Facial Recognition

    Directory of Open Access Journals (Sweden)

    Mihalache Sergiu

    2014-05-01

    During their lifetime, people learn to recognize thousands of faces that they interact with. Face perception refers to an individual's understanding and interpretation of the face, particularly the human face, especially in relation to the associated information processing in the brain. The proportions and expressions of the human face are important to identify origin, emotional tendencies, health qualities, and some social information. From birth, faces are important in the individual's social interaction. Face perceptions are very complex as the recognition of facial expressions involves extensive and diverse areas in the brain. Our main goal is to put emphasis on presenting specialized studies of human faces, and also to highlight the importance of attractiveness in their retention. We will see that there are many factors that influence face recognition.

  13. Contributions of feature shapes and surface cues to the recognition and neural representation of facial identity.

    Science.gov (United States)

    Andrews, Timothy J; Baseler, Heidi; Jenkins, Rob; Burton, A Mike; Young, Andrew W

    2016-10-01

    A full understanding of face recognition will involve identifying the visual information that is used to discriminate different identities and how this is represented in the brain. The aim of this study was to explore the importance of shape and surface properties in the recognition and neural representation of familiar faces. We used image morphing techniques to generate hybrid faces that mixed shape properties (more specifically, second order spatial configural information as defined by feature positions in the 2D-image) from one identity and surface properties from a different identity. Behavioural responses showed that recognition and matching of these hybrid faces was primarily based on their surface properties. These behavioural findings contrasted with neural responses recorded using a block design fMRI adaptation paradigm to test the sensitivity of Haxby et al.'s (2000) core face-selective regions in the human brain to the shape or surface properties of the face. The fusiform face area (FFA) and occipital face area (OFA) showed a lower response (adaptation) to repeated images of the same face (same shape, same surface) compared to different faces (different shapes, different surfaces). From the behavioural data indicating the critical contribution of surface properties to the recognition of identity, we predicted that brain regions responsible for familiar face recognition should continue to adapt to faces that vary in shape but not surface properties, but show a release from adaptation to faces that vary in surface properties but not shape. However, we found that the FFA and OFA showed an equivalent release from adaptation to changes in both shape and surface properties. The dissociation between the neural and perceptual responses suggests that, although they may play a role in the process, these core face regions are not solely responsible for the recognition of facial identity.

  14. Is facial emotion recognition impairment in schizophrenia identical for different emotions? A signal detection analysis.

    Science.gov (United States)

    Tsoi, Daniel T; Lee, Kwang-Hyuk; Khokhar, Waqqas A; Mir, Nusrat U; Swalli, Jaspal S; Gee, Kate A; Pluck, Graham; Woodruff, Peter W R

    2008-02-01

    Patients with schizophrenia have difficulty recognising the emotion that corresponds to a given facial expression. According to signal detection theory, two separate processes are involved in facial emotion perception: a sensory process (measured by sensitivity, the ability to distinguish one facial emotion from another) and a cognitive decision process (measured by response criterion, the tendency to judge a facial emotion as a particular emotion). It is uncertain whether facial emotion recognition deficits in schizophrenia are primarily due to impaired sensitivity or response bias. In this study, we hypothesised that individuals with schizophrenia would have both diminished sensitivity and different response criteria in facial emotion recognition across different emotions compared with healthy controls. Twenty-five individuals with a DSM-IV diagnosis of schizophrenia were compared with age- and IQ-matched healthy controls. Participants performed a "yes-no" task by indicating whether the 88 Ekman faces shown briefly expressed one of the target emotions in three randomly ordered runs (happy, sad and fear). Sensitivity and response criterion for facial emotion recognition were calculated as d-prime and ln(beta), respectively, using signal detection theory. Patients with schizophrenia showed diminished sensitivity (d-prime) in recognising happy faces, but not faces that expressed fear or sadness. By contrast, patients exhibited a significantly less strict response criterion (ln(beta)) in recognising fearful and sad faces. Our results suggest that patients with schizophrenia have a specific deficit in recognising happy faces, whereas they are more inclined to judge any facial emotion as fearful or sad.
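
    For reference, the two signal detection measures used above can be computed from hit and false-alarm rates with the inverse normal CDF; a minimal Python sketch (the rates shown are made-up numbers, not the study's data):

        # d-prime and ln(beta) from standard signal detection theory.
        from scipy.stats import norm

        def sdt_measures(hit_rate, fa_rate):
            z_hit = norm.ppf(hit_rate)  # z-transform of hit rate
            z_fa = norm.ppf(fa_rate)    # z-transform of false-alarm rate
            d_prime = z_hit - z_fa                # sensitivity
            ln_beta = (z_fa**2 - z_hit**2) / 2.0  # response criterion
            return d_prime, ln_beta

        print(sdt_measures(0.8, 0.2))  # ~ (1.68, 0.0): unbiased observer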

  15. Facial Expression Recognition

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial

  16. PCA facial expression recognition

    Science.gov (United States)

    El-Hori, Inas H.; El-Momen, Zahraa K.; Ganoun, Ali

    2013-12-01

    This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. A comparative study of Facial Expression Recognition (FER) techniques, namely Principal Component Analysis (PCA) and PCA with Gabor filters (GF), is presented. The objective of this research is to show that PCA with Gabor filters is superior to the plain-PCA technique in terms of recognition rate. To evaluate their performance, experiments were performed with both techniques on a real database. The five principal expressions to be recognized are: happy, sad, disgust and angry, along with neutral. Recognition rates are obtained for all the facial expressions.
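
    As an illustration of the plain-PCA baseline compared above, a minimal Python sketch; the arrays, component count and nearest-neighbour classifier are assumptions, and the Gabor-filter variant is not shown:

        # PCA-based expression recognition baseline (illustrative only).
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline

        def train_pca_fer(train_faces, train_labels, n_components=50):
            """train_faces: (n_samples, height*width) flattened grayscale images."""
            model = make_pipeline(
                PCA(n_components=n_components),       # project to eigen-space
                KNeighborsClassifier(n_neighbors=1),  # match in reduced space
            )
            return model.fit(train_faces, train_labels)

        # Hypothetical usage:
        # model = train_pca_fer(X_train, y_train)
        # recognition_rate = model.score(X_test, y_test)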

  17. Effects of Spatial Frequencies on Recognition of Facial Identity and Facial Expression

    Institute of Scientific and Technical Information of China (English)

    汪亚珉; 王志贤; 黄雅梅; 蒋静; 丁锦红

    2011-01-01

    Previous research on facial identity and expression recognition suggests that high spatial frequency information may be selectively involved in expression recognition, whereas low spatial frequency information may be selectively involved in identity recognition. To test this hypothesis, three experiments measuring the Garner effect were designed in which spatial frequency was manipulated. Experiment 1 measured the Garner effect between identity and expression recognition under full-spectrum conditions; the results showed significant mutual interference. Experiment 2 measured the interference under high-frequency conditions and found that the Garner effect on expression recognition was no longer significant while the effect on identity recognition was unchanged, yielding a dissociation. Experiment 3 measured the Garner effect under low-frequency conditions; here the Garner effects on both expression and identity recognition remained significant, unaffected by the filtering of high frequencies. Based on the Garner paradigm, a method that simultaneously considers both a separability index and a difficulty index of face recognition is proposed and used to analyze the results, leading to the conclusion that high spatial frequency information is an effective scale for separating facial identity from expression information.

    By changing configural or featural/category information, White (2002) revealed that configural changes mainly interfered with facial identity processing while featural alterations largely reduced facial expression processing. With this technique, Goffaux, Hault, Michel, Vuongo, and Rossion (2005) showed that low spatial frequency plays a role in the detection of configural changes, whereas the detection of featural changes depends on high spatial frequency. Based on these two studies, we can draw the conclusion that low spatial frequency plays an important role in facial identity recognition while high spatial frequency is important for facial expression recognition. Can this conclusion really be supported by experiments? To test this hypothesis, we conducted three Garner experiments in the current study. In terms of the hypothesis, high spatial frequency enhances facial expression recognition but not facial identity recognition, while low spatial frequency facilitates facial identity recognition but not facial expression recognition; a dissociation should thus be found in recognition of facial identity and facial expression. Three Garner experiments were performed on 96

  18. Perception of facial expression and facial identity in subjects with social developmental disorders.

    Science.gov (United States)

    Hefter, Rebecca L; Manoach, Dara S; Barton, Jason J S

    2005-11-22

    It has been hypothesized that the social dysfunction in social developmental disorders (SDDs), such as autism, Asperger disorder, and the socioemotional processing disorder, impairs the acquisition of normal face-processing skills. The authors investigated whether this purported perceptual deficit was generalized to both facial expression and facial identity or whether these different types of facial perception were dissociated in SDDs. They studied 26 adults with a variety of SDD diagnoses, assessing their ability to discriminate famous from anonymous faces, their perception of emotional expression from facial and nonfacial cues, and the relationship between these abilities. They also compared the performance of two defined subgroups of subjects with SDDs on expression analysis: one with normal and one with impaired recognition of facial identity. While perception of facial expression was related to the perception of nonfacial expression, the perception of facial identity was not related to either facial or nonfacial expression. Likewise, subjects with SDDs with impaired facial identity processing perceived facial expression as well as those with normal facial identity processing. The processing of facial identity and that of facial expression are dissociable in social developmental disorders. Deficits in perceiving facial expression may be related to emotional processing more than face processing. Dissociations between the perception of facial identity and facial emotion are consistent with current cognitive models of face processing. The results argue against hypotheses that the social dysfunction in social developmental disorder causes a generalized failure to acquire face-processing skills.

  1. Recognition of facial emotions and identity in patients with mesial temporal lobe and idiopathic generalized epilepsy: an eye-tracking study.

    Science.gov (United States)

    Gomez-Ibañez, Asier; Urrestarazu, Elena; Viteri, Cesar

    2014-11-01

    The aim was to describe the visual scanning pattern for facial identity recognition (FIR) and facial emotion recognition (FER) in patients with idiopathic generalized epilepsy (IGE) and mesial temporal lobe epilepsy (MTLE); a secondary endpoint was to correlate the results with cognitive function. The Benton Facial Recognition Test (BFRT) and the Ekman & Friesen series were administered for FIR and FER, respectively, in 23 controls, 20 IGE and 19 MTLE patients. Eye movements were recorded by a Hi-Speed eye-tracker system. Neuropsychological tools explored cognitive function. The correct FIR rate was 78% in controls, 70.7% in IGE and 67.4% (p=0.009) in MTLE patients. FER hits reached 82.7% in controls, 74.3% in IGE (p=0.006) and 73.4% in MTLE (p=0.002) groups. IGE patients failed on disgust (p=0.005) and MTLE patients on fear (p=0.009) and disgust (p=0.03). FER correlated with neuropsychological scores, particularly verbal fluency (r=0.542, p<0.001). Eye-tracking revealed that, for FIR, controls scanned faces more diffusely than IGE and MTLE patients, who tended to concentrate on top facial areas. Longer scanning of the top facial area was found in all three groups for FER. The gap between top and bottom facial region fixation times decreased in MTLE patients, with more but shorter fixations in the bottom facial region; however, none of these findings were statistically significant. FIR was impaired in MTLE patients, and FER in both IGE and MTLE, particularly for fear and disgust. Although not statistically significant, those with impaired FER tended to scan more diffusely over the faces and to have cognitive dysfunction.

  2. [Neurological disease and facial recognition].

    Science.gov (United States)

    Kawamura, Mitsuru; Sugimoto, Azusa; Kobayakawa, Mutsutaka; Tsuruya, Natsuko

    2012-07-01

    To discuss the neurological basis of facial recognition, we present our case reports of impaired recognition and a review of previous literature. First, we present a case of infarction and discuss prosopagnosia, which has had a large impact on face recognition research. From a study of patient symptoms, we assume that prosopagnosia may be caused by a unilateral right occipitotemporal lesion, suggesting right cerebral dominance of facial recognition. Further, circumscribed lesions and degenerative disease may also cause progressive prosopagnosia. Apperceptive prosopagnosia is observed in patients with posterior cortical atrophy (PCA), pathologically considered to be Alzheimer's disease, and associative prosopagnosia in frontotemporal lobar degeneration (FTLD). Second, we discuss face recognition as part of communication. Patients with Parkinson disease show social cognitive impairments, such as difficulty in facial expression recognition and deficits in theory of mind as detected by the Reading the Mind in the Eyes test. Pathological and functional imaging studies indicate that social cognitive impairment in Parkinson disease is possibly related to damage to the amygdalae and the surrounding limbic system. The social cognitive deficits can be observed in the early stages of Parkinson disease, and even in the prodromal stage; for example, patients with rapid eye movement (REM) sleep behavior disorder (RBD) show impairment in facial expression recognition. Further, patients with myotonic dystrophy type 1 (DM 1), which is a multisystem disease that mainly affects the muscles, show social cognitive impairment similar to that of Parkinson disease. Our previous study showed that the facial expression recognition impairment of DM 1 patients is associated with lesions in the amygdalae and insulae. Our study results indicate that behaviors and personality traits in DM 1 patients, which are revealed by social cognitive impairment, are attributable to dysfunction of the limbic system.

  3. Facial Expression Recognition Using SVM Classifier

    OpenAIRE

    2015-01-01

    Facial feature tracking and facial action recognition from image sequences have attracted great attention in the computer vision field. Computational facial expression analysis is a challenging research topic in computer vision. It is required by many applications such as human-computer interaction, computer graphics animation and automatic facial expression recognition. In recent years, plenty of computer vision techniques have been developed to track or recognize facial activities at three levels...

  4. Heartbeat Signal from Facial Video for Biometric Recognition

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    Different biometric traits such as face appearance and the heartbeat signal from the Electrocardiogram (ECG)/Phonocardiogram (PCG) are widely used in human identity recognition. Recent advances in facial-video-based measurement of cardio-physiological parameters such as heartbeat rate, respiratory rate, and blood volume pressure provide the possibility of extracting the heartbeat signal from facial video instead of using obtrusive ECG or PCG sensors on the body. This paper proposes the Heartbeat Signal from Facial Video (HSFV) as a new biometric trait for human identity recognition, for the first time...
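
    A minimal Python sketch of the general idea, assuming face regions have already been cropped from each frame; the band limits and filter order are illustrative assumptions, not the paper's pipeline:

        # Recover a pulse-like signal by bandpass-filtering the mean
        # green-channel intensity of the face region over time.
        import numpy as np
        from scipy.signal import butter, filtfilt

        def heartbeat_signal(face_frames, fps=30.0):
            """face_frames: (n_frames, height, width, 3) RGB face crops."""
            green = face_frames[:, :, :, 1].mean(axis=(1, 2))  # per-frame mean
            green = green - green.mean()                       # remove DC offset
            # keep 0.7-4.0 Hz, i.e. roughly 42-240 beats per minute
            b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)
            return filtfilt(b, a, green)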

  5. Facial Action Units Recognition: A Comparative Study

    NARCIS (Netherlands)

    Popa, M.C.; Rothkrantz, L.J.M.; Wiggers, P.; Braspenning, R.A.C.; Shan, C.

    2011-01-01

    Many approaches to facial expression recognition focus on assessing the six basic emotions (anger, disgust, happiness, fear, sadness, and surprise). Real-life situations proved to produce many more subtle facial expressions. A reliable way of analyzing the facial behavior is the Facial Action Coding

  6. Recognition of Face and Emotional Facial Expressions in Autism

    Directory of Open Access Journals (Sweden)

    Muhammed Tayyib Kadak

    2013-03-01

    Autism is a genetically transmitted neurodevelopmental disorder characterized by severe and permanent deficits in many areas of interpersonal relations, such as communication, social interaction and emotional responsiveness. Patients with autism have deficits in face recognition, eye contact and recognition of emotional expression. Both recognition of faces and recognition of emotional facial expressions depend on face processing. Structural and functional impairment in the fusiform gyrus, amygdala, superior temporal sulcus and other brain regions leads to deficits in recognition of faces and facial emotion. Studies therefore suggest that face processing deficits result in problems in the areas of social interaction and emotion in autism. Studies revealed that children with autism had problems in recognition of facial expression and used the mouth region more than the eye region. It was also shown that autistic patients interpreted ambiguous expressions as negative emotions. In autism, deficits related to various stages of face processing, such as detection of gaze, face identity and recognition of emotional expression, have been determined so far. Social interaction impairments in autistic spectrum disorders originate from face processing deficits during the periods of infancy, childhood and adolescence. Recognition of faces and of emotional facial expressions could be affected either automatically, by orienting towards faces after birth, or by "learning" processes in developmental periods such as identity and emotion processing. This article aimed to review the neurobiological basis of face processing and the recognition of emotional facial expressions during normal development and in autism.

  7. Simultaneous facial feature tracking and facial expression recognition.

    Science.gov (United States)

    Li, Yongqiang; Wang, Shangfei; Zhao, Yongping; Ji, Qiang

    2013-07-01

    The tracking and recognition of facial activities from images or videos have attracted great attention in the computer vision field. Facial activities are characterized by three levels. First, in the bottom level, facial feature points around each facial component, i.e., eyebrow, mouth, etc., capture the detailed face shape information. Second, in the middle level, facial action units, defined in the Facial Action Coding System, represent the contraction of a specific set of facial muscles, i.e., lid tightener, eyebrow raiser, etc. Finally, in the top level, six prototypical facial expressions represent the global facial muscle movement and are commonly used to describe human emotional states. In contrast to the mainstream approaches, which usually only focus on one or two levels of facial activity and track (or recognize) them separately, this paper introduces a unified probabilistic framework based on the dynamic Bayesian network to simultaneously and coherently represent the facial evolvement at different levels, their interactions and their observations. Advanced machine learning methods are introduced to learn the model based on both training data and subjective prior knowledge. Given the model and the measurements of facial motions, all three levels of facial activities are simultaneously recognized through a probabilistic inference. Extensive experiments are performed to illustrate the feasibility and effectiveness of the proposed model on all three levels of facial activity.

  8. Fusing Facial Features for Face Recognition

    Directory of Open Access Journals (Sweden)

    Jamal Ahmad Dargham

    2012-06-01

    Face recognition is an important biometric method because of its potential applications in many fields, such as access control, surveillance, and human-computer interaction. In this paper, a face recognition system that fuses the outputs of three face recognition systems based on Gabor jets is presented. The first system uses the magnitude, the second uses the phase, and the third uses the phase-weighted magnitude of the jets. The jets are generated from facial landmarks selected using three selection methods. It was found that fusing the facial features gives a better recognition rate than any facial feature used individually, regardless of the landmark selection method.
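
    The fusion step itself can be illustrated with a short Python sketch; the weighted-sum rule and equal weights are assumptions for illustration, since the paper's exact fusion scheme is not reproduced here:

        # Score-level fusion of the three jet-based matchers (illustrative).
        import numpy as np

        def fuse_and_identify(score_mag, score_phase, score_pwm,
                              weights=(1/3, 1/3, 1/3)):
            """Each score matrix: (n_probes, n_gallery) similarities."""
            fused = (weights[0] * score_mag
                     + weights[1] * score_phase
                     + weights[2] * score_pwm)
            return fused.argmax(axis=1)  # best-matching gallery identity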

  9. Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia.

    Science.gov (United States)

    Palermo, Romina; Willis, Megan L; Rivolta, Davide; McKone, Elinor; Wilson, C Ellie; Calder, Andrew J

    2011-04-01

    We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and 'social'). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability.

  10. Facial expression recognition using thermal image.

    Science.gov (United States)

    Jiang, Guotai; Song, Xuemin; Zheng, Fuhui; Wang, Peipei; Omer, Ashgan

    2005-01-01

    Facial expression recognition is studied in this paper using mathematical morphology, by extracting and analyzing global and local geometric characteristics of the regions of interest in Infrared Thermal Imaging (IRTI). The results show that the geometric characteristics of the regions of interest differ markedly across expressions, and that facial temperature changes almost simultaneously with expression. These studies show the feasibility of facial expression recognition on the basis of IRTI. The method can be used to monitor facial expression in real time, which can be useful for auxiliary medical diagnosis of disease.

  11. Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder.

    Science.gov (United States)

    Sachse, Michael; Schlitt, Sabine; Hainz, Daniela; Ciaramidaro, Angela; Walter, Henrik; Poustka, Fritz; Bölte, Sven; Freitag, Christine M

    2014-11-01

    Schizophrenia (SZ) and autism spectrum disorder (ASD) share deficits in emotion processing. In order to identify convergent and divergent mechanisms, we investigated facial emotion recognition in SZ, high-functioning ASD (HFASD), and typically developed controls (TD). Different degrees of task difficulty and emotion complexity (face, eyes; basic emotions, complex emotions) were used. Two Benton tests were administered in order to assess potentially confounding visuo-perceptual functioning and facial processing. Nineteen participants with paranoid SZ, 22 with HFASD and 20 TD were included, aged between 14 and 33 years. Individuals with SZ were comparable to TD in all obtained emotion recognition measures, but showed reduced basic visuo-perceptual abilities. The HFASD group was impaired in the recognition of basic and complex emotions compared to both SZ and TD. When facial identity recognition was adjusted for, group differences remained for the recognition of complex emotions only. Our results suggest that there is an SZ subgroup with predominantly paranoid symptoms that does not show problems in face processing and emotion recognition, but visuo-perceptual impairments. They also confirm the notion of a general facial and emotion recognition deficit in HFASD. No shared emotion recognition deficit was found for paranoid SZ and HFASD, emphasizing the differential cognitive underpinnings of both disorders.

  12. Robust facial expression recognition via compressive sensing.

    Science.gov (United States)

    Zhang, Shiqing; Zhao, Xiaoming; Lei, Bicheng

    2012-01-01

    Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision and pattern recognition. In this paper, a new method based on the CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method is investigated on clean and occluded facial expression images. Three typical facial features, i.e., the raw pixels, Gabor wavelets representation and local binary patterns (LBP), are extracted to evaluate the performance of the SRC method. Compared with the nearest neighbor (NN), linear support vector machines (SVM) and the nearest subspace (NS), experimental results on the popular Cohn-Kanade facial expression database demonstrate that the SRC method obtains better performance and stronger robustness to corruption and occlusion on robust facial expression recognition tasks.
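
    A minimal Python sketch of a sparse representation classifier, with the L1-minimisation step approximated by Lasso regression; the regularisation weight is an illustrative assumption:

        # Sparse representation classification: code the probe as a sparse
        # combination of training samples, then pick the class with the
        # smallest reconstruction residual.
        import numpy as np
        from sklearn.linear_model import Lasso

        def src_predict(train_X, train_y, probe, alpha=0.01):
            """train_X: (n_train, n_features); probe: (n_features,)."""
            lasso = Lasso(alpha=alpha, max_iter=10000)
            lasso.fit(train_X.T, probe)  # dictionary columns = train samples
            coef = lasso.coef_
            residuals = {}
            for c in np.unique(train_y):
                mask = train_y == c
                recon = train_X[mask].T @ coef[mask]  # class-wise reconstruction
                residuals[c] = np.linalg.norm(probe - recon)
            return min(residuals, key=residuals.get)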

  13. Efficient Facial Expression and Face Recognition using Ranking Method

    Directory of Open Access Journals (Sweden)

    Murali Krishna kanala

    2015-06-01

    Expression detection is useful as a non-invasive method of lie detection and behaviour prediction. However, these facial expressions may be difficult to detect with the untrained eye. In this paper we implement facial expression recognition techniques using a ranking method. The human face plays an important role in our social interaction, conveying people's identity. Using the human face as a key to security, biometric face recognition technology has received significant attention in the past several years. Experiments are performed using a standard database of expressions such as surprise, sadness and happiness. The universally accepted three principal emotions to be recognized are: surprise, sadness and happiness, along with neutral.

  14. Wavelet based approach for facial expression recognition

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin

    2015-03-01

    Facial expression recognition is one of the most active fields of research. Many facial expression recognition methods have been developed and implemented. Neural networks (NNs) have the capability to undertake such pattern recognition tasks. The key factor in the use of NNs is their characteristics: they are capable of learning and generalization, non-linear mapping, and parallel computation. Backpropagation neural networks (BPNNs) are the most commonly used approach. In this study, BPNNs were used as classifiers to categorize facial expression images into seven classes of expressions: anger, disgust, fear, happiness, sadness, neutral and surprise. For the feature extraction task, three discrete wavelet transforms were used to decompose images, namely the Haar wavelet, the Daubechies (4) wavelet and the Coiflet (1) wavelet. To analyze the proposed method, a facial expression recognition system was built. The proposed method was tested on static images from the JAFFE database.
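
    The feature-extraction step can be sketched in Python with PyWavelets; the use of the coarsest approximation subband as the feature vector is an assumption for illustration ('haar', 'db4' and 'coif1' are the PyWavelets names for the three wavelets above):

        # Wavelet features for a BPNN-style classifier (illustrative).
        import numpy as np
        import pywt
        from sklearn.neural_network import MLPClassifier

        def wavelet_features(gray_image, wavelet="haar", level=2):
            coeffs = pywt.wavedec2(gray_image, wavelet=wavelet, level=level)
            return coeffs[0].ravel()  # coarsest approximation subband

        # Hypothetical usage on arrays of grayscale face images:
        # X = np.array([wavelet_features(img) for img in images])
        # clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000)
        # clf.fit(X, expression_labels)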

  15. Facial Recognition in Uncontrolled Conditions for Information Security

    Directory of Open Access Journals (Sweden)

    Qinghan Xiao

    2010-01-01

    With the increasing use of computers nowadays, information security is becoming an important issue for private companies and government organizations. Various security technologies have been developed, such as authentication, authorization, and auditing. However, once a user logs on, it is assumed that the system would be controlled by the same person. To address this flaw, we developed a demonstration system that uses facial recognition technology to periodically verify the identity of the user. If the authenticated user's face disappears, the system automatically performs a log-off or screen-lock operation. This paper presents our further efforts in developing image preprocessing algorithms and dealing with angled facial images. The objective is to improve the accuracy of facial recognition under uncontrolled conditions. To compare the results with others, the frontal pose subset of the Face Recognition Technology (FERET) database was used for the test. The experiments showed that the proposed algorithms provided promising results.

  16. Traditional facial tattoos disrupt face recognition processes.

    Science.gov (United States)

    Buttle, Heather; East, Julie

    2010-01-01

    Factors that are important to successful face recognition, such as features, configuration, and pigmentation/reflectance, are all subject to change when a face has been engraved with ink markings. Here we show that the application of facial tattoos, in the form of spiral patterns (typically associated with the Maori tradition of a Moko), disrupts face recognition to a similar extent as face inversion, with recognition accuracy little better than chance performance (2AFC). These results indicate that facial tattoos can severely disrupt our ability to recognise a face that previously did not have the pattern.

  17. Facial expression recognition in perceptual color space.

    Science.gov (United States)

    Lajevardi, Seyed Mehdi; Wu, Hong Ren

    2012-08-01

    This paper introduces a tensor perceptual color framework (TPCF) for facial expression recognition (FER), which is based on information contained in color facial images. The TPCF enables multi-linear image analysis in different color spaces and demonstrates that color components provide additional information for robust FER. Using this framework, the components (in either RGB, YCbCr, CIELab or CIELuv space) of color images are unfolded to two-dimensional (2-D) tensors based on multi-linear algebra and tensor concepts, from which the features are extracted by Log-Gabor filters. The mutual information quotient (MIQ) method is employed for feature selection. These features are classified using a multi-class linear discriminant analysis (LDA) classifier. The effectiveness of color information on FER using low-resolution facial expression images with illumination variations is assessed for performance evaluation. Experimental results demonstrate that color information has significant potential to improve emotion recognition performance due to the complementary characteristics of image textures. Furthermore, the perceptual color spaces (CIELab and CIELuv) are better overall than other color spaces for facial expression recognition, providing more efficient and robust performance for facial images with illumination variation.
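
    The first stage of such a pipeline, moving facial images into a perceptual color space and splitting them into per-component 2-D tensors, can be sketched in Python (the later Log-Gabor, MIQ and LDA stages are not shown):

        # Convert an RGB face image to CIELab and unfold it into one
        # 2-D tensor per color component (illustrative sketch).
        from skimage.color import rgb2lab

        def cielab_component_tensors(rgb_image):
            """rgb_image: (height, width, 3) floats in [0, 1]."""
            lab = rgb2lab(rgb_image)
            return lab[:, :, 0], lab[:, :, 1], lab[:, :, 2]  # L*, a*, b*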

  18. Mutual information-based facial expression recognition

    Science.gov (United States)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative region representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in a new approach which supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using the Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region whilst reducing the feature vector dimension.
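
    A minimal Python sketch of the descriptor step; the gradient operator and LBP parameters are illustrative assumptions, and the MI-based region selection is omitted:

        # LBP histogram computed on a gradient image (illustrative).
        import numpy as np
        from scipy import ndimage
        from skimage.feature import local_binary_pattern

        def lbp_on_gradient(gray_image, n_points=8, radius=1):
            grad = ndimage.gaussian_gradient_magnitude(gray_image, sigma=1.0)
            lbp = local_binary_pattern(grad, n_points, radius, method="uniform")
            hist, _ = np.histogram(lbp, bins=n_points + 2,
                                   range=(0, n_points + 2), density=True)
            return hist  # histogram of uniform patterns as the descriptor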

  1. Portable Facial Recognition Jukebox Using Fisherfaces (FRJ)

    Directory of Open Access Journals (Sweden)

    Richard Mo

    2016-03-01

    A portable real-time facial recognition system that is able to play personalized music based on the identified person's preferences was developed. The system is called the Portable Facial Recognition Jukebox Using Fisherfaces (FRJ). A Raspberry Pi was used as the hardware platform for its relatively low cost and ease of use. The system uses the OpenCV open source library to implement the Fisherfaces facial recognition algorithm, and uses the Simple DirectMedia Layer (SDL) library for playing the sound files. FRJ is cross-platform and can run on both Windows and Linux operating systems. The source code was written in C++. The accuracy of the recognition program can reach up to 90% under controlled lighting and distance conditions. The user is able to train up to 6 different people (as many as will fit in the GUI). When implemented on a Raspberry Pi, the system is able to go from image capture to facial recognition in an average time of 200 ms.
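
    The paper's implementation is in C++, but the same OpenCV Fisherfaces workflow can be sketched in Python with the contrib face module; the file names and labels below are hypothetical:

        # Fisherfaces with OpenCV (requires opencv-contrib-python).
        import cv2
        import numpy as np

        # Grayscale training images, all the same size; integer person labels.
        # Fisherfaces needs at least two classes to train.
        paths = ["alice_1.png", "alice_2.png", "bob_1.png", "bob_2.png"]
        faces = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths]
        labels = np.array([0, 0, 1, 1])

        recognizer = cv2.face.FisherFaceRecognizer_create()
        recognizer.train(faces, labels)

        probe = cv2.imread("probe.png", cv2.IMREAD_GRAYSCALE)
        label, distance = recognizer.predict(probe)  # lower distance = closer
        print(label, distance)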

  2. Facial emotion recognition in remitted depressed women.

    Science.gov (United States)

    Biyik, Utku; Keskin, Duygu; Oguz, Kaya; Akdeniz, Fisun; Gonul, Ali Saffet

    2015-10-01

    Although major depressive disorder (MDD) is primarily characterized by mood symptoms, depressed patients show impairments in facial emotion recognition across many of the basic emotions (anger, fear, happiness, surprise, disgust and sadness). On the other hand, the data on remitted MDD (rMDD) patients are inconsistent, and it is not clear whether those impairments persist in remission. To extend the current findings, we administered a facial emotion recognition test to a group of remitted depressed women and compared their results to those of controls. Analysis of variance showed a significant emotion by group interaction, and in the post hoc analyses, rMDD patients had a higher accuracy rate for the recognition of sadness compared to controls. There were no differences in reaction time between patients and controls across all the basic emotions. The higher recognition rates for sad faces in rMDD patients might contribute to impairments in social communication and to the prognosis of the disease.

  3. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson's Disease

    National Research Council Canada - National Science Library

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

    .... The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral...

  4. Face Recognition Based on Facial Features

    Directory of Open Access Journals (Sweden)

    Muhammad Sharif

    2012-08-01

    Over the last decade, several different methods have been proposed and developed for face recognition, one of the most challenging areas of image processing. Face recognition has various applications in security systems and crime investigation systems. The study comprises three phases: face detection, facial feature extraction and face recognition. The first phase is the face detection process, where the region of interest, i.e., the feature region, is extracted. The second phase is feature extraction, where facial features, i.e., eyes, nose and lips, are extracted from the detected face area. The last module is the face recognition phase, which uses the extracted left eye for recognition by combining Eigenfeatures and Fisherfeatures.

  5. Computer Recognition of Facial Profiles

    Science.gov (United States)

    1974-08-01


  6. Facial Recognition Technology: An analysis with scope in India

    CERN Document Server

    Thorat, S B; Dandale, Jyoti P

    2010-01-01

    A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is by comparing selected facial features from the image with a facial database. It is typically used in security systems and can be compared to other biometrics such as fingerprint or eye iris recognition systems. In this paper we focus on 3-D facial recognition systems and biometric facial recognition systems. We critique facial recognition systems, discussing their effectiveness and weaknesses. The paper also discusses the scope of such recognition systems in India.

  7. Face recognition using facial expression: a novel approach

    Science.gov (United States)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

    Facial expressions are undoubtedly the most effective nonverbal communication. The face has always been the equation of a person's identity. The face draws the demarcation line between identity and extinction. Each line on the face adds an attribute to the identity. These lines become prominent when we experience an emotion and these lines do not change completely with age. In this paper we have proposed a new technique for face recognition which focuses on the facial expressions of the subject to identify his face. This is a grey area on which not much light has been thrown earlier. According to earlier researches it is difficult to alter the natural expression. So our technique will be beneficial for identifying occluded or intentionally disguised faces. The test results of the experiments conducted prove that this technique will give a new direction in the field of face recognition. This technique will provide a strong base to the area of face recognition and will be used as the core method for critical defense security related issues.

  8. Concurrent development of facial identity and expression discrimination

    OpenAIRE

    Dalrymple, Kirsten A.; Visconti di Oleggio Castello, Matteo; Elison, Jed T.; Gobbini, M. Ida

    2017-01-01

    Facial identity and facial expression processing both appear to follow a protracted developmental trajectory, yet these trajectories have been studied independently and have not been directly compared. Here we investigated whether these processes develop at the same or different rates using matched identity and expression discrimination tasks. The Identity task begins with a target face that is a morph between two identities (Identity A/Identity B). After a brief delay, the target face is rep...

  9. Infrared facial recognition technology being pushed toward emerging applications

    Science.gov (United States)

    Evans, David C.

    1997-02-01

    Human identification is a two-step process of initial identity assignment and later verification or recognition. The positive identification requirement is a major part of the classic security, legal, banking, and police task of granting or denying access to a facility, authority to take an action or, in police work, to identify or verify the identity of an individual. To meet this requirement, a three-part research and development (R&D) effort was undertaken by Betac International Corporation, through its subsidiaries Betac Corporation and Technology Recognition Systems, to develop an automated access control system using infrared (IR) facial images to verify the identity of an individual in real time. The system integrates IR facial imaging and a computer-based matching algorithm to perform the human recognition task rapidly, accurately, and nonintrusively, based on three basic principles: every human IR facial image (or thermogram) is unique to that individual; an IR camera can be used to capture human thermograms; and captured thermograms can be digitized, stored, and matched using a computer and mathematical algorithms. The first part of the development effort, an operator-assisted IR image matching proof-of-concept demonstration, was successfully completed in the spring of 1994. The second part of the R&D program, the design and evaluation of a prototype automated access control unit using the IR image matching technology, was completed in April 1995. This paper describes the final development effort to identify, assess, and evaluate the availability and suitability of robust image matching algorithms capable of supporting and enhancing the use of IR facial recognition technology. The most promising mature and available image matching algorithm was integrated into a demonstration access control unit (ACU) using a state-of-the-art IR imager and a performance evaluation was compared against that of a prototype automated ACU using a less robust algorithm and a

  10. Mobile-Customer Identity Recognition

    Institute of Scientific and Technical Information of China (English)

    LI Zhan; XU Ji-sheng; XU Min; SUN Hong

    2005-01-01

    By utilizing artificial intelligence and pattern recognition techniques, we propose an integrated mobile-customer identity recognition approach in this paper, based on customers' behavior characteristics extracted from the customer information database. To verify the effectiveness of this approach, a test was run on a dataset of 1 000 customers over 3 consecutive months. The result was compared with the real fourth-month dataset of 162 customers designated as the customers to be recognized. The high correct rate of the test (96.30%), together with a judge-by-mistake rate of 1.87% and a leaving-out rate of 7.82%, demonstrates the effectiveness of this approach.

  11. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    Science.gov (United States)

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  12. Identity information content depends on the type of facial movement

    Science.gov (United States)

    Dobs, Katharina; Bülthoff, Isabelle; Schultz, Johannes

    2016-09-01

    Facial movements convey information about many social cues, including identity. However, how much information about a person’s identity is conveyed by different kinds of facial movements is unknown. We addressed this question using a recent motion capture and animation system, with which we animated one avatar head with facial movements of three types: (1) emotional, (2) emotional in social interaction and (3) conversational, all recorded from several actors. In a delayed match-to-sample task, observers were best at matching actor identity across conversational movements, worse with emotional movements in social interactions, and at chance level with emotional facial expressions. Model observers performing this task showed similar performance profiles, indicating that performance variation was due to differences in information content, rather than processing. Our results suggest that conversational facial movements transmit more dynamic identity information than emotional facial expressions, thus suggesting different functional roles and processing mechanisms for different types of facial motion.

  13. Unified Model in Identity Subspace for Face Recognition

    Institute of Scientific and Technical Information of China (English)

    Pin Liao; Li Shen; Yi-Qiang Chen; Shu-Chang Liu

    2004-01-01

    Human faces have two important characteristics: (1) They are similar objects and the specific variations of each face are similar to each other; (2) They are nearly bilateral symmetric. Exploiting the two important properties, we build a unified model in identity subspace (UMIS) as a novel technique for face recognition from only one example image per person. An identity subspace spanned by bilateral symmetric bases, which compactly encodes identity information, is presented. The unified model, trained on an obtained training set with multiple samples per class from a known people group A, can be generalized well to facial images of unknown individuals, and can be used to recognize facial images from an unknown people group B with only one sample per subject. Extensive experimental results on two public databases (the Yale database and the Bern database) and our own database (the ICT-JDL database) demonstrate that the UMIS approach is significantly effective and robust for face recognition.
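
    A minimal sketch of the bilateral-symmetry decomposition that underlies an identity subspace spanned by symmetric bases, assuming a roughly frontal, horizontally centered face; the UMIS training procedure itself is not detailed in the abstract, so only this building block is illustrated (Python/NumPy, all data synthetic).

```python
import numpy as np

def symmetry_decompose(face: np.ndarray):
    """Split a (roughly frontal, horizontally centered) face image into its
    bilaterally symmetric and antisymmetric components: the even and odd
    parts with respect to reflection about the vertical midline."""
    mirror = face[:, ::-1]                  # left-right reflection
    symmetric = (face + mirror) / 2.0
    antisymmetric = (face - mirror) / 2.0
    return symmetric, antisymmetric

# Toy usage: a random "face". A real identity subspace would be learned
# (e.g., by PCA) from the symmetric components of many training faces.
face = np.random.rand(64, 64)
sym, antisym = symmetry_decompose(face)
assert np.allclose(sym + antisym, face)     # decomposition is exact
```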

  14. The activation of visual memory for facial identity is task-dependent: evidence from human electrophysiology.

    Science.gov (United States)

    Zimmermann, Friederike G S; Eimer, Martin

    2014-05-01

    The question whether the recognition of individual faces is mandatory or task-dependent is still controversial. We employed the N250r component of the event-related potential as a marker of the activation of representations of facial identity in visual memory, in order to find out whether identity-related information from faces is encoded and maintained even when facial identity is task-irrelevant. Pairs of faces appeared in rapid succession, and the N250r was measured in response to repetitions of the same individual face, as compared to presentations of two different faces. In Experiment 1, an N250r was present in an identity matching task where identity information was relevant, but not when participants had to detect infrequent targets (inverted faces), and facial identity was task-irrelevant. This was the case not only for unfamiliar faces, but also for famous faces, suggesting that even famous face recognition is not as automatic as is often assumed. In Experiment 2, an N250r was triggered by repetitions of non-famous faces in a task where participants had to match the view of each face pair, and facial identity had to be ignored. This shows that when facial features have to be maintained in visual memory for a subsequent comparison, identity-related information is retained as well, even when it is irrelevant. Our results suggest that individual face recognition is neither fully mandatory nor completely task-dependent. Facial identity is encoded and maintained in tasks that involve visual memory for individual faces, regardless of the to-be-remembered feature. In tasks without this memory component, irrelevant visual identity information can be completely ignored.

  15. The relationships between processing facial identity, emotional expression, facial speech, and gaze direction during development.

    Science.gov (United States)

    Spangler, Sibylle M; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding facial speech, emotional expression, and gaze direction or, alternatively, according to facial speech, emotional expression, and gaze direction while disregarding facial identity. Reaction times showed that children and adults were able to direct their attention selectively to facial identity despite variations of other kinds of face information, but when sorting according to facial speech and emotional expression, they were unable to ignore facial identity. In contrast, gaze direction could be processed independently of facial identity in all age groups. Apart from shorter reaction times and fewer classification errors, no substantial change in processing facial information was found to be correlated with age. We conclude that adult-like face processing routes are employed from 5 years of age onward.

  16. Meta-Analysis of the First Facial Expression Recognition Challenge

    NARCIS (Netherlands)

    Valstar, M.F.; Mehu, M.; Jiang, Bihan; Pantic, Maja; Scherer, K.

    2012-01-01

    Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability

  17. Violent Media Consumption and the Recognition of Dynamic Facial Expressions

    Science.gov (United States)

    Kirsh, Steven J.; Mounts, Jeffrey R. W.; Olczak, Paul V.

    2006-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph.…

  20. Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Hameed Siddiqi

    2013-12-01

    Full Text Available Over the last decade, human facial expression recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; the need for automatic and accurate face detection before feature extraction; and high similarity among different expressions that makes it difficult to distinguish these expressions with high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expression recognition (HL-FER) system to tackle these problems. Unlike previous systems, the HL-FER uses a pre-processing step to eliminate light effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a hierarchical recognition scheme to overcome the problem of high similarity among different expressions. Unlike most previous works that were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; n-fold cross validation across datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. A weighted average recognition accuracy of 98.7% across the three datasets, using three classifiers, indicates the success of employing the HL-FER for human FER.
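
    The subject-based n-fold cross-validation protocol described above can be sketched with scikit-learn's GroupKFold, which keeps all samples of a subject in a single fold; the features, labels, and the plain LDA classifier below are illustrative stand-ins, not the HL-FER pipeline itself.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GroupKFold

# Toy stand-ins: 200 samples, 50-dim features, 6 expression classes, 20 subjects.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 6, size=200)
subjects = rng.integers(0, 20, size=200)

# Subject-based n-fold cross validation: folds are split by subject, so the
# same person never appears in both the training and the test partition.
accuracies = []
for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups=subjects):
    clf = LinearDiscriminantAnalysis().fit(X[train_idx], y[train_idx])
    accuracies.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
print(f"mean subject-independent accuracy: {np.mean(accuracies):.3f}")
```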

  1. Facial Affect Recognition and Social Anxiety in Preschool Children

    Science.gov (United States)

    Ale, Chelsea M.; Chorney, Daniel B.; Brice, Chad S.; Morris, Tracy L.

    2010-01-01

    Research relating anxiety and facial affect recognition has focused mostly on school-aged children and adults and has yielded mixed results. The current study sought to demonstrate an association among behavioural inhibition and parent-reported social anxiety, shyness, social withdrawal and facial affect recognition performance in 30 children,…

  2. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2-D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the expressions can be performed using a state-of-the-

  4. Influences on Facial Emotion Recognition in Deaf Children

    Science.gov (United States)

    Sidera, Francesc; Amadó, Anna; Martínez, Laura

    2017-01-01

    This exploratory research is aimed at studying facial emotion recognition abilities in deaf children and how they relate to linguistic skills and the characteristics of deafness. A total of 166 participants (75 deaf) aged 3-8 years were administered the following tasks: facial emotion recognition, naming vocabulary and cognitive ability. The…

  6. Application of data fusion in computer facial recognition

    Directory of Open Access Journals (Sweden)

    Wang Ai Qiang

    2013-11-01

    Full Text Available A single recognition method yields a low recognition rate in computer facial recognition. We propose a new fused facial recognition method using data fusion technology, in which a variety of recognition algorithms are combined to form a fusion-based face recognition system that improves the recognition rate in many ways. Data fusion is considered at three levels: data-level fusion, feature-level fusion, and decision-level fusion. The data layer uses a simple weighted average algorithm, which is easy to implement. An artificial neural network algorithm was selected for the feature layer and a fuzzy reasoning algorithm was used in the decision layer. Finally, we compared the method with the BP neural network algorithm on the MATLAB experimental platform. The result shows that the recognition rate is greatly improved after adopting data fusion technology in computer facial recognition.
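
    As a rough illustration of the decision-level stage described above, the sketch below fuses per-classifier score matrices by a weighted average; the weights, scores, and number of classifiers are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def decision_level_fusion(score_matrices, weights):
    """Weighted-average fusion of per-classifier score matrices, each of
    shape (n_samples, n_classes); returns the fused class decisions."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize weights to sum to 1
    fused = sum(wi * s for wi, s in zip(w, score_matrices))
    return fused.argmax(axis=1)

# Toy usage: three recognizers scoring 4 probe faces against 5 identities.
rng = np.random.default_rng(1)
scores = [rng.random((4, 5)) for _ in range(3)]
print(decision_level_fusion(scores, weights=[0.5, 0.3, 0.2]))
```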

  7. Facial expression recognition based on improved deep belief networks

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method of facial expression recognition based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. This method uses LBP to extract features and then uses the improved deep belief networks as the detector and classifier on the LBP features. The combination of LBP and improved deep belief networks is thus realized in facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate improved significantly.
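
    A hedged sketch of the LBP feature-extraction step using scikit-image; since a deep belief network is not available in scikit-learn, a small MLP stands in for the improved DBN, and the random images, labels, and parameters are placeholders.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier

def lbp_histogram(gray, P=8, R=1):
    """Uniform LBP codes pooled into a normalized histogram (P + 2 bins)."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Toy data: random "face" patches with fake expression labels. A real
# pipeline would crop detected faces and could stack an improved DBN
# where this small MLP stands in.
rng = np.random.default_rng(2)
X = np.array([lbp_histogram(rng.random((48, 48))) for _ in range(60)])
y = rng.integers(0, 7, size=60)   # e.g., 7 expression classes as in JAFFE
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```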

  8. Slowing down facial movements and vocal sounds enhances facial expression recognition and facial-vocal imitation in children with autism

    OpenAIRE

    Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-Rom, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a st...

  9. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    Science.gov (United States)

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently.

  10. Facial Expression Recognition in Nonvisual Imagery

    Science.gov (United States)

    Olague, Gustavo; Hammoud, Riad; Trujillo, Leonardo; Hernández, Benjamín; Romero, Eva

    This chapter presents two novel approaches that allow computer vision applications to perform human facial expression recognition (FER). From a problem standpoint, we focus on FER beyond the human visual spectrum, in long-wave infrared imagery, thus allowing us to offer illumination-independent solutions to this important human-computer interaction problem. From a methodological standpoint, we introduce two different feature extraction techniques: a principal component analysis-based approach with automatic feature selection and one based on texture information selected by an evolutionary algorithm. In the former, facial features are selected based on interest point clusters, and classification is carried out using eigenfeature information; in the latter, an evolutionary-based learning algorithm searches for optimal regions of interest and texture features based on classification accuracy. Both of these approaches use a support vector machine-committee for classification. Results show effective performance for both techniques, from which we can conclude that thermal imagery contains worthwhile information for the FER problem beyond the human visual spectrum.

  11. FACIAL LANDMARKING LOCALIZATION FOR EMOTION RECOGNITION USING BAYESIAN SHAPE MODELS

    Directory of Open Access Journals (Sweden)

    Hernan F. Garcia

    2013-02-01

    Full Text Available This work presents a framework for emotion recognition based on facial expression analysis, using Bayesian Shape Models (BSM) for facial landmark localization. The Facial Action Coding System (FACS)-compliant facial feature tracking is based on a Bayesian Shape Model, which estimates the model parameters with an implementation of the EM algorithm. We describe the characterization methodology derived from the parametric model and evaluate the accuracy of feature detection and of the estimation of the parameters associated with facial expressions, analyzing robustness to pose and local variations. A methodology for emotion characterization is then introduced to perform the recognition. The experimental results show that the proposed model can effectively detect the different facial expressions, outperforming conventional approaches for emotion recognition and obtaining high performance in estimating the emotion present in a given subject. The model and the characterization methodology detected the emotion type correctly in 95.6% of the cases.

  12. Facial Expression Recognition Using Stationary Wavelet Transform Features

    Directory of Open Access Journals (Sweden)

    Huma Qayyum

    2017-01-01

    Full Text Available Humans use facial expressions to convey personal feelings. Facial expressions need to be automatically recognized to design control and interactive applications. Feature extraction in an accurate manner is one of the key steps in automatic facial expression recognition system. Current frequency domain facial expression recognition systems have not fully utilized the facial elements and muscle movements for recognition. In this paper, stationary wavelet transform is used to extract features for facial expression recognition due to its good localization characteristics, in both spectral and spatial domains. More specifically a combination of horizontal and vertical subbands of stationary wavelet transform is used as these subbands contain muscle movement information for majority of the facial expressions. Feature dimensionality is further reduced by applying discrete cosine transform on these subbands. The selected features are then passed into feed forward neural network that is trained through back propagation algorithm. An average recognition rate of 98.83% and 96.61% is achieved for JAFFE and CK+ dataset, respectively. An accuracy of 94.28% is achieved for MS-Kinect dataset that is locally recorded. It has been observed that the proposed technique is very promising for facial expression recognition when compared to other state-of-the-art techniques.
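
    A minimal sketch of the described feature pipeline, assuming a 1-level stationary wavelet transform: the horizontal and vertical detail subbands are computed with PyWavelets and each is reduced by keeping the low-frequency corner of its 2-D DCT; the wavelet, level, and number of retained coefficients are illustrative choices, not the paper's settings.

```python
import numpy as np
import pywt
from scipy.fft import dctn

def swt_dct_features(gray, keep=8):
    """Horizontal and vertical detail subbands of a 1-level stationary
    wavelet transform, each reduced by keeping the low-frequency
    (top-left) corner of its 2-D DCT."""
    (cA, (cH, cV, cD)), = pywt.swt2(gray, "haar", level=1)
    features = []
    for band in (cH, cV):             # subbands carrying muscle-movement cues
        coeffs = dctn(band, norm="ortho")
        features.append(coeffs[:keep, :keep].ravel())
    return np.concatenate(features)

# Toy usage on a random 64x64 "face" (side length must be divisible by 2).
print(swt_dct_features(np.random.rand(64, 64)).shape)   # (128,)
```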

  13. Facial Emotion Recognition in Bipolar Disorder and Healthy Aging.

    Science.gov (United States)

    Altamura, Mario; Padalino, Flavia A; Stella, Eleonora; Balzotti, Angela; Bellomo, Antonello; Palumbo, Rocco; Di Domenico, Alberto; Mammarella, Nicola; Fairfield, Beth

    2016-03-01

    Emotional face recognition is impaired in bipolar disorder, but it is not clear whether this is specific for the illness. Here, we investigated how aging and bipolar disorder influence dynamic emotional face recognition. Twenty older adults, 16 bipolar patients, and 20 control subjects performed a dynamic affective facial recognition task and a subsequent rating task. Participants pressed a key as soon as they were able to discriminate whether the neutral face was assuming a happy or angry facial expression and then rated the intensity of each facial expression. Results showed that older adults recognized happy expressions faster, whereas bipolar patients recognized angry expressions faster. Furthermore, both groups rated emotional faces more intensely than did the control subjects. This study is one of the first to compare how aging and clinical conditions influence emotional facial recognition and underlines the need to consider the role of specific and common factors in emotional face recognition.

  14. Meta-Analysis of the First Facial Expression Recognition Challenge.

    Science.gov (United States)

    Valstar, M F; Mehu, M; Bihan Jiang; Pantic, M; Scherer, K

    2012-08-01

    Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability have received some attention; for instance, there exist a number of commonly used facial expression databases. However, lack of a commonly accepted evaluation protocol and, typically, lack of sufficient details needed to reproduce the reported individual results make it difficult to compare systems. This, in turn, hinders the progress of the field. A periodical challenge in facial expression recognition would allow such a comparison on a level playing field. It would provide an insight on how far the field has come and would allow researchers to identify new goals, challenges, and targets. This paper presents a meta-analysis of the first such challenge in automatic recognition of facial expressions, held during the IEEE conference on Face and Gesture Recognition 2011. It details the challenge data, evaluation protocol, and the results attained in two subchallenges: AU detection and classification of facial expression imagery in terms of a number of discrete emotion categories. We also summarize the lessons learned and reflect on the future of the field of facial expression recognition in general and on possible future challenges in particular.

  15. The review and results of different methods for facial recognition

    Science.gov (United States)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide range of potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement because it can be operated without the cooperation of the people under detection. Hence, facial recognition can be applied to defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) A novel two-stage facial landmark localization method is proposed which has a more accurate facial localization effect on a specific database; (2) A statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) A general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) Three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance under various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  16. Facial expressions recognition with an emotion expressive robotic head

    Science.gov (United States)

    Doroftei, I.; Adascalitei, F.; Lefeber, D.; Vanderborght, B.; Doroftei, I. A.

    2016-08-01

    The purpose of this study is to present the preliminary steps in facial expression recognition with a new version of an expressive social robotic head. In a first phase, our main goal was to reach a minimum level of emotional expressiveness in order to obtain nonverbal communication between the robot and humans by building six basic facial expressions. To evaluate the facial expressions, the robot was used in preliminary user studies among children and adults.

  17. Facial emotional recognition in schizophrenia: preliminary results of the virtual reality program for facial emotional recognition

    Directory of Open Access Journals (Sweden)

    Teresa Souto

    2013-01-01

    Full Text Available BACKGROUND: Significant deficits in emotional recognition and social perception characterize patients with schizophrenia and have a direct negative impact both on interpersonal relationships and on social functioning. Virtual reality, as a methodological resource, may have high potential for assessing and training skills in people suffering from mental illness. OBJECTIVES: To present preliminary results of a facial emotional recognition assessment designed for patients with schizophrenia, using 3D avatars and virtual reality. METHODS: Presentation of 3D avatars which reproduce images developed with the FaceGen® software and integrated in a three-dimensional virtual environment. Each avatar was presented to a group of 12 patients with schizophrenia and a reference group of 12 subjects without psychiatric pathology. RESULTS: The results show that the facial emotions of happiness and anger are better recognized by both groups and that the major difficulties arise in the recognition of fear and disgust. Frontal alpha electroencephalography variations were found during the presentation of anger and disgust stimuli among patients with schizophrenia. DISCUSSION: The developed evaluation module can be of value both for patient and therapist, allowing task execution in a non-anxiogenic environment that nevertheless resembles the actual experience.

  18. Development of Emotional Facial Recognition in Late Childhood and Adolescence

    Science.gov (United States)

    Thomas, Laura A.; De Bellis, Michael D.; Graham, Reiko; Labar, Kevin S.

    2007-01-01

    The ability to interpret emotions in facial expressions is crucial for social functioning across the lifespan. Facial expression recognition develops rapidly during infancy and improves with age during the preschool years. However, the developmental trajectory from late childhood to adulthood is less clear. We tested older children, adolescents…

  20. Automatic Recognition of Facial Actions in Spontaneous Expressions

    Directory of Open Access Journals (Sweden)

    Marian Stewart Bartlett

    2006-09-01

    Full Text Available Spontaneous facial expressions differ from posed expressions both in which muscles are moved and in the dynamics of the movement. Advances in the field of automatic facial expression measurement will require development and assessment on spontaneous behavior. Here we present preliminary results on a task of facial action detection in spontaneous facial expressions. We employ a user-independent, fully automatic system for real-time recognition of facial actions from the Facial Action Coding System (FACS). The system automatically detects frontal faces in the video stream and codes each frame with respect to 20 action units. The approach applies machine learning methods, such as support vector machines and AdaBoost, to texture-based image representations. The output margin for the learned classifiers predicts action unit intensity. Frame-by-frame intensity measurements will enable investigations into facial expression dynamics which were previously intractable by human coding.

  1. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
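
    A basic version of the face-locating step described above can be sketched with OpenCV's bundled Haar cascade on a live video stream; the cascade file and parameters below are common defaults, not necessarily those used in the program described.

```python
import cv2

# OpenCV's bundled Haar cascade for frontal faces (ships with opencv-python).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                       # default webcam
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):       # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```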

  2. Face recognition using improved-LDA with facial combined feature

    Institute of Scientific and Technical Information of China (English)

    Dake Zhou; Xin Yang; Ningsong Peng

    2005-01-01

    Face recognition under various conditions is a challenging task. This paper presents a combined-feature improved Fisher classifier method for face recognition. Both holistic and local facial information are used for face representation. In addition, improved linear discriminant analysis (I-LDA) is employed for good generalization capability. Experiments show that the method is not only robust to moderate changes of illumination, pose and facial expression but also superior to traditional methods, such as eigenfaces and Fisherfaces.
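
    A hedged sketch of combining holistic and local facial information before classification; the crop coordinates standing in for eye and mouth regions are assumptions, and scikit-learn's standard LDA is used in place of the paper's improved variant (I-LDA), whose modification is not specified in the abstract.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def combined_feature(face):
    """Concatenate a downsampled holistic view with local crops;
    the crop coordinates are illustrative stand-ins for eye/mouth regions."""
    holistic = face[::4, ::4].ravel()        # coarse whole-face code
    eyes = face[8:24, :].ravel()             # assumed eye band
    mouth = face[40:56, 16:48].ravel()       # assumed mouth patch
    return np.concatenate([holistic, eyes, mouth])

rng = np.random.default_rng(3)
faces = rng.random((80, 64, 64))             # toy gallery of 64x64 faces
labels = rng.integers(0, 10, size=80)        # 10 identities
X = np.array([combined_feature(f) for f in faces])
clf = LinearDiscriminantAnalysis().fit(X, labels)   # plain LDA, not I-LDA
print("training accuracy:", clf.score(X, labels))
```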

  3. Facial emotion recognition impairments in individuals with HIV.

    Science.gov (United States)

    Clark, Uraina S; Cohen, Ronald A; Westbrook, Michelle L; Devlin, Kathryn N; Tashima, Karen T

    2010-11-01

    Characterized by frontostriatal dysfunction, human immunodeficiency virus (HIV) is associated with cognitive and psychiatric abnormalities. Several studies have noted impaired facial emotion recognition abilities in patient populations that demonstrate frontostriatal dysfunction; however, facial emotion recognition abilities have not been systematically examined in HIV patients. The current study investigated facial emotion recognition in 50 nondemented HIV-seropositive adults and 50 control participants relative to their performance on a nonemotional landscape categorization control task. We examined the relation of HIV-disease factors (nadir and current CD4 levels) to emotion recognition abilities and assessed the psychosocial impact of emotion recognition abnormalities. Compared to control participants, HIV patients performed normally on the control task but demonstrated significant impairments in facial emotion recognition, specifically for fear. HIV patients reported greater psychosocial impairments, which correlated with increased emotion recognition difficulties. Lower current CD4 counts were associated with poorer anger recognition. In summary, our results indicate that chronic HIV infection may contribute to emotion processing problems among HIV patients. We suggest that disruptions of frontostriatal structures and their connections with cortico-limbic networks may contribute to emotion recognition abnormalities in HIV. Our findings also highlight the significant psychosocial impact that emotion recognition abnormalities have on individuals with HIV.

  5. Facial Recognition using OpenCV

    Directory of Open Access Journals (Sweden)

    Shervin Emami

    2012-03-01

    Full Text Available Interest in computer vision has grown over the past decade. Fueled by the steady doubling of computing power every 13 months, face detection and recognition have transcended from an esoteric to a popular area of research in computer vision and one of the more successful applications of image analysis and algorithm-based understanding. Because of the intrinsic nature of the problem, computer vision is not only a computer science area of research, but also the object of neuro-scientific and psychological studies, mainly because of the general opinion that advances in computer image processing and understanding research will provide insights into how our brain works and vice versa. Because of general curiosity and interest in the matter, the author has proposed to create an application that would allow user access to a particular machine based on an in-depth analysis of a person's facial features. This application will be developed using Intel's open source computer vision project, OpenCV, and Microsoft's .NET framework.

  6. Visual adaptation provides objective electrophysiological evidence of facial identity discrimination.

    Science.gov (United States)

    Retter, Talia L; Rossion, Bruno

    2016-07-01

    Discrimination of facial identities is a fundamental function of the human brain that is challenging to examine with macroscopic measurements of neural activity, such as those obtained with functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). Although visual adaptation or repetition suppression (RS) stimulation paradigms have been successfully implemented to this end with such recording techniques, objective evidence of an identity-specific discrimination response due to adaptation at the level of the visual representation is lacking. Here, we addressed this issue with fast periodic visual stimulation (FPVS) and EEG recording combined with a symmetry/asymmetry adaptation paradigm. Adaptation to one facial identity is induced through repeated presentation of that identity at a rate of 6 images per second (6 Hz) over 10 sec. Subsequently, this identity is presented in alternation with another facial identity (i.e., its anti-face, both faces being equidistant from an average face), producing an identity repetition rate of 3 Hz over a 20 sec testing sequence. A clear EEG response at 3 Hz is observed over the right occipito-temporal (ROT) cortex, indexing discrimination between the two facial identities in the absence of an explicit behavioral discrimination measure. This face identity discrimination occurs immediately after adaptation and disappears rapidly within 20 sec. Importantly, this 3 Hz response is not observed in a control condition without the single-identity 10 sec adaptation period. These results indicate that visual adaptation to a given facial identity produces an objective (i.e., at a pre-defined stimulation frequency) electrophysiological index of visual discrimination between that identity and another, and provides a unique behavior-free quantification of the effect of visual adaptation.
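
    The 3 Hz discrimination index described above is read off the EEG amplitude spectrum at the identity-alternation frequency, distinct from the 6 Hz base stimulation frequency. A sketch with a synthetic signal standing in for real EEG:

```python
import numpy as np

fs, duration = 512.0, 20.0             # sampling rate (Hz), sequence length (s)
t = np.arange(0, duration, 1 / fs)
# Synthetic "EEG": 6 Hz base response + smaller 3 Hz identity response + noise.
eeg = (2.0 * np.sin(2 * np.pi * 6 * t)
       + 0.5 * np.sin(2 * np.pi * 3 * t)
       + np.random.normal(scale=1.0, size=t.size))

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

def amplitude_at(f_hz):
    return spectrum[np.argmin(np.abs(freqs - f_hz))]

# 3 Hz amplitude indexes identity discrimination; 6 Hz the general response.
print(f"3 Hz: {amplitude_at(3.0):.3f}   6 Hz: {amplitude_at(6.0):.3f}")
```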

  7. Comparison of Emotion Recognition from Facial Expression and Music

    OpenAIRE

    Gašpar, Tina; Labor, Marina; Jurić, Iva; Dumančić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined as their presentation is usually very short (micro expressions), whereas the recognition itself does not have to be a conscious process. We assumed that the recognition from facial expressions is selected over the recognition of emotions communicated through music. In order to compare the success rate in recogni...

  9. A motivational determinant of facial emotion recognition: regulatory focus affects recognition of emotions in faces.

    Science.gov (United States)

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus because the attentional strategies associated with promotion focus enhance performance on well-learned or innate tasks - such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced and better facial emotion recognition was observed in a promotion focus compared to a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy in a promotion focus (i.e., striving to make hits). A prevention focus had an impact neither on perceptual processing nor on facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.

  10. Automatic Facial Expression Recognition Based on Hybrid Approach

    Directory of Open Access Journals (Sweden)

    Ali K. K. Bermani

    2012-12-01

    Full Text Available The topic of automatic recognition of facial expressions attracted many researchers in the late last century and has drawn increasing interest in the past few years. Several techniques have emerged to improve the efficiency of recognition by addressing problems in face detection and feature extraction for recognizing expressions. This paper proposes an automatic system for facial expression recognition that uses a hybrid approach in the feature extraction phase, combining holistic and analytic approaches by extracting 307 facial expression features (19 geometric features, 288 appearance features). Expression recognition is performed using a radial basis function (RBF) artificial neural network to recognize the six basic emotions (anger, fear, disgust, happiness, surprise, sadness) in addition to the neutral expression. The system achieved a recognition rate of 97.08% when applied to a person-dependent database and 93.98% when applied to a person-independent one.
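
    A sketch of the hybrid feature idea, assuming landmark coordinates are already available: a few geometric distances are concatenated with appearance features, and an RBF-kernel SVM stands in for the paper's RBF neural network; all data and the specific geometric cues are illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def geometric_features(pts):
    """A few illustrative geometric cues from (x, y) landmarks; assumed
    point order: left eye, right eye, nose tip, left/right mouth corner."""
    dist = lambda a, b: np.linalg.norm(pts[a] - pts[b])
    return np.array([
        dist(0, 1),                # inter-ocular distance
        dist(3, 4),                # mouth width
        dist(2, 3) + dist(2, 4),   # nose-to-mouth spread
    ])

rng = np.random.default_rng(4)
n = 120
landmarks = rng.random((n, 5, 2))            # toy landmark sets
appearance = rng.random((n, 288))            # stand-in appearance features
X = np.hstack([np.array([geometric_features(p) for p in landmarks]), appearance])
y = rng.integers(0, 7, size=n)               # six basic emotions + neutral
clf = SVC(kernel="rbf", gamma="scale").fit(X, y)   # RBF-kernel SVM stand-in
print("training accuracy:", clf.score(X, y))
```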

  11. [Developmental change in facial recognition by premature infants during infancy].

    Science.gov (United States)

    Konishi, Yukihiko; Kusaka, Takashi; Nishida, Tomoko; Isobe, Kenichi; Itoh, Susumu

    2014-09-01

    Premature infants are thought to be at increased risk for developmental disorders. We evaluated facial recognition by premature infants during early infancy, as this ability has been reported to be commonly impaired in developmentally disabled children. In premature infants and full-term infants at the age of 4 months (4 corrected months for premature infants), visual behaviors while performing facial recognition tasks were recorded and analyzed using an eye-tracking system (Tobii T60 manufactured by Tobii Technologics, Sweden). Both types of infants had a preference towards normal facial expressions; however, no preference towards the upper face was observed in premature infants. Our study suggests that facial recognition ability in premature infants may develop differently from that in full-term infants.

  12. Recognition of social identity in ants

    DEFF Research Database (Denmark)

    Bos, Nick; d'Ettorre, Patrizia

    2012-01-01

    Recognizing the identity of others, from the individual to the group level, is a hallmark of society. Ants, and other social insects, have evolved advanced societies characterized by efficient social recognition systems. Colony identity is mediated by colony specific signature mixtures, a blend...

  13. [Measuring impairment of facial affects recognition in schizophrenia. Preliminary study of the facial emotions recognition task (TREF)].

    Science.gov (United States)

    Gaudelus, B; Virgile, J; Peyroux, E; Leleu, A; Baudouin, J-Y; Franck, N

    2015-06-01

    The impairment of social cognition, including facial affects recognition, is a well-established trait in schizophrenia, and specific cognitive remediation programs focusing on facial affects recognition have been developed by different teams worldwide. However, even though social cognitive impairments have been confirmed, previous studies have also shown heterogeneity of results between different subjects. Therefore, personal abilities should be assessed individually before proposing such programs. Most research teams apply tasks based on facial affects recognition by Ekman et al. or Gur et al. However, these tasks are not easily applicable in routine clinical practice. Here, we present the Facial Emotions Recognition Test (TREF), which is designed to identify facial affects recognition impairments in clinical practice. The test is composed of 54 photos and evaluates abilities in the recognition of six universal emotions (joy, anger, sadness, fear, disgust and contempt). Each of these emotions is represented with colored photos of 4 different models (two men and two women) at nine intensity levels from 20 to 100%. Each photo is presented during 10 seconds; no time limit for responding is applied. The present study compared the scores of the TREF test in a sample of healthy controls (64 subjects) and people with stabilized schizophrenia (45 subjects) according to the DSM IV-TR criteria. We analysed global scores for all emotions, as well as subscores for each emotion, between these two groups, taking into account gender differences. Our results were consistent with previous findings. Applying the TREF, we confirmed an impairment in facial affects recognition in schizophrenia by showing significant differences between the two groups in their global results (76.45% for healthy controls versus 61.28% for people with schizophrenia), as well as in subscores for each emotion except for joy. Scores for women were significantly higher than for men in the population

  14. Role of temporal processing stages by inferior temporal neurons in facial recognition

    Directory of Open Access Journals (Sweden)

    Yasuko eSugase-Miyamoto

    2011-06-01

    Full Text Available In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of

  15. Facial Emotion Recognition in Child Psychiatry: A Systematic Review

    Science.gov (United States)

    Collin, Lisa; Bindra, Jasmeet; Raju, Monika; Gillberg, Christopher; Minnis, Helen

    2013-01-01

    This review focuses on facial affect (emotion) recognition in children and adolescents with psychiatric disorders other than autism. A systematic search, using PRISMA guidelines, was conducted to identify original articles published prior to October 2011 pertaining to face recognition tasks in case-control studies. Used in the qualitative…

  17. Facial emotion recognition and alexithymia in adults with somatoform disorders.

    Science.gov (United States)

    Pedrosa Gil, Francisco; Ridout, Nathan; Kessler, Henrik; Neuffer, Michaela; Schoechlin, Claudia; Traue, Harald C; Nickel, Marius

    2009-01-01

    The primary aim of this study was to investigate facial emotion recognition in patients with somatoform disorders (SFD). Also of interest was the extent to which concurrent alexithymia contributed to any changes in emotion recognition accuracy. Twenty patients with SFD and twenty healthy controls, matched for age, sex and education, were assessed with the Facially Expressed Emotion Labelling Test of facial emotion recognition and the 26-item Toronto Alexithymia Scale (TAS-26). Patients with SFD exhibited elevated alexithymia symptoms relative to healthy controls. Patients with SFD also recognized significantly fewer emotional expressions than did the healthy controls. However, the group difference in emotion recognition accuracy became nonsignificant once the influence of alexithymia was controlled for statistically. This suggests that the deficit in facial emotion recognition observed in the patients with SFD was most likely a consequence of concurrent alexithymia. Impaired facial emotion recognition observed in the patients with SFD could plausibly have a negative influence on these individuals' social functioning.

  18. Recognition of social identity in ants

    DEFF Research Database (Denmark)

    Bos, Nick; d'Ettorre, Patrizia

    2012-01-01

    Recognizing the identity of others, from the individual to the group level, is a hallmark of society. Ants, and other social insects, have evolved advanced societies characterized by efficient social recognition systems. Colony identity is mediated by colony specific signature mixtures, a blend of ... is formed, where in the nervous system it is localized, and the possible role of learning. We combine seemingly contradictory evidence into a novel, parsimonious theory for the information processing of nestmate recognition cues.

  19. Automatic recognition of facial movement for paralyzed face.

    Science.gov (United States)

    Wang, Ting; Dong, Junyu; Sun, Xin; Zhang, Shu; Wang, Shengke

    2014-01-01

    Facial nerve paralysis is a common disease due to nerve damage. Most approaches for evaluating the degree of facial paralysis rely on a set of different facial movements as commanded by doctors. Therefore, automatic recognition of the patterns of facial movement is fundamental to the evaluation of the degree of facial paralysis. In this paper, a novel method named Active Shape Models plus Local Binary Patterns (ASMLBP) is presented for recognizing facial movement patterns. Firstly, the Active Shape Models (ASMs) are used in the method to locate facial key points. According to these points, the face is divided into eight local regions. Then the descriptors of these regions are extracted by using Local Binary Patterns (LBP) to recognize the patterns of facial movement. The proposed ASMLBP method is tested on both the collected facial paralysis database with 57 patients and another publicly available database named the Japanese Female Facial Expression (JAFFE). Experimental results demonstrate that the proposed method is efficient for both paralyzed and normal faces.
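
    Landmark localization with ASMs is beyond a short sketch, so the eight local regions are assumed given below; each region is summarized by a uniform-LBP histogram and the histograms are concatenated, roughly the descriptor such a method could feed to its movement-pattern classifier.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def region_lbp_descriptor(gray, regions, P=8, R=1):
    """Concatenate one uniform-LBP histogram per facial region; `regions`
    is a list of (row0, row1, col0, col1) boxes, standing in for the eight
    local regions an ASM fit would delimit."""
    feats = []
    for r0, r1, c0, c1 in regions:
        codes = local_binary_pattern(gray[r0:r1, c0:c1], P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

# Toy usage: a 64x64 face split into a 4x2 grid of eight regions.
face = np.random.rand(64, 64)
regions = [(r, r + 16, c, c + 32) for r in range(0, 64, 16) for c in (0, 32)]
print(region_lbp_descriptor(face, regions).shape)   # (80,)
```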

  20. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Full Text Available Emotions play a crucial role in person-to-person interaction. In recent years, there has been growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications, especially by observing facial expressions. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user's emotional expressions. We present an approach for emotion recognition from facial expression, hand and body posture. Our model uses a multimodal emotion recognition system in which two different models are used for facial expression recognition and for hand and body posture recognition, and the results of both classifiers are then combined using a third classifier which gives the resulting emotion. The multimodal system gives more accurate results than a single-modality or bimodal system.
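
    The described design, two modality-specific classifiers whose outputs feed a third, fusing classifier, is essentially stacking; a minimal sketch with synthetic features (a rigorous version would fuse cross-validated base predictions rather than training-set probabilities):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(5)
n = 150
X_face = rng.random((n, 40))        # stand-in facial-expression features
X_body = rng.random((n, 20))        # stand-in hand/body-posture features
y = rng.integers(0, 6, size=n)      # six emotion classes

# Two modality-specific classifiers...
face_clf = SVC(probability=True).fit(X_face, y)
body_clf = SVC(probability=True).fit(X_body, y)

# ...whose per-class scores feed a third, fusing classifier (stacking).
meta_X = np.hstack([face_clf.predict_proba(X_face),
                    body_clf.predict_proba(X_body)])
meta_clf = LogisticRegression(max_iter=1000).fit(meta_X, y)
print("fused training accuracy:", meta_clf.score(meta_X, y))
```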

  1. Facial Gesture Recognition Using Correlation And Mahalanobis Distance

    CERN Document Server

    Kapoor, Supriya; Bhatia, Rahul

    2010-01-01

    Augmenting human-computer interaction with automated analysis and synthesis of facial expressions is a goal towards which much research effort has been devoted recently. Facial gesture recognition is one of the important components of natural human-machine interfaces; it may also be used in behavioural science, security systems and in clinical practice. Although humans recognise facial expressions virtually without effort or delay, reliable expression recognition by machine is still a challenge. The face expression recognition problem is challenging because different individuals display the same expression differently. This paper presents an overview of gesture recognition in real time using the concepts of correlation and Mahalanobis distance. We consider the six universal emotional categories, namely joy, anger, fear, disgust, sadness and surprise.
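
    A sketch of the two matching scores named in the abstract, normalized cross-correlation against a class template and Mahalanobis distance to a class distribution; the feature dimensionality and class data are synthetic placeholders.

```python
import numpy as np

def normalized_correlation(a, b):
    """Normalized cross-correlation between two equally sized arrays."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mahalanobis(x, mean, cov_inv):
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Toy usage: score a probe feature vector against one expression class.
rng = np.random.default_rng(6)
cls = rng.random((30, 16))                      # training vectors for "joy"
mean, cov = cls.mean(axis=0), np.cov(cls, rowvar=False)
cov_inv = np.linalg.pinv(cov)                   # pseudo-inverse for stability
probe = rng.random(16)
print("correlation:", normalized_correlation(probe, mean))
print("mahalanobis:", mahalanobis(probe, mean, cov_inv))
```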

  2. Own- and Other-Race Face Identity Recognition in Children: The Effects of Pose and Feature Composition

    Science.gov (United States)

    Anzures, Gizelle; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; de Viviés, Xavier; Lee, Kang

    2014-01-01

    We used a matching-to-sample task and manipulated facial pose and feature composition to examine the other-race effect (ORE) in face identity recognition between 5 and 10 years of age. Overall, the present findings provide a genuine measure of own- and other-race face identity recognition in children that is independent of photographic and image…

  4. Frame-Based Facial Expression Recognition Using Geometrical Features

    Directory of Open Access Journals (Sweden)

    Anwar Saeed

    2014-01-01

    Full Text Available To improve human-computer interaction (HCI) so that it is as good as human-human interaction, an efficient approach to human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; such knowledge requires human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we have found that using eight facial points we can achieve the state-of-the-art recognition rate. However, the previous state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
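
    A geometry-based feature vector of the kind described can be sketched as follows; the pairwise-distance features and the SVM classifier are illustrative stand-ins, since the paper's exact feature definitions are not reproduced here.

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def geometric_features(points):
    """Pairwise Euclidean distances among facial points (N x 2 array).

    With 8 points this yields 28 distances; distances are one plausible
    geometric feature choice, not the paper's exact feature set.
    """
    points = np.asarray(points, dtype=float)
    scale = np.linalg.norm(points.max(0) - points.min(0))  # crude size normalization
    return np.array([np.linalg.norm(points[i] - points[j]) / scale
                     for i, j in combinations(range(len(points)), 2)])

# Hypothetical usage: one landmark set per frame, labelled by expression.
# X = np.stack([geometric_features(p) for p in landmark_sets])
# clf = SVC(kernel="rbf").fit(X, labels)
```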

  5. Facial emotion recognition in patients with violent schizophrenia.

    Science.gov (United States)

    Demirbuga, Sedat; Sahin, Esat; Ozver, Ismail; Aliustaoglu, Suheyla; Kandemir, Eyup; Varkal, Mihriban D; Emul, Murat; Ince, Haluk

    2013-03-01

    People with schizophrenia are more likely to be considered violent than the general population. Besides some well-described symptoms, patients with schizophrenia have problems recognizing basic facial emotions, which could underlie the misinterpretation of others' intentions that can lead to violent behaviors. We aimed to investigate facial emotion recognition ability in violent and non-violent patients with schizophrenia. Symptom severity in both groups was evaluated with the Positive and Negative Syndrome Scale. A computer-based test comprising photos of four male and four female models with happy, surprised, fearful, sad, angry, disgusted, and neutral facial expressions from Ekman & Friesen's series was administered to both groups. In total, 41 outpatients with violent schizophrenia and 35 outpatients with non-violent schizophrenia participated in the study. The mean age of the violent schizophrenia group was 41.50±7.56 years, and the mean age of the non-violent group was 39.94±6.79 years. There were no significant differences between groups in reaction time for recognizing each emotion (p>0.05). In addition, the accuracy of answers on the facial emotion recognition test for each emotion and the distribution of misidentifications did not differ significantly between groups (p>0.05). Facial emotion recognition is impaired in schizophrenia regardless of violence, and this impairment seems to be a trait feature of the illness. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Dissociating Face Identity and Facial Expression Processing Via Visual Adaptation

    Directory of Open Access Journals (Sweden)

    Hong Xu

    2012-10-01

    Full Text Available Face identity and facial expression are processed in two distinct neural pathways. However, most of the existing face adaptation literature studies them separately, despite the fact that they are two aspects of the same face. The current study conducted a systematic comparison between these two aspects using face adaptation, investigating how top- and bottom-half face parts contribute to the processing of face identity and facial expression. A real face (sad, “Adam”) and its two size-equivalent face parts (top- and bottom-half) were used as the adaptor in separate conditions. For face identity adaptation, the test stimuli were generated by morphing Adam's sad face with another person's sad face (“Sam”). For facial expression adaptation, the test stimuli were created by morphing Adam's sad face with his neutral face and morphing the neutral face with his happy face. In each trial, after exposure to the adaptor, observers indicated the perceived face identity or facial expression of the following test face via a key press. They were also tested in a baseline condition without adaptation. Results show that the top- and bottom-half face each generated a significant face identity aftereffect. However, the aftereffect from top-half face adaptation is much larger than that from the bottom-half face. On the contrary, only the bottom-half face generated a significant facial expression aftereffect. This dissociation of top- and bottom-half face adaptation suggests that face parts play different roles in face identity and facial expression processing. It thus provides further evidence for distributed systems of face perception.

  7. Face Processing in Children with Autism Spectrum Disorder: Independent or Interactive Processing of Facial Identity and Facial Expression?

    Science.gov (United States)

    Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-01-01

    The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…

  8. Development of Facial Emotion Recognition in Childhood : Age-related Differences in a Shortened Version of the Facial Expressions of Emotion - Stimuli and Tests

    NARCIS (Netherlands)

    Coenen, Maraike; Aarnoudse, Ceciel; Huitema, Rients; Braams, Olga; Veenstra, Wencke S.

    2013-01-01

    Introduction Facial emotion recognition is essential for social interaction. The development of emotion recognition abilities is not yet entirely understood (Tonks et al. 2007). Facial emotion recognition emerges gradually, with happiness recognized earliest (Herba & Phillips, 2004). The recognition

  10. Comparison of emotion recognition from facial expression and music.

    Science.gov (United States)

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined, as their presentation is usually very short (micro expressions), and the recognition itself does not have to be a conscious process. We assumed that recognition from facial expressions is favored over recognition of emotions communicated through music. In order to compare success rates in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey that included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music works with 8 offered emotions. Recognition of emotions expressed through classical music pieces was significantly less successful than recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized when presented on human faces than in music, possibly because understanding facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for by the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive abilities such as attention, memory and motivation. Music pieces are probably processed differently in the brain than facial expressions and, consequently, differently evaluated as relevant emotional clues.

  11. Facial recognition software success rates for the identification of 3D surface reconstructed facial images: implications for patient privacy and security.

    Science.gov (United States)

    Mazura, Jan C; Juluru, Krishna; Chen, Joseph J; Morgan, Tara A; John, Majnu; Siegel, Eliot L

    2012-06-01

    Image de-identification has focused on the removal of textual protected health information (PHI). Surface reconstructions of the face have the potential to reveal a subject's identity even when textual PHI is absent. This study assessed the ability of a computer application to match research subjects' 3D facial reconstructions with conventional photographs of their face. In a prospective study, 29 subjects underwent CT scans of the head and had frontal digital photographs of their face taken. Facial reconstructions of each CT dataset were generated on a 3D workstation. In phase 1, photographs of the 29 subjects undergoing CT scans were added to a digital directory and tested for recognition using facial recognition software. In phases 2-4, additional photographs were added in groups of 50 to increase the pool of possible matches and the test for recognition was repeated. As an internal control, photographs of all subjects were tested for recognition against an identical photograph. Of 3D reconstructions, 27.5% were matched correctly to corresponding photographs (95% upper CL, 40.1%). All study subject photographs were matched correctly to identical photographs (95% lower CL, 88.6%). Of 3D reconstructions, 96.6% were recognized simply as a face by the software (95% lower CL, 83.5%). Facial recognition software has the potential to recognize features on 3D CT surface reconstructions and match these with photographs, with implications for PHI.
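
    The recognition protocol in this study amounts to 1:N identification of a probe against a growing photo gallery. A minimal sketch of such matching, assuming embedding vectors from some face encoder (the actual software used in the study is not specified here):

```python
import numpy as np

def identify(probe_vec, gallery):
    """Return the best gallery match and its cosine similarity.

    `gallery` maps subject IDs to embedding vectors produced by a face
    encoder of your choice (hypothetical here); the probe, e.g. an
    embedding of a 3D CT surface render, is matched 1:N against it.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {sid: cos(probe_vec, v) for sid, v in gallery.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```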

  12. Facial Expression Recognition Teaching to Preschoolers with Autism

    DEFF Research Database (Denmark)

    Christinaki, Eirini; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2013-01-01

    The recognition of facial expressions is important for the perception of emotions. Understanding emotions is essential in human communication and social interaction. Children with autism have been reported to exhibit deficits in the recognition of affective expressions. Their difficulties...... for teaching emotion recognition from facial expressions should occur as early as possible in order to be successful and to have a positive effect. It is claimed that Serious Games can be very effective in the areas of therapy and education for children with autism. However, those computer interventions...... an educational computer game, which provides physical interaction by employing natural user interface (NUI), we aim to support early intervention and to foster facial expression learning....

  13. Non-Cooperative Facial Recognition Video Dataset Collection Plan

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Marcia L.; Erikson, Rebecca L.; Lombardo, Nicholas J.

    2013-08-31

    The Pacific Northwest National Laboratory (PNNL) will produce a non-cooperative (i.e. not posing for the camera) facial recognition video data set for research purposes to evaluate and enhance facial recognition systems technology. The aggregate data set consists of 1) videos capturing PNNL role players and public volunteers in three key operational settings, 2) photographs of the role players for enrolling in an evaluation database, and 3) ground truth data that documents when the role player is within various camera fields of view. PNNL will deliver the aggregate data set to DHS, who may then choose to make it available to other government agencies interested in evaluating and enhancing facial recognition systems. The three operational settings that will be the focus of the video collection effort include: 1) unidirectional crowd flow, 2) bi-directional crowd flow, and 3) linear and/or serpentine queues.

  14. Impaired facial emotion recognition in a ketamine model of psychosis.

    Science.gov (United States)

    Ebert, Andreas; Haussleiter, Ida Sibylle; Juckel, Georg; Brüne, Martin; Roser, Patrik

    2012-12-30

    Social cognitive disabilities are a common feature of schizophrenia. Given the role of glutamatergic neurotransmission in schizophrenia-related cognitive impairments, we investigated the effects of the glutamatergic NMDA receptor antagonist ketamine on facial emotion recognition. Eighteen healthy male subjects were tested on two occasions, one without medication and one after administration of subanesthetic doses of intravenous ketamine. Emotion recognition was examined using the Ekman 60 Faces Test. In addition, attention was measured with the Continuous Performance Test (CPT), and psychopathology was rated using the Psychotomimetic States Inventory (PSI). Ketamine produced a non-significant deterioration of global emotion recognition abilities. Specifically, the ability to correctly identify the facial expression of sadness was significantly reduced in the ketamine condition. These results were independent of psychotic symptoms and selective attention. Our results point to the involvement of the glutamatergic system in the ability to recognize facial emotions. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  15. Facial Expression Recognition of Various Internal States via Manifold Learning

    Institute of Scientific and Technical Information of China (English)

    Young-Suk Shin

    2009-01-01

    Emotions are becoming increasingly important in human-centered interaction architectures. Recognition of facial expressions, which are central to human-computer interaction, seems natural and desirable. However, facial expressions reflect mixed emotions that are continuous rather than discrete and vary from moment to moment. This paper presents a novel method for recognizing facial expressions of various internal states via manifold learning, in pursuit of human-centered interaction studies. A critical review of widely used emotion models is given; then, facial expression features of various internal states are extracted via locally linear embedding (LLE). The recognition of facial expressions is carried out along the pleasure-displeasure and arousal-sleep dimensions of a two-dimensional model of emotion. The recognition results show that various internal state expressions mapped to the embedding space via the LLE algorithm can effectively represent the structural nature of the two-dimensional model of emotion. Our research has therefore established that the relationship between facial expressions of various internal states can be elaborated in the two-dimensional model of emotion via the locally linear embedding algorithm.
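
    The LLE step is available off the shelf in scikit-learn. A minimal sketch, with random arrays standing in for aligned expression images (neighbor count and sizes are illustrative, not the paper's settings):

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Stand-in data: each row is a flattened, aligned face image.
X = np.random.rand(200, 64 * 64)

# Embed into two dimensions, loosely analogous to the paper's
# pleasure-displeasure and arousal-sleep axes.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
coords = lle.fit_transform(X)  # shape (200, 2): one 2-D point per image
```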

  16. Fully automatic recognition of the temporal phases of facial actions.

    Science.gov (United States)

    Valstar, Michel F; Pantic, Maja

    2012-02-01

    Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)] that compound expressions. AUs are agnostic, leaving the inference about conveyed intent to higher order decision making (e.g., emotion recognition). The proposed fully automatic method not only allows the recognition of 22 AUs but also explicitly models their temporal characteristics (i.e., sequences of temporal segments: neutral, onset, apex, and offset). To do so, it uses a facial point detector based on Gabor-feature-based boosted classifiers to automatically localize 20 facial fiducial points. These points are tracked through a sequence of images using a method called particle filtering with factorized likelihoods. To encode AUs and their temporal activation models based on the tracking data, it applies a combination of GentleBoost, support vector machines, and hidden Markov models. We attain an average AU recognition rate of 95.3% when tested on a benchmark set of deliberately displayed facial expressions and 72% when tested on spontaneous expressions.
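
    The Gabor-feature front end used for fiducial point localization can be approximated with OpenCV. A sketch of a small filter bank and a per-point response vector, with illustrative parameters rather than the authors' exact settings:

```python
import cv2
import numpy as np

def gabor_bank(ksize=21, sigmas=(2, 4), n_thetas=4, lambd=10.0, gamma=0.5):
    """A small bank of Gabor kernels (parameters are illustrative only)."""
    kernels = []
    for sigma in sigmas:
        for k in range(n_thetas):
            theta = k * np.pi / n_thetas  # evenly spaced orientations
            kernels.append(cv2.getGaborKernel((ksize, ksize), sigma,
                                              theta, lambd, gamma))
    return kernels

def point_response(gray, x, y, kernels):
    """Stack filter responses at one candidate fiducial point.

    Filtering the whole image per kernel is wasteful but keeps the
    sketch short; a boosted classifier would consume these responses.
    """
    return np.array([cv2.filter2D(gray, cv2.CV_32F, k)[y, x]
                     for k in kernels])
```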

  17. Violent video game play impacts facial emotion recognition.

    Science.gov (United States)

    Kirsh, Steven J; Mounts, Jeffrey R W

    2007-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent video game play. Color photos of calm facial expressions were morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph. Typically, happy faces are identified faster than angry faces (the happy-face advantage). Results indicated that playing a violent video game led to a reduction in the happy-face advantage. Implications of these findings are discussed with respect to current models of aggressive behavior.

  18. Recognition of facial affect in girls with conduct disorder.

    Science.gov (United States)

    Pajer, Kathleen; Leininger, Lisa; Gardner, William

    2010-02-28

    Impaired recognition of facial affect has been reported in youths and adults with antisocial behavior. However, few of these studies have examined subjects with the psychiatric disorders associated with antisocial behavior, and there are virtually no data on females. Our goal was to determine whether facial affect recognition was impaired in adolescent girls with conduct disorder (CD). Performance on the Ekman Pictures of Facial Affect (POFA) task was compared in 35 girls with CD (mean age 17.9±0.95 years; 38.9% African-American) and 30 girls who had no lifetime history of psychiatric disorder (mean age 17.6±0.77 years; 30% African-American). Forty-five slides representing the six emotions in the POFA were presented one at a time; stimulus duration was 5 s. Multivariate analyses indicated that CD vs. control status was not significantly associated with the total number of correct answers or with the number of correct answers for any specific emotion. Effect sizes were all considered small. Within-CD analyses did not demonstrate a significant effect of aggressive antisocial behavior on facial affect recognition. Our findings suggest that girls with CD are not impaired in facial affect recognition. However, we did find that girls with a history of trauma/neglect made a greater number of errors in recognizing fearful faces. Explanations for these findings are discussed and implications for future research presented. © 2009 Elsevier B.V. All rights reserved.

  19. GENDER DIFFERENCES IN THE RECOGNITION OF FACIAL EXPRESSIONS OF EMOTION

    Directory of Open Access Journals (Sweden)

    CARLOS FELIPE PARDO-VÉLEZ

    2003-07-01

    Full Text Available Gender differences in the recognition of facial expressions of anger, happiness and sadness were researched in students 18-25 years of age. A reaction time procedure was used, and the percentage of correct answers when recognizing was also measured. Though the working hypothesis expected gender differences in facial expression recognition, results suggest that these differences are not significant at a level of 0.05. Statistical analysis shows a greater facility (at a non-significant level) for women to recognize happiness expressions, and for men to recognize anger expressions. The implications of these data are discussed, along with possible extensions of this investigation in terms of sample size and college major of the participants.

  20. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    Science.gov (United States)

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  2. Facial emotion recognition in adolescents with personality pathology.

    Science.gov (United States)

    Berenschot, Fleur; van Aken, Marcel A G; Hessels, Christel; de Castro, Bram Orobio; Pijl, Ysbrand; Montagne, Barbara; van Voorst, Guus

    2014-07-01

    It has been argued that a heightened emotional sensitivity interferes with the cognitive processing of facial emotion recognition and may explain the intensified emotional reactions to external emotional stimuli of adults with personality pathology, such as borderline personality disorder (BPD). This study examines if and how deviations in facial emotion recognition also occur in adolescents with personality pathology. Forty-two adolescents with personality pathology, 111 healthy adolescents and 28 psychiatric adolescents without personality pathology completed the Emotion Recognition Task, measuring their accuracy and sensitivity in recognizing positive and negative emotion expressions presented in several, morphed, expression intensities. Adolescents with personality pathology showed an enhanced recognition accuracy of facial emotion expressions compared to healthy adolescents and clients with various Axis-I psychiatric diagnoses. They were also more sensitive to less intensive expressions of emotions than clients with various Axis-I psychiatric diagnoses, but not more than healthy adolescents. As has been shown in research on adults with BPD, adolescents with personality pathology show enhanced facial emotion recognition.

  3. Facial recognition technology safeguards Beijing Olympics

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    To ensure the safety of spectators and athletes at the biggest-ever Olympic Games, automation experts from CAS have developed China's first system to identify individuals by their facial features, and successfully applied it to the opening night security check on 8 August in Beijing.

  4. Enhanced recognition of facial expressions of disgust in opiate users.

    OpenAIRE

    Martin, L.

    2005-01-01

    This literature review focuses on the research relating to facial expressions of emotion, first addressing the question of what they are and what role they play, before going on to review the mechanisms by which they are recognised in others. It then considers the psychiatric and drug-using populations in which the ability to recognise facial expressions is compromised, and how this corresponds to the social behaviour that characterises these groups. Finally, this review will focus on one par...

  5. The first facial expression recognition and analysis challenge

    NARCIS (Netherlands)

    Valstar, Michel F.; Jiang, Bihan; Mehu, Marc; Pantic, Maja; Scherer, Klaus

    2011-01-01

    Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability has come some way; for instance, there exist a number of commonly u

  7. Automatic recognition of emotions from facial expressions

    Science.gov (United States)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificial intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in entertainment industries, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing the accuracy. In addition, we have enhanced SVM to multiple dimensions while retaining the high accuracy rate of SVM. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).

  8. Intelligent Facial Recognition Systems: Technology advancements for security applications

    Energy Technology Data Exchange (ETDEWEB)

    Beer, C.L.

    1993-07-01

    Insider problems such as theft and sabotage can occur within the security and surveillance realm of operations when unauthorized people obtain access to sensitive areas. A possible solution to these problems is a means to identify individuals (not just credentials or badges) in a given sensitive area and provide full time personnel accountability. One approach desirable at Department of Energy facilities for access control and/or personnel identification is an Intelligent Facial Recognition System (IFRS) that is non-invasive to personnel. Automatic facial recognition does not require the active participation of the enrolled subjects, unlike most other biological measurement (biometric) systems (e.g., fingerprint, hand geometry, or eye retinal scan systems). It is this feature that makes an IFRS attractive for applications other than access control such as emergency evacuation verification, screening, and personnel tracking. This paper discusses current technology that shows promising results for DOE and other security applications. A survey of research and development in facial recognition identified several companies and universities that were interested and/or involved in the area. A few advanced prototype systems were also identified. Sandia National Laboratories is currently evaluating facial recognition systems that are in the advanced prototype stage. The initial application for the evaluation is access control in a controlled environment with a constant background and with cooperative subjects. Further evaluations will be conducted in a less controlled environment, which may include a cluttered background and subjects that are not looking towards the camera. The outcome of the evaluations will help identify areas of facial recognition systems that need further development and will help to determine the effectiveness of the current systems for security applications.

  9. Fingerprint recognition with identical twin fingerprints.

    Directory of Open Access Journals (Sweden)

    Xunqiang Tao

    Full Text Available Fingerprint recognition with identical twins is a challenging task due to the closest genetics-based relationship existing in identical twins. Several pioneers have analyzed the similarity between twins' fingerprints. In this work we continue to investigate the topic of the similarity of identical twin fingerprints. Our study was based on a large identical twin fingerprint database that contains 83 twin pairs, 4 fingers per individual and six impressions per finger: 3984 (83×2×4×6) images. Compared to previous work, our contributions are summarized as follows: (1) Two state-of-the-art fingerprint identification methods, P071 and VeriFinger 6.1, were used, rather than one fingerprint identification method as in previous studies. (2) Six impressions per finger were captured, rather than just one impression, which makes the genuine distribution of matching scores more realistic. (3) A larger sample (83 pairs) was collected. (4) A novel statistical analysis, which aims at showing the probability distribution of the fingerprint types for the corresponding fingers of identical twins which have the same fingerprint type, has been conducted. (5) A novel analysis, which aims at showing which finger from identical twins has a higher probability of having the same fingerprint type, has been conducted. Our results showed that: (a) A state-of-the-art automatic fingerprint verification system can distinguish identical twins without drastic degradation in performance. (b) The chance that fingerprints from identical twins have the same type is 0.7440, compared to 0.3215 for non-identical twins. (c) For the corresponding fingers of identical twins which have the same fingerprint type, the probability distribution of the five major fingerprint types is similar to the probability distribution of fingerprint type over all fingers. (d) For each of the four fingers of identical twins, the probability of having the same fingerprint type is similar.

  10. Gender identity rather than sexual orientation impacts on facial preferences.

    Science.gov (United States)

    Ciocca, Giacomo; Limoncin, Erika; Cellerino, Alessandro; Fisher, Alessandra D; Gravina, Giovanni Luca; Carosa, Eleonora; Mollaioli, Daniele; Valenzano, Dario R; Mennucci, Andrea; Bandini, Elisa; Di Stasi, Savino M; Maggi, Mario; Lenzi, Andrea; Jannini, Emmanuele A

    2014-10-01

    Differences in facial preferences between heterosexual men and women are well documented. It is still a matter of debate, however, how variations in sexual identity/sexual orientation may modify facial preferences. This study aims to investigate the facial preferences of male-to-female (MtF) individuals with gender dysphoria (GD) and the influence of short-term/long-term relationships on facial preference, in comparison with healthy subjects. Eighteen untreated MtF subjects, 30 heterosexual males, 64 heterosexual females, and 42 homosexual males, recruited from university students/staff, at gay events, and in gender clinics, were shown a composite male or female face. The sexual dimorphism of these pictures was stressed or reduced in a continuous fashion through an open-source morphing program (gtkmorph, based on the X-Morph algorithm) with a sequence of 21 pictures of the same face warped from a feminized to a masculinized shape. MtF GD subjects and heterosexual females showed the same pattern of preferences: a clear preference for less dimorphic (more feminized) faces for both short- and long-term relationships. Conversely, both heterosexual and homosexual men selected significantly more dimorphic faces, showing a preference for hyperfeminized and hypermasculinized faces, respectively. These data show that the facial preferences of MtF GD individuals mirror those of the sex congruent with their gender identity. Conversely, heterosexual males trace the facial preferences of homosexual men, indicating that changes in sexual orientation do not substantially affect preference for the most attractive faces. © 2014 International Society for Sexual Medicine.

  11. Featural processing in recognition of emotional facial expressions.

    Science.gov (United States)

    Beaudry, Olivia; Roy-Charland, Annie; Perron, Melanie; Cormier, Isabelle; Tapp, Roxane

    2014-04-01

    The present study aimed to clarify the role played by the eye/brow and mouth areas in the recognition of the six basic emotions. In Experiment 1, accuracy was examined while participants viewed partial and full facial expressions; in Experiment 2, participants viewed full facial expressions while their eye movements were recorded. Recognition rates were consistent with previous research: happiness was highest and fear was lowest. The mouth and eye/brow areas were not equally important for the recognition of all emotions. More precisely, while the mouth was revealed to be important in the recognition of happiness and the eye/brow area of sadness, results are not as consistent for the other emotions. In Experiment 2, consistent with previous studies, the eyes/brows were fixated for longer periods than the mouth for all emotions. Again, variations occurred as a function of the emotions, the mouth having an important role in happiness and the eyes/brows in sadness. The general pattern of results for the other four emotions was inconsistent between the experiments as well as across different measures. The complexity of the results suggests that the recognition process of emotional facial expressions cannot be reduced to a simple feature processing or holistic processing for all emotions.

  12. Efficient Web-based Facial Recognition System Employing 2DHOG

    CERN Document Server

    Abdelwahab, Moataz M; Yousry, Islam

    2012-01-01

    In this paper, a system for facial recognition to identify missing and found people in Hajj and Umrah is described as a web portal. Explicitly, we present a novel algorithm for recognition and classification of facial images based on applying 2DPCA to a 2D representation of the histogram of oriented gradients (2D-HOG), which maintains the spatial relation between pixels of the input images. This algorithm allows a compact representation of the images, which reduces the computational complexity and the storage requirements while maintaining the highest reported recognition accuracy. This promotes the method for usage with very large datasets. A large dataset was collected for people in Hajj. Experimental results employing the ORL, UMIST, JAFFE, and HAJJ datasets confirm these excellent properties.
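
    2DPCA differs from ordinary PCA in operating on image matrices directly, via the image covariance matrix. A minimal NumPy sketch, where in the paper's setting each input matrix would be a 2D-HOG map rather than a raw image:

```python
import numpy as np

def fit_2dpca(mats, n_components):
    """2DPCA: top eigenvectors of the image covariance matrix.

    `mats` has shape (n, h, w); G accumulates (A - mean)^T (A - mean)
    over the training matrices, so its eigenvectors live in R^w.
    """
    mean = mats.mean(axis=0)
    G = np.zeros((mats.shape[2], mats.shape[2]))
    for A in mats:
        D = A - mean
        G += D.T @ D
    G /= len(mats)
    vals, vecs = np.linalg.eigh(G)          # eigenvalues in ascending order
    return mean, vecs[:, -n_components:]    # keep the top components

def project(A, mean, W):
    """Compact feature matrix Y = (A - mean) W, shape (h, n_components)."""
    return (A - mean) @ W
```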

  13. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    Science.gov (United States)

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in-depth younger and older adults’ ability to label a virtual agent’s facial emotions, an import consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand if those age-related differences are influenced by the intensity of the emotion, dynamic formation of emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition expressed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a

  14. Facial expression recognition in Alzheimer's disease: a longitudinal study.

    Science.gov (United States)

    Torres, Bianca; Santos, Raquel Luiza; Sousa, Maria Fernanda Barroso de; Simões Neto, José Pedro; Nogueira, Marcela Moreira Lima; Belfort, Tatiana T; Dias, Rachel; Dourado, Marcia Cristina Nascimento

    2015-05-01

    Facial recognition is one of the most important aspects of social cognition. In this study, we investigate the patterns of change and the factors involved in the ability to recognize emotion in mild Alzheimer's disease (AD). Through a longitudinal design, we assessed 30 people with AD. We used an experimental task that includes matching expressions with picture stimuli, labelling emotions and emotionally recognizing a stimulus situation. We observed a significant difference in the situational recognition task (p ≤ 0.05) between baseline and the second evaluation. The linear regression showed that cognition is a predictor of emotion recognition impairment (p ≤ 0.05). The ability to perceive emotions from facial expressions was impaired, particularly when the emotions presented were relatively subtle. Cognition is recruited to comprehend emotional situations in cases of mild dementia.

  15. Facial Expression Recognition Teaching to Preschoolers with Autism

    DEFF Research Database (Denmark)

    Christinaki, Eirini; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2013-01-01

    for teaching emotion recognition from facial expressions should occur as early as possible in order to be successful and to have a positive effect. It is claimed that Serious Games can be very effective in the areas of therapy and education for children with autism. However, those computer interventions...... require considerable skills for interaction. Before the age of 6, most children with autism do not have such basic motor skills in order to manipulate a mouse or a keyboard. Our approach takes account of the specific characteristics of preschoolers with autism and their physical inabilities. By creating......The recognition of facial expressions is important for the perception of emotions. Understanding emotions is essential in human communication and social interaction. Children with autism have been reported to exhibit deficits in the recognition of affective expressions. Their difficulties...

  16. Identity Restored: Nesmin's Forensic Facial Reconstruction in Context

    Directory of Open Access Journals (Sweden)

    Branislav Anđelković

    2016-03-01

    Full Text Available A wide range of archaeological human remains stay, for the most part, anonymous and are consequently treated as objects of analysis, not as dead people. With the growing availability of medical imaging and rapidly developing computer technology, 3D digital facial reconstruction, as a noninvasive form of study, offers a successful method of recreating faces from mummified human remains. Forensic facial reconstruction has been utilized for various purposes in scientific investigation, including restoring the physical appearance of the people of ancient civilizations, an important aspect of their individual identity. Restoring the identity of the Belgrade mummy began in 1991. Along with the absolute dating, gender, age, name, rank and provenance, we also established his genealogy. The owner of Cairo stela 22053, discovered at Akhmim in 1885, and of the Belgrade coffin purchased in Luxor in 1888, in which the mummy rests, has been identified as the very same person. Forensic facial reconstruction was used to reproduce, with the highest possible degree of accuracy, the facial appearance of the mummy Nesmin, ca. 300 B.C., a priest from Akhmim, when he was alive.

  17. Holistic face processing can inhibit recognition of forensic facial composites.

    Science.gov (United States)

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format.

  18. [Neurobiological basis of human recognition of facial emotion].

    Science.gov (United States)

    Mikhaĭlova, E S

    2005-01-01

    In this review of modern data and ideas concerning the neurophysiological mechanisms and morphological foundations of the most essential communicative function of humans and monkeys, that of recognizing faces and their emotional expressions, attention is focussed on its dynamic realization and structural provision. On the basis of literature data on hemodynamic and metabolic mapping of the brain, the author analyses the role of different zones of the ventral and dorsal visual cortical pathways, the frontal neocortex and the amygdala in the processing of facial features, as well as the specificity of this processing at each level. Special attention is given to the modular principle of face processing in the temporal cortex. The dynamic characteristics of facial recognition are discussed on the basis of electrical evoked response data in healthy and diseased humans and in monkeys. Modern evidence on the role of different brain structures in the generation of successive evoked response waves, in connection with successive stages of facial processing, is analyzed. The similarities and differences between the mechanisms of recognizing faces and recognizing their emotional expressions are also considered.

  19. Facial Expression Recognition Based on WAPA and OEPA Fastica

    Directory of Open Access Journals (Sweden)

    Humayra Binte Ali

    2014-06-01

    Full Text Available The face is one of the most important biometric traits for its uniqueness and robustness. For this reason researchers from many diversified fields, like security, psychology, image processing, and computer vision, have started to do research on face detection as well as facial expression recognition. Subspace learning methods work very well for recognizing facial features. Among subspace learning techniques, PCA, ICA and NMF are the most prominent topics. In this work, our main focus is on independent component analysis (ICA). Among several architectures of ICA, we used the FastICA and LS-ICA algorithms. We applied FastICA to whole faces and to different facial parts to analyze the influence of different parts on basic facial expressions. Our extended algorithms, WAPA-FastICA and OEPA-FastICA, are discussed in the proposed algorithm section. Locally salient ICA is implemented on the whole face using 8x8 windows to find the more prominent facial features for facial expression. The experiments show that our proposed OEPA-FastICA and WAPA-FastICA outperform the existing prevalent Whole-FastICA and LS-ICA methods.
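
    FastICA itself is available in scikit-learn; applying it separately to whole faces and to part crops loosely mirrors the whole-face versus part-based comparison. A minimal sketch (without the authors' WAPA/OEPA weighting):

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_basis(images, n_components=20):
    """Learn independent components from face (or face-part) images.

    `images` is an array of shape (n, h, w); calling this once on whole
    faces and once on crops of parts (eyes, mouth, ...) gives the kind
    of part-wise comparison the paper describes, at a very rough level.
    """
    X = images.reshape(len(images), -1)       # flatten each image
    ica = FastICA(n_components=n_components)
    codes = ica.fit_transform(X)              # per-image coefficients
    return ica, codes
```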

  20. Dynamic Approaches for Facial Recognition Using Digital Image Speckle Correlation

    Science.gov (United States)

    Rafailovich-Sokolov, Sara; Guan, E.; Afriat, Isablle; Rafailovich, Miriam; Sokolov, Jonathan; Clark, Richard

    2004-03-01

    Digital image analysis techniques have been extensively used in facial recognition. To date, most static facial characterization techniques, which are usually based on Fourier transform methods, are sensitive to lighting, shadows, or modification of appearance by makeup, natural aging or surgery. In this study we have demonstrated that it is possible to uniquely identify faces by analyzing the natural motion of facial features with Digital Image Speckle Correlation (DISC). Human skin has a natural pattern produced by the texture of the skin pores, which is easily visible with conventional digital cameras of resolution greater than 4 megapixels. Hence the application of the DISC method to the analysis of facial motion is very straightforward. Here we demonstrate that the vector diagrams produced by this method for facial images are directly correlated with the underlying muscle structure, which is unique to an individual and is not affected by lighting or make-up. Furthermore, we show that this method can also be used for medical diagnosis in the early detection of facial paralysis and other forms of skin disorders.
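
    DISC correlates the pore-scale speckle pattern between two frames to recover a displacement field. As a rough off-the-shelf stand-in for that correlation step, dense optical flow yields a comparable per-pixel motion field; a sketch with OpenCV's Farneback method (not the authors' algorithm, and the parameters are illustrative):

```python
import cv2

def motion_field(frame_a, frame_b):
    """Dense displacement field between two facial images (BGR inputs).

    DISC proper correlates speckle (pore) patterns block by block;
    Farneback dense optical flow is used here as an approximation.
    """
    ga = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gb = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(ga, gb, None,
                                        pyr_scale=0.5, levels=3,
                                        winsize=21, iterations=3,
                                        poly_n=7, poly_sigma=1.5, flags=0)
    return flow  # shape (h, w, 2): per-pixel (dx, dy) motion vectors
```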

  1. Emotional facial expressions differentially influence predictions and performance for face recognition.

    Science.gov (United States)

    Nomi, Jason S; Rhodes, Matthew G; Cleary, Anne M

    2013-01-01

    This study examined how participants' predictions of future memory performance are influenced by emotional facial expressions. Participants made judgements of learning (JOLs) predicting the likelihood that they would correctly identify a face displaying a happy, angry, or neutral emotional expression in a future two-alternative forced-choice recognition test of identity (i.e., recognition that a person's face was seen before). JOLs were higher for studied faces with happy and angry emotional expressions than for neutral faces. However, neutral test faces with studied neutral expressions had significantly higher identity recognition rates than neutral test faces studied with happy or angry expressions. Thus, these data are the first to demonstrate that people believe happy and angry emotional expressions will lead to better identity recognition in the future relative to neutral expressions. This occurred despite the fact that neutral expressions elicited better identity recognition than happy and angry expressions. These findings contribute to the growing literature examining the interaction of cognition and emotion.

  2. Children with mixed language disorder do not discriminate accurately facial identity when expressions change.

    Science.gov (United States)

    Robel, Laurence; Vaivre-Douret, Laurence; Neveu, Xavier; Piana, Hélène; Perier, Antoine; Falissard, Bruno; Golse, Bernard

    2008-12-01

    We investigated the recognition of pairs of faces (same or different facial identities and expressions) in two groups of 14 children aged 6-10 years, with either an expressive language disorder (ELD) or a mixed language disorder (MLD), and two groups of 14 matched healthy controls. In terms of global performance, children with either ELD or MLD differed little from controls in either face or emotion recognition. In contrast, we found that children with MLD, but not those with ELD, take identical faces to be different if their expressions change. Since children with mixed language disorders are socially more impaired than children with ELD, we think that these features may partly underpin the social difficulties of these children.

  3. Facial Expression Recognition Using 3D Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Young-Hyen Byeon

    2014-12-01

    Full Text Available This paper is concerned with video-based facial expression recognition, frequently used in conjunction with HRI (Human-Robot Interaction), which enables natural interaction between human and robot. For this purpose, we design a 3D-CNN (3D Convolutional Neural Network) augmented with dimensionality reduction methods such as PCA (Principal Component Analysis) and TMPCA (Tensor-based Multilinear Principal Component Analysis) to recognize successive frames of facial expression images obtained through a video camera. The 3D-CNN can achieve some degree of shift and deformation invariance using local receptive fields and spatial subsampling, through dimensionality reduction of the redundant CNN output. The experimental results on a video-based facial expression database reveal that the presented method performs well in comparison to conventional methods such as PCA and TMPCA.
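
    A minimal 3-D convolutional network over video clips can be sketched in PyTorch; the layer sizes here are illustrative, and the paper's PCA/TMPCA reduction stages are omitted:

```python
import torch
import torch.nn as nn

class TinyExprNet3D(nn.Module):
    """Minimal 3-D CNN over (N, C, T, H, W) clips; sizes are illustrative."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # spatio-temporal conv
            nn.ReLU(),
            nn.MaxPool3d(2),                             # subsample T, H and W
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global pooling
        )
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):                 # x: (N, 1, T, H, W)
        h = self.features(x).flatten(1)   # -> (N, 16)
        return self.classifier(h)         # expression logits

# Usage: two grayscale 16-frame 64x64 clips.
# logits = TinyExprNet3D()(torch.randn(2, 1, 16, 64, 64))
```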

  4. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    Full Text Available This paper presents a novel and effective method for facial expression recognition, covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA); it solves the small sample size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
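
    The core of RDA is shrinking each class covariance toward the pooled covariance (and optionally toward a scaled identity), interpolating between QDA and LDA. A minimal NumPy sketch of the regularized covariances, with parameter names following Friedman's formulation (the PSO tuning and boosting stages are omitted):

```python
import numpy as np

def rda_covariances(X, y, lam=0.5, gamma=0.1):
    """Regularized class covariances in the spirit of RDA.

    lam blends each class covariance with the pooled one (lam=0 -> QDA,
    lam=1 -> LDA); gamma further shrinks toward a scaled identity so
    the matrices stay well-conditioned with few samples.
    """
    pooled = np.cov(X, rowvar=False)
    covs = {}
    for c in np.unique(y):
        Sc = np.cov(X[y == c], rowvar=False)
        S = (1 - lam) * Sc + lam * pooled
        ident = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
        covs[c] = (1 - gamma) * S + gamma * ident
    return covs
```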

  5. The Reliability of Facial Recognition of Deceased Persons on Photographs.

    Science.gov (United States)

    Caplova, Zuzana; Obertova, Zuzana; Gibelli, Daniele M; Mazzarelli, Debora; Fracasso, Tony; Vanezis, Peter; Sforza, Chiarella; Cattaneo, Cristina

    2017-09-01

    In humanitarian emergencies, such as the current situation of deceased migrants in the Mediterranean, the antemortem documentation needed for identification may be limited. The use of visual identification has previously been reported in cases of mass disasters such as the Thai tsunami. This pilot study explores the ability of observers to match unfamiliar faces of living and dead persons and whether facial morphology can be used for identification. A questionnaire was given to 41 students and five professionals in the field of forensic identification with the task of choosing whether a facial photograph corresponds to one of five photographs in a lineup, and of identifying the features most useful for recognition. Although the overall recognition score did not significantly differ between professionals and students, the median scores of 78.1% and 80.0%, respectively, were too low to consider this a reliable identification method; it thus needs to be supported by other means. © 2017 American Academy of Forensic Sciences.

  6. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    Science.gov (United States)

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain.

  7. Face identity recognition in autism spectrum disorders: a review of behavioral studies.

    Science.gov (United States)

    Weigelt, Sarah; Koldewyn, Kami; Kanwisher, Nancy

    2012-03-01

    Face recognition--the ability to recognize a person from their facial appearance--is essential for normal social interaction. Face recognition deficits have been implicated in the most common disorder of social interaction: autism. Here we ask: is face identity recognition in fact impaired in people with autism? Reviewing behavioral studies we find no strong evidence for a qualitative difference in how facial identity is processed between those with and without autism: markers of typical face identity recognition, such as the face inversion effect, seem to be present in people with autism. However, quantitatively--i.e., how well facial identity is remembered or discriminated--people with autism perform worse than typical individuals. This impairment is particularly clear in face memory and in face perception tasks in which a delay intervenes between sample and test, and less so in tasks with no memory demand. Although some evidence suggests that this deficit may be specific to faces, further evidence on this question is necessary.

  8. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to enhance the robustness of facial expression recognition, we propose a method of facial expression recognition based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). This method extracts features with the improved LTP and then uses an improved deep belief network as the detector and classifier for the LTP features. The combination of LTP and the improved deep belief network is thereby realized for facial expression recognition. The recognition rate on the CK+ database improved significantly.
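
    The abstract does not spell out the "improved" LTP, but the baseline Local Ternary Pattern it builds on is simple to sketch. The NumPy toy below (function names, the tolerance t=5, and the histogram feature layout are illustrative assumptions, not the paper's specification) encodes each pixel neighbourhood into upper/lower binary maps whose histograms would feed a downstream classifier:

```python
import numpy as np

def ltp_codes(img, t=5):
    """Plain Local Ternary Pattern: compare each interior pixel's 8
    neighbours against the centre value with a tolerance band of +/- t,
    yielding an 'upper' map (bits where neighbour >= centre + t) and a
    'lower' map (bits where neighbour <= centre - t)."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]                                  # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    H, W = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]    # shifted neighbours
        upper |= (n >= c + t).astype(np.int32) << bit    # ternary value +1
        lower |= (n <= c - t).astype(np.int32) << bit    # ternary value -1
    return upper, lower

def ltp_histogram(img, t=5):
    """Concatenated 256-bin histograms of the two binary maps: the kind of
    feature vector the downstream network would consume."""
    upper, lower = ltp_codes(img, t)
    hu, _ = np.histogram(upper, bins=256, range=(0, 256))
    hl, _ = np.histogram(lower, bins=256, range=(0, 256))
    return np.concatenate([hu, hl]).astype(np.float32)
```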

  9. Facial Expression Recognition Deficits and Faulty Learning: Implications for Theoretical Models and Clinical Applications

    Science.gov (United States)

    Sheaffer, Beverly L.; Golden, Jeannie A.; Averett, Paige

    2009-01-01

    The ability to recognize facial expressions of emotion is integral in social interaction. Although the importance of facial expression recognition is reflected in increased research interest as well as in popular culture, clinicians may know little about this topic. The purpose of this article is to discuss facial expression recognition literature…

  10. Recognition of facial and musical emotions in Parkinson's disease.

    Science.gov (United States)

    Saenz, A; Doé de Maindreville, A; Henry, A; de Labbey, S; Bakchine, S; Ehrlé, N

    2013-03-01

    Patients with amygdala lesions were found to be impaired in recognizing the emotion of fear both from faces and from music. In patients with Parkinson's disease (PD), impairment in the recognition of emotions from facial expressions has been reported for disgust, fear, sadness and anger, but no study had yet investigated this population for the recognition of emotions from both faces and music. The ability to recognize basic universal emotions (fear, happiness and sadness) from both faces and music was investigated in 24 medicated patients with PD and 24 healthy controls. The patient group was tested for language (verbal fluency tasks), memory (digit and spatial span), executive functions (Similarities and Picture Completion subtests of the WAIS III, Brixton and Stroop tests) and visual attention (Bells test), and completed self-assessment tests for anxiety and depression. Results showed that the PD group was significantly impaired in the recognition of both fear and sadness from facial expressions, whereas their performance in recognizing emotions from musical excerpts did not differ from that of the control group. The scores for fear and sadness recognition from faces correlated neither with scores in tests of executive and cognitive functions nor with scores on the self-assessment scales. We attributed the observed dissociation to the modality (visual vs. auditory) of presentation and to the ecological value of the musical stimuli that we used. We discuss the relevance of our findings for the care of patients with PD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.

  11. Comparing the Recognition of Emotional Facial Expressions in Patients with Obsessive-Compulsive Disorder and Major Depressive Disorder

    Directory of Open Access Journals (Sweden)

    Abdollah Ghasempour

    2014-05-01

    Background: Recognition of emotional facial expressions is one of the psychological factors involved in obsessive-compulsive disorder (OCD) and major depressive disorder (MDD). The aim of the present study was to compare the ability to recognize emotional facial expressions in patients with obsessive-compulsive disorder and major depressive disorder. Materials and Methods: The present study is a cross-sectional, ex-post facto investigation (causal-comparative method). Forty participants (20 patients with OCD, 20 patients with MDD) were selected through the available sampling method from clients referred to the Tabriz Bozorgmehr clinic. Data were collected through a Structured Clinical Interview and the Recognition of Emotional Facial States test. The data were analyzed using MANOVA. Results: The obtained results showed no significant difference between the groups in mean scores for recognizing the emotional states of surprise, sadness, happiness and fear, but the groups differed significantly in mean scores for recognizing disgust and anger (p<0.05). Conclusion: Patients suffering from OCD and MDD show equal ability to recognize surprise, sadness, happiness and fear. However, the former are less competent in recognizing disgust and anger than the latter.

  12. Brain regions involved in processing facial identity and expression are differentially selective for surface and edge information.

    Science.gov (United States)

    Harris, Richard J; Young, Andrew W; Andrews, Timothy J

    2014-08-15

    Although different brain regions are widely considered to be involved in the recognition of facial identity and expression, it remains unclear how these regions process different properties of the visual image. Here, we ask how surface-based reflectance information and edge-based shape cues contribute to the perception and neural representation of facial identity and expression. Contrast-reversal was used to generate images in which normal contrast relationships across the surface of the image were disrupted, but edge information was preserved. In a behavioural experiment, contrast-reversal significantly attenuated judgements of facial identity, but only had a marginal effect on judgements of expression. An fMR-adaptation paradigm was then used to ask how brain regions involved in the processing of identity and expression responded to blocks comprising all normal, all contrast-reversed, or a mixture of normal and contrast-reversed faces. Adaptation in the posterior superior temporal sulcus--a region directly linked with processing facial expression--was relatively unaffected by mixing normal with contrast-reversed faces. In contrast, the response of the fusiform face area--a region linked with processing facial identity--was significantly affected by contrast-reversal. These results offer a new perspective on the reasons underlying the neural segregation of facial identity and expression in which brain regions involved in processing invariant aspects of faces, such as identity, are very sensitive to surface-based cues, whereas regions involved in processing changes in faces, such as expression, are relatively dependent on edge-based cues.

  13. Privacy in the Face of Surveillance: Fourth Amendment Considerations for Facial Recognition Technology

    Science.gov (United States)

    2015-03-01

    [Extraction-damaged record: a Naval Postgraduate School master's thesis by Eric Z. Wynn (Monterey, March 2015) on Fourth Amendment considerations for facial recognition technology. A recoverable fragment lists the challenges still facing the technology five decades after its inception: "head rotation and tilt, lighting intensity and angle, facial expression, aging, etc."]

  14. Pain Recognition using Spatiotemporal Oriented Energy of Facial Muscles

    DEFF Research Database (Denmark)

    Irani, Ramin; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    Pain is a critical sign in many medical situations, and its automatic detection and recognition using computer vision techniques is of great importance. Utilizing the fact that pain is a spatiotemporal process, the proposed system in this paper employs steerable and separable filters to measure the energies released by the facial muscles during the pain process. The proposed system not only detects pain but also recognizes its level. Experimental results on the publicly available UNBC pain database show promising outcomes for automatic pain detection and recognition.
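
    As a rough illustration of the spatiotemporal-energy idea (the paper's actual steerable and separable filter bank is not given in the abstract), oriented energy can be approximated with Gaussian-derivative filters over a video volume; the function below and its parameters are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatiotemporal_energy(video, sigma=2.0):
    """Toy oriented-energy measure on a (frames, height, width) volume:
    squared Gaussian-derivative responses along the t, y and x axes, a crude
    stand-in for steerable/separable filters applied to facial regions."""
    video = video.astype(np.float64)
    energies = {}
    for name, order in (("t", (1, 0, 0)), ("y", (0, 1, 0)), ("x", (0, 0, 1))):
        resp = gaussian_filter(video, sigma=sigma, order=order)
        energies[name] = float(np.mean(resp ** 2))       # mean energy per axis
    return energies

# Example: energies = spatiotemporal_energy(np.random.rand(30, 64, 64))
```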

  15. Environmental Identity Development through Social Interactions, Action, and Recognition

    Science.gov (United States)

    Stapleton, Sarah Riggs

    2015-01-01

    This article uses sociocultural identity theory to explore how practice, action, and recognition can facilitate environmental identity development. Recognition, a construct not previously explored in environmental identity literature, is particularly examined. The study is based on a group of diverse teens who traveled to South Asia to participate…

  16. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition.

    Science.gov (United States)

    de Gelder, Beatrice; Huis In 't Veld, Elisabeth M J; Van den Stock, Jan

    2015-01-01

    There are many ways to assess face perception skills. In this study, we describe a novel task battery, the FEAST (Facial Expressive Action Stimulus Test), developed to test recognition of the identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data from a healthy sample of controls in two age groups for future users of the FEAST.

  17. Relation between facial affect recognition and configural face processing in antipsychotic-free schizophrenia.

    Science.gov (United States)

    Fakra, Eric; Jouve, Elisabeth; Guillaume, Fabrice; Azorin, Jean-Michel; Blin, Olivier

    2015-03-01

    Deficit in facial affect recognition is a well-documented impairment in schizophrenia, closely connected to social outcome. This deficit could be related to psychopathology, but also to a broader dysfunction in processing facial information. In addition, patients with schizophrenia inadequately use configural information, a type of processing that relies on spatial relationships between facial features. To date, no study has specifically examined the link between symptoms and misuse of configural information in the deficit in facial affect recognition. Unmedicated schizophrenia patients (n = 30) and matched healthy controls (n = 30) performed a facial affect recognition task and a face inversion task, which tests the aptitude to rely on configural information. In patients, regressions were carried out between facial affect recognition, symptom dimensions and inversion effect. Patients, compared with controls, showed a deficit in facial affect recognition and a lower inversion effect. Negative symptoms and a lower inversion effect could account for 41.2% of the variance in facial affect recognition. This study confirms the presence of a deficit in facial affect recognition, as well as dysfunctional use of configural information, in antipsychotic-free patients. Negative symptoms and poor processing of configural information explained a substantial part of the deficient recognition of facial affect. We speculate that this deficit may be caused by several factors, among which psychopathology and failure in correctly manipulating configural information stand independently. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  18. Real Time Facial Expression Recognition Using a Novel Method

    Directory of Open Access Journals (Sweden)

    Saumil Srivastava

    2012-04-01

    This paper discusses a novel method for a Facial Expression Recognition System which performs facial expression analysis in near real time from a live web-cam feed. The primary objectives were to obtain results in near real time in a light-invariant, person-independent and pose-invariant way. The system is composed of two different entities, a trainer and an evaluator. Each frame of the video feed is passed through a series of steps including Haar classifiers, skin detection, feature extraction and feature-point tracking, with a learned Support Vector Machine model classifying emotions to achieve a tradeoff between accuracy and result rate. A processing time of 100-120 ms per 10 frames was achieved with an accuracy of around 60%. We measure our accuracy in terms of a variety of interaction and classification scenarios. We conclude by discussing the relevance of our work to human-computer interaction and exploring further measures that can be taken.
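
    A minimal OpenCV/scikit-learn skeleton of such a webcam pipeline might look as follows; the 48x48 raw-pixel features and the dummy training data are stand-ins for the paper's tracked feature points and trained SVM model:

```python
import cv2
import numpy as np
from sklearn.svm import SVC

# Stand-in classifier: the paper trains an SVM on tracked facial feature
# points; here we fit dummy 48x48 raw-pixel data just so the sketch runs.
rng = np.random.default_rng(0)
clf = SVC(kernel="rbf").fit(rng.random((20, 48 * 48)), np.repeat([0, 1], 10))

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                      # live web-cam feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        label = clf.predict(crop.reshape(1, -1) / 255.0)[0]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, str(label), (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 1)
    cv2.imshow("expression", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```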

  19. Impaired Facial Expression Recognition in Children with Temporal Lobe Epilepsy: Impact of Early Seizure Onset on Fear Recognition

    Science.gov (United States)

    Golouboff, Nathalie; Fiori, Nicole; Delalande, Olivier; Fohlen, Martine; Dellatolas, Georges; Jambaque, Isabelle

    2008-01-01

    The amygdala has been implicated in the recognition of facial emotions, especially fearful expressions, in adults with early-onset right temporal lobe epilepsy (TLE). The present study investigates the recognition of facial emotions in children and adolescents, 8-16 years old, with epilepsy. Twenty-nine subjects had TLE (13 right, 16 left) and…

  1. Active AU Based Patch Weighting for Facial Expression Recognition

    Science.gov (United States)

    Xie, Weicheng; Shen, Linlin; Yang, Meng; Lai, Zhihui

    2017-01-01

    Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. The sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% for the JAFFE and Cohn–Kanade (CK+) databases, respectively. Better cross-database performance has also been observed. PMID:28146094

  3. Primary vision and facial emotion recognition in early Parkinson's disease.

    Science.gov (United States)

    Hipp, Géraldine; Diederich, Nico J; Pieria, Vannina; Vaillant, Michel

    2014-03-15

    In early stages of idiopathic Parkinson's disease (IPD), lower order vision (LOV) deficits including reduced colour and contrast discrimination have been consistently reported. Data are less conclusive concerning higher order vision (HOV) deficits, especially for facial emotion recognition (FER). However, a link between both visual levels has been hypothesized. To screen for both levels of visual impairment in early IPD, we prospectively recruited 28 IPD patients with a disease duration of 1.4 +/- 0.8 years and 25 healthy controls. LOV was evaluated by the Farnsworth-Munsell 100 Hue Test, Vis-Tech and the Pelli-Robson test. HOV was examined by the Ekman 60 Faces Test and part A of the Visual Object and Space Perception test. IPD patients performed worse than controls on almost all LOV tests. The most prominent difference was seen for contrast perception at the lowest spatial frequency (p=0.0002). Concerning FER, IPD patients showed reduced recognition of "sadness" (p=0.01). "Fear" perception correlated with perception of low contrast sensitivity in IPD patients within the lowest performance quartile. Controls showed a much stronger link between "fear" perception and low contrast detection. At the early IPD stage there are marked deficits of LOV performance, while HOV performance is still intact, with the exception of reduced recognition of "sadness". At this stage, IPD patients still seem to compensate for the deficient input of low contrast sensitivity, known to be pivotal for the appreciation of negative facial emotions and confirmed as such for healthy controls in this study. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  4. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions.

    Science.gov (United States)

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expression recognition task. Recognition bias was measured as participants' tendency to over-attribute the anger label to other negative facial expressions. Participants' heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants' performance was controlled for age, cognitive and educational levels and naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants' tendency to use the anger label was evident. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children's "pre-existing bias" for anger labeling in a forced-choice emotion recognition task. Moreover, they strengthen the thesis according to which the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim's perceptive and attentive focus on salient environmental social stimuli.

  5. Facial emotion recognition in bipolar disorder: a critical review.

    Science.gov (United States)

    Rocca, Cristiana Castanho de Almeida; Heuvel, Eveline van den; Caetano, Sheila C; Lafer, Beny

    2009-06-01

    A literature review of controlled studies from the last 18 years on emotion recognition deficits in bipolar disorder. A bibliographic search for controlled studies with samples larger than 10 participants, from 1990 to June 2008, was completed in Medline, Lilacs, PubMed and ISI. Thirty-two papers were evaluated. Euthymic bipolar disorder patients presented impairment in recognizing disgust and fear. Manic bipolar disorder patients showed difficulty recognizing fearful and sad faces. Pediatric bipolar disorder patients and children at risk presented impairment in their capacity to recognize emotions in adult and child faces. Bipolar disorder patients were more accurate in recognizing facial emotions than schizophrenic patients. Bipolar disorder patients present impaired recognition of disgust, fear and sadness that can be partially attributed to mood state. In mania, they have difficulty recognizing fear and disgust. Bipolar disorder patients were more accurate in recognizing emotions than depressive and schizophrenic patients. Bipolar disorder children present a tendency to misjudge extreme facial expressions as being moderate or mild in intensity. Affective and cognitive deficits in bipolar disorder vary according to mood state. Follow-up studies re-testing bipolar disorder patients after recovery are needed in order to investigate whether these abnormalities reflect a state or trait marker and can be considered an endophenotype. Future studies should aim at standardizing tasks and designs.

  6. Facial expression recognition and emotional regulation in narcolepsy with cataplexy.

    Science.gov (United States)

    Bayard, Sophie; Croisier Langenier, Muriel; Dauvilliers, Yves

    2013-04-01

    Cataplexy is pathognomonic of narcolepsy with cataplexy, and is defined by a transient loss of muscle tone triggered by strong emotions. Recent research suggests abnormal amygdala function in narcolepsy with cataplexy. Emotion processing and emotional regulation strategies are complex functions involving cortical and limbic structures, like the amygdala. As the amygdala has been shown to play a role in facial emotion recognition, we tested the hypothesis that patients with narcolepsy with cataplexy would have impaired recognition of facial emotional expressions compared with patients affected by central hypersomnia without cataplexy and healthy controls. We also aimed to determine whether cataplexy modulates emotional regulation strategies. Emotional intensity, arousal and valence ratings of Ekman faces displaying happiness, surprise, fear, anger, disgust, sadness and neutral expressions from 21 drug-free patients with narcolepsy with cataplexy were compared with those of 23 drug-free sex-, age- and intellectual-level-matched adult patients with hypersomnia without cataplexy and 21 healthy controls. All participants underwent polysomnography recording and multiple sleep latency tests, and completed depression, anxiety and emotional regulation questionnaires. The performance of patients with narcolepsy with cataplexy did not differ from that of patients with hypersomnia without cataplexy or healthy controls, either in intensity ratings of each emotion on its prototypical label or in mean ratings for valence and arousal. Moreover, patients with narcolepsy with cataplexy did not use different emotional regulation strategies. The level of depressive and anxious symptoms in narcolepsy with cataplexy did not differ from the other groups. Our results demonstrate that patients with narcolepsy with cataplexy accurately perceive and discriminate facial emotions, and regulate their emotions normally. The absence of alteration of perceived affective valence remains of major clinical interest in narcolepsy with cataplexy.

  7. Facial expression recognition takes longer in the posterior superior temporal sulcus than in the occipital face area.

    Science.gov (United States)

    Pitcher, David

    2014-07-02

    Neuroimaging studies have identified a face-selective region in the right posterior superior temporal sulcus (rpSTS) that responds more strongly during facial expression recognition tasks than during facial identity recognition tasks, but precisely when the rpSTS begins to causally contribute to expression recognition is unclear. The present study addressed this issue using transcranial magnetic stimulation (TMS). In Experiment 1, repetitive TMS delivered over the rpSTS of human participants, at a frequency of 10 Hz for 500 ms, selectively impaired a facial expression task but had no effect on a matched facial identity task. In Experiment 2, participants performed the expression task only while double-pulse TMS (dTMS) was delivered over the rpSTS or over the right occipital face area (rOFA), a face-selective region in lateral occipital cortex, at different latencies up to 210 ms after stimulus onset. Task performance was selectively impaired when dTMS was delivered over the rpSTS at 60-100 ms and 100-140 ms. dTMS delivered over the rOFA impaired task performance at 60-100 ms only. These results demonstrate that the rpSTS causally contributes to expression recognition and that it does so over a longer time-scale than the rOFA. This difference in the length of the TMS induced impairment between the rpSTS and the rOFA suggests that the neural computations that contribute to facial expression recognition in each region are functionally distinct.

  8. A Modified Sparse Representation Method for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-01-01

    In this paper, we investigate a facial expression recognition method based on a modified sparse representation recognition (MSRR) method. In the first stage, Haar-like features combined with LPP are used to extract features and reduce dimensionality. In the second stage, the LC-K-SVD (Label Consistent K-SVD) method is adopted to train the dictionary, instead of taking the dictionary directly from samples, and block dictionary training is added to the training process. In the third stage, StOMP (stagewise orthogonal matching pursuit) is used to speed up the convergence of OMP (orthogonal matching pursuit). Besides, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise on a self-built database and on Japan's JAFFE and CMU's CK databases. Further, we compare this sparse method with classic SVM and RVM and analyze the recognition effect and time efficiency. Simulation results have shown that the coefficients of the MSRR method contain classifying information, which is capable of improving computing speed and achieving satisfying recognition results.
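
    The LC-K-SVD dictionary learning and StOMP solver are not reproduced here, but the core sparse-representation classification step can be sketched with scikit-learn's plain OMP solver; the residual-based decision rule below follows the standard SRC recipe, and the names are our own:

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(D, atom_labels, y, n_nonzero=10):
    """Sparse-representation classification: code the test sample `y` over
    the dictionary `D` (columns = training atoms), then assign the class
    whose atoms alone give the smallest reconstruction residual."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(D, y)                              # solves y ~ D @ x, x sparse
    x = omp.coef_
    atom_labels = np.asarray(atom_labels)
    best_class, best_res = None, np.inf
    for c in np.unique(atom_labels):
        xc = np.where(atom_labels == c, x, 0.0)   # keep only class-c atoms
        res = np.linalg.norm(y - D @ xc)
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```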

  9. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
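
    The NNLS classifier follows the same class-wise residual rule as sparse-representation classification, but constrains the code to be non-negative. A minimal sketch using SciPy's nnls (naming and structure assumed, not taken from the paper):

```python
import numpy as np
from scipy.optimize import nnls

def nnls_classify(D, atom_labels, y):
    """Solve min ||D @ x - y|| subject to x >= 0, then classify `y` by the
    class whose non-negative coefficients reconstruct it best."""
    x, _ = nnls(D, y)                          # non-negative code for y
    atom_labels = np.asarray(atom_labels)
    residuals = {c: np.linalg.norm(y - D @ np.where(atom_labels == c, x, 0.0))
                 for c in np.unique(atom_labels)}
    return min(residuals, key=residuals.get)
```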

  10. The Differential Effects of Thalamus and Basal Ganglia on Facial Emotion Recognition

    Science.gov (United States)

    Cheung, Crystal C. Y.; Lee, Tatia M. C.; Yip, James T. H.; King, Kristin E.; Li, Leonard S. W.

    2006-01-01

    This study examined if subcortical stroke was associated with impaired facial emotion recognition. Furthermore, the lateralization of the impairment and the differential profiles of facial emotion recognition deficits with localized thalamic or basal ganglia damage were also studied. Thirty-eight patients with subcortical strokes and 19 matched…

  11. Fearful faces in schizophrenia - The relationship between patient characteristics and facial affect recognition

    NARCIS (Netherlands)

    van't Wout, Mascha; van Dijke, Annemiek; Aleman, Andre; Kessels, Roy P. C.; Pijpers, Wietske; Kahn, Rene S.

    2007-01-01

    Although schizophrenia has often been associated with deficits in facial affect recognition, it is debated whether the recognition of specific emotions is affected and if these facial affect-processing deficits are related to symptomatology or other patient characteristics. The purpose of the presen

  13. Borrowed beauty? Understanding identity in Asian facial cosmetic surgery.

    Science.gov (United States)

    Aquino, Yves Saint James; Steinkamp, Norbert

    2016-09-01

    This review aims to identify (1) sources of knowledge and (2) important themes of the ethical debate related to surgical alteration of facial features in East Asians. This article integrates narrative and systematic review methods. In March 2014, we searched databases including PubMed, Philosopher's Index, Web of Science, Sociological Abstracts, and Communication Abstracts using key terms "cosmetic surgery," "ethnic*," "ethics," "Asia*," and "Western*." The study included all types of papers written in English that discuss the debate on rhinoplasty and blepharoplasty in East Asians. No limit was put on date of publication. Combining both narrative and systematic review methods, a total of 31 articles were critically appraised on their contribution to ethical reflection founded on the debates regarding the surgical alteration of Asian features. Sources of knowledge were drawn from four main disciplines, including the humanities, medicine or surgery, communications, and economics. Focusing on cosmetic surgery perceived as a westernising practice, the key debate themes included authenticity of identity, interpersonal relationships and socio-economic utility in the context of Asian culture. The study shows how cosmetic surgery of ethnic features plays an important role in understanding female identity in the Asian context. Based on the debate themes authenticity of identity, interpersonal relationships, and socio-economic utility, this article argues that identity should be understood as less individualistic and more as relational and transformational in the Asian context. In addition, this article also proposes to consider cosmetic surgery of Asian features as an interplay of cultural imperialism and cultural nationalism, which can both be a source of social pressure to modify one's appearance.

  14. Recognition of static and dynamic facial expressions: Influences of sex, type and intensity of emotion

    OpenAIRE

    2013-01-01

    Ecological validity of static and intense facial expressions in emotional recognition has been questioned. Recent studies have recommended the use of facial stimuli more compatible to the natural conditions of social interaction, which involves motion and variations in emotional intensity. In this study, we compared the recognition of static and dynamic facial expressions of happiness, fear, anger and sadness, presented in four emotional intensities (25 %, 50 %, 75 % and 100 %). Twenty volunt...

  15. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    Science.gov (United States)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland and border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality and non-uniform illumination, as well as variations in poses and facial expressions, can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the human visual system and based on a so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, couples the logarithmical image visualization technique with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database and the AT&T database are used for computer simulation of accuracy and efficiency. The extensive computer simulation demonstrates the method's efficiency, accuracy, and robustness to illumination variance for facial recognition.
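
    The paper's HVS-based visualization transform is not detailed in the abstract; a simple logarithmic remapping followed by uniform LBP (via scikit-image) conveys the illumination-flattening idea. Names and parameters below are illustrative:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def log_lbp_features(gray, P=8, R=1):
    """Compress dynamic range with a logarithmic mapping (a stand-in for the
    paper's HVS-inspired visualization step), then describe texture with a
    uniform-LBP histogram, which is largely insensitive to illumination."""
    gray = gray.astype(np.float64)
    log_img = np.log1p(gray) / np.log1p(gray.max())      # remap to [0, 1]
    codes = local_binary_pattern(log_img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist
```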

  16. ALTERED KINEMATICS OF FACIAL EMOTION EXPRESSION AND EMOTION RECOGNITION DEFICITS ARE UNRELATED IN PARKINSON'S DISEASE

    Directory of Open Access Journals (Sweden)

    Matteo Bologna

    2016-12-01

    Background: Altered emotional processing, including reduced emotion facial expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques. It is not known whether altered facial expression and recognition in PD are related. Objective: To investigate possible deficits in facial emotion expression and emotion recognition and their relationship, if any, in patients with PD. Methods: Eighteen patients with PD and 16 healthy controls were enrolled in the study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analysed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. Results: The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Patients also had lower Ekman global scores and disgust, sadness and fear sub-scores than healthy controls (all Ps < 0.05). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between kinematic variables of facial emotion expression, the Ekman test scores and clinical and demographic data in patients (all Ps > 0.05). Conclusion: The present results provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.

  17. Neurobiological mechanisms associated with facial affect recognition deficits after traumatic brain injury.

    Science.gov (United States)

    Neumann, Dawn; McDonald, Brenna C; West, John; Keiski, Michelle A; Wang, Yang

    2016-06-01

    The neurobiological mechanisms that underlie facial affect recognition deficits after traumatic brain injury (TBI) have not yet been identified. Using functional magnetic resonance imaging (fMRI), the study aims were to 1) determine whether there are differences in brain activation during facial affect processing in people with TBI who have facial affect recognition impairments (TBI-I) relative to people with TBI and healthy controls who do not have facial affect recognition impairments (TBI-N and HC, respectively); and 2) identify relationships between neural activity and facial affect recognition performance. A facial affect recognition screening task performed outside the scanner was used to determine group classification; TBI patients who performed more than one standard deviation below normal performance scores were classified as TBI-I, while TBI patients with normal scores were classified as TBI-N. An fMRI facial recognition paradigm was then performed within the 3T environment. Results from 35 participants are reported (TBI-I = 11, TBI-N = 12, and HC = 12). For the fMRI task, the TBI-I and TBI-N groups scored significantly lower than the HC group. Blood oxygenation level-dependent (BOLD) signals for facial affect recognition, compared to a baseline condition of viewing a scrambled face, revealed lower neural activation in the right fusiform gyrus (FG) in the TBI-I group than in the HC group. Right fusiform gyrus activity correlated with accuracy on the facial affect recognition tasks (both within and outside the scanner). Decreased FG activity suggests that facial affect recognition deficits after TBI may be the result of impaired holistic face processing. Future directions and clinical implications are discussed.

  18. Ventromedial prefrontal cortex mediates visual attention during facial emotion recognition.

    Science.gov (United States)

    Wolf, Richard C; Philippi, Carissa L; Motzkin, Julian C; Baskaya, Mustafa K; Koenigs, Michael

    2014-06-01

    The ventromedial prefrontal cortex is known to play a crucial role in regulating human social and emotional behaviour, yet the precise mechanisms by which it subserves this broad function remain unclear. Whereas previous neuropsychological studies have largely focused on the role of the ventromedial prefrontal cortex in higher-order deliberative processes related to valuation and decision-making, here we test whether the ventromedial prefrontal cortex may also be critical for more basic aspects of orienting attention to socially and emotionally meaningful stimuli. Using eye tracking during a test of facial emotion recognition in a sample of lesion patients, we show that bilateral ventromedial prefrontal cortex damage impairs visual attention to the eye regions of faces, particularly for fearful faces. This finding demonstrates a heretofore unrecognized function of the ventromedial prefrontal cortex: the basic attentional process of controlling eye movements to faces expressing emotion.

  19. Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.

    Science.gov (United States)

    Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S

    2007-01-01

    People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated assessed muscles, whereas the chew manipulation activated muscles only intermittently. Further, expressing happiness generated most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.

  20. Dopamine and light: effects on facial emotion recognition.

    Science.gov (United States)

    Cawley, Elizabeth; Tippler, Maria; Coupland, Nicholas J; Benkelfat, Chawki; Boivin, Diane B; Aan Het Rot, Marije; Leyton, Marco

    2017-06-01

    Bright light can affect mood states and social behaviours. Here, we tested potential interacting effects of light and dopamine on facial emotion recognition. Participants were 32 women with subsyndromal seasonal affective disorder tested in either a bright (3000 lux) or dim light (10 lux) environment. Each participant completed two test days, one following the ingestion of a phenylalanine/tyrosine-deficient mixture and one with a nutritionally balanced control mixture, both administered double blind in a randomised order. Approximately four hours post-ingestion participants completed a self-report measure of mood followed by a facial emotion recognition task. All testing took place between November and March when seasonal symptoms would be present. Following acute phenylalanine/tyrosine depletion (APTD), compared to the nutritionally balanced control mixture, participants in the dim light condition were more accurate at recognising sad faces, less likely to misclassify them, and faster at responding to them, effects that were independent of changes in mood. Effects of APTD on responses to sad faces in the bright light group were less consistent. There were no APTD effects on responses to other emotions, with one exception: a significant light × mixture interaction was seen for the reaction time to fear, but the pattern of effect was not predicted a priori or seen on other measures. Together, the results suggest that the processing of sad emotional stimuli might be greater when dopamine transmission is low. Bright light exposure, used for the treatment of both seasonal and non-seasonal mood disorders, might produce some of its benefits by preventing this effect.

  1. Facial emotion recognition is inversely correlated with tremor severity in essential tremor.

    Science.gov (United States)

    Auzou, Nicolas; Foubert-Samier, Alexandra; Dupouy, Sandrine; Meissner, Wassilios G

    2014-04-01

    We here assess limbic and orbitofrontal control in 20 patients with essential tremor (ET) and 18 age-matched healthy controls using the Ekman Facial Emotion Recognition Task and the IOWA Gambling Task. Our results show an inverse relation between facial emotion recognition and tremor severity. ET patients also showed worse performance in joy and fear recognition, as well as subtle abnormalities in risk detection, but these differences did not reach significance after correction for multiple testing.

  2. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    Science.gov (United States)

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smiles), despite variability among individuals as well as in face appearance, is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting of visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
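
    The paper's exact convolutional architecture is not given in the abstract; the PyTorch sketch below is a generic stand-in showing the face-crop-to-expression-logits shape of such a model (layer sizes, input resolution and class count are assumptions):

```python
import torch
import torch.nn as nn

class TinyExpressionNet(nn.Module):
    """Minimal convolutional net mapping a grayscale face crop to expression
    logits; a generic sketch, not the paper's architecture."""
    def __init__(self, n_classes=2):           # e.g. smile vs. non-smile
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 12 * 12, n_classes)

    def forward(self, x):                      # x: (batch, 1, 48, 48)
        return self.head(self.features(x).flatten(1))

logits = TinyExpressionNet()(torch.randn(4, 1, 48, 48))   # smoke test
```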

  3. INTEGRATED EXPRESSIONAL AND COLOR INVARIANT FACIAL RECOGNITION SCHEME FOR HUMAN BIOMETRIC SYSTEM

    Directory of Open Access Journals (Sweden)

    M.Punithavalli

    2013-09-01

    In many practical applications like biometrics, video surveillance and human-computer interaction, face recognition plays a major role. Previous works focused on recognizing and enhancing biometric systems based on the facial components of the system. In this work, we build an Integrated Expressional and Color Invariant Facial Recognition scheme for human biometric recognition, suited to public participation areas with different security provisioning. First, the features of the face are identified and processed using a Bayes classifier with RGB and HSV color bands. Second, psychological emotional variances are identified and linked with the respective human facial expression based on the facial action coding system. Finally, an integrated expressional and color invariant facial recognition is proposed for varied conditions of illumination, pose, transformation, etc. These conditions on the color invariant model are suited to an easy and more efficient biometric recognition system in the public domain and in highly confidential security zones. The integration is derived from genetic operations on the color and expression components of the facial feature system. Experimental evaluation is planned with public face databases (DBs) such as CMU-PIE, Color FERET, XM2VTSDB, SCface, and FRGC 2.0 to estimate the performance of the proposed integrated expressional facial and color invariant recognition scheme [IEFCIRS]. Performance evaluation is done based on constraints like recognition rate, security and evaluation time.
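
    A toy version of the scheme's first stage (colour features from both RGB and HSV bands feeding a Bayes classifier) could look like the following; histogram bin counts and function names are our assumptions:

```python
import cv2
import numpy as np
from sklearn.naive_bayes import GaussianNB

def rgb_hsv_features(bgr_face):
    """Concatenate coarse per-channel histograms from the BGR and HSV colour
    bands of one face crop, mimicking the scheme's dual-band colour input."""
    hsv = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2HSV)
    feats = []
    for image in (bgr_face, hsv):
        for channel in cv2.split(image):
            h, _ = np.histogram(channel, bins=16, range=(0, 256))
            feats.append(h / max(h.sum(), 1))            # normalised histogram
    return np.concatenate(feats)

# Offline training step (illustrative):
# clf = GaussianNB().fit(np.stack([rgb_hsv_features(f) for f in faces]), ids)
```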

  4. [Association between intelligence development and facial expression recognition ability in children with autism spectrum disorder].

    Science.gov (United States)

    Pan, Ning; Wu, Gui-Hua; Zhang, Ling; Zhao, Ya-Fen; Guan, Han; Xu, Cai-Juan; Jing, Jin; Jin, Yu

    2017-03-01

    To investigate the features of intelligence development and facial expression recognition ability, and the association between them, in children with autism spectrum disorder (ASD). A total of 27 ASD children aged 6-16 years (ASD group, full intelligence quotient >70) and age- and gender-matched normally developed children (control group) were enrolled. The Wechsler Intelligence Scale for Children, Fourth Edition, and Chinese Static Facial Expression Photos were used for intelligence evaluation and the facial expression recognition test. Compared with the control group, the ASD group had significantly lower scores for full intelligence quotient, verbal comprehension index, perceptual reasoning index (PRI), processing speed index (PSI), and working memory index (WMI) (P<0.05). ASD children have delayed intelligence development compared with normally developed children and impaired expression recognition ability. Perceptual reasoning and working memory abilities are positively correlated with expression recognition ability, which suggests that insufficient perceptual reasoning and working memory abilities may be important factors affecting facial expression recognition ability in ASD children.

  5. [Recognition of facial emotions and theory of mind in schizophrenia: could the theory of mind deficit be due to the non-recognition of facial emotions?].

    Science.gov (United States)

    Besche-Richard, C; Bourrin-Tisseron, A; Olivier, M; Cuervo-Lombard, C-V; Limosin, F

    2012-06-01

    The deficits in recognition of facial emotions and attribution of mental states are now well documented in schizophrenic patients. However, the link between these two complex cognitive functions is not clearly understood, especially in schizophrenia. In this study, we attempted to test the link between the recognition of facial emotions and the capacity for mentalization, notably the attribution of beliefs, in healthy and schizophrenic participants. We hypothesized that the level of performance in recognition of facial emotions, compared to working memory and executive functioning, was the best predictor of the capacity to attribute a belief. Twenty schizophrenic participants according to DSM-IV-TR (mean age: 35.9 years, S.D. 9.07; mean education level: 11.15 years, S.D. 2.58), clinically stabilized and receiving neuroleptic or antipsychotic medication, participated in the study. They were matched on age (mean age: 36.3 years, S.D. 10.9) and educational level (mean educational level: 12.10 years, S.D. 2.25) with 30 matched healthy participants. All participants were evaluated with a pool of tasks testing the recognition of facial emotions (the faces of Baron-Cohen), the attribution of beliefs (two first-order and two second-order stories), working memory (the digit span of the WAIS-III and the Corsi test) and executive functioning (Trail Making Test A and B, Wisconsin Card Sorting Test brief version). Comparing schizophrenic and healthy participants, our results confirmed a difference between the performances in recognition of facial emotions and those in attribution of beliefs. The result of the simple linear regression showed that recognition of facial emotions, compared to performance in working memory and executive functioning, was the best predictor of performance on the theory of mind stories. Our results confirmed, in a sample of schizophrenic patients, the deficits in the recognition of facial emotions and in the

  6. Facial Emotion Recognition by Persons with Mental Retardation: A Review of the Experimental Literature.

    Science.gov (United States)

    Rojahn, Johannes; And Others

    1995-01-01

    This literature review discusses 21 studies on facial emotion recognition by persons with mental retardation in terms of methodological characteristics, stimulus material, salient variables and their relation to recognition tasks, and emotion recognition deficits in mental retardation. A table provides comparative data on all 21 studies. (DB)

  7. Media identities and media-influenced identifications: Visibility and identity recognition in the media

    Directory of Open Access Journals (Sweden)

    Víctor Fco. Sampedro Blanco

    2004-10-01

    The media establish, in large part, the patterns of visibility and public recognition of collective identities. We define media identities as those that are the object of production and diffusion by the media. From this discourse, communities and individuals elaborate media-influenced identifications, that is, processes of recognition or banishment, (re)articulating the identity markers that the media offer with other cognitive and emotional sources. The generation and appropriation of identities are subject to a media hierarchisation that influences their normalisation or marginalisation. The identities presented by the media and assumed by the audience as part of the official, hegemonic discourse are normalised, whereas the identities and identifications formulated in popular and minority terms are marginalised. After presenting this conceptual and analytical framework, this study attempts to outline the logics that condition the presentation, on the one hand, and the public recognition, on the other, of contemporary identities.

  8. Facial Emotion and Identity Processing Development in 5- to 15-Year-Old Children

    OpenAIRE

    Johnston, Patrick J.; Kaufman, Jordy; Bajic, Julie; Sercombe, Alicia; Michie, Patricia T.; Karayanidis, Frini

    2011-01-01

    Most developmental studies of emotional face processing to date have focused on infants and very young children. Additionally, studies that examine emotional face processing in older children do not distinguish development in emotion and identity face processing from more generic age-related cognitive improvement. In this study, we developed a paradigm that measures processing of facial expression in comparison to facial identity and complex visual stimuli. The three matching tasks were devel...

  10. Facial emotion recognition in myotonic dystrophy type 1 correlates with CTG repeat expansion

    Directory of Open Access Journals (Sweden)

    Stefan Winblad

    2009-04-01

    We investigated the ability of patients with myotonic dystrophy type 1 (DM-1) to recognise basic facial emotions. We also explored the relationship between facial emotion recognition, neuropsychological data, personality, and CTG repeat expansion data in the DM-1 group. In total, 50 patients with DM-1 (28 women and 22 men) participated, along with 41 healthy controls. Recognition of facial emotional expressions was assessed using photographs of basic emotions. A set of tests measured cognition and personality dimensions, and CTG repeat size was quantified in blood lymphocytes. Patients with DM-1 showed impaired recognition of facial emotions compared with controls. A significant negative correlation was found between the total score for emotion recognition in a forced-choice task and CTG repeat size. Furthermore, specific cognitive functions (vocabulary, visuospatial construction ability, and speed) and personality dimensions (reward dependence and cooperativeness) correlated with scores on the forced-choice emotion recognition task. These findings revealed a CTG repeat-dependent facial emotion recognition deficit in the DM-1 group, which was associated with specific neuropsychological functions. Furthermore, a correlation was found between facial emotion recognition ability and personality dimensions associated with sociability. This adds a new clinically relevant dimension to the cognitive deficits associated with DM-1.

  11. Facial emotion recognition impairments are associated with brain volume abnormalities in individuals with HIV.

    Science.gov (United States)

    Clark, Uraina S; Walker, Keenan A; Cohen, Ronald A; Devlin, Kathryn N; Folkers, Anna M; Pina, Matthew J; Tashima, Karen T

    2015-04-01

    Impaired facial emotion recognition abilities in HIV+ patients are well documented, but little is known about the neural etiology of these difficulties. We examined the relation of facial emotion recognition abilities to regional brain volumes in 44 HIV-positive (HIV+) and 44 HIV-negative control (HC) adults. Volumes of structures implicated in HIV-associated neuropathology and emotion recognition were measured on MRI using an automated segmentation tool. Relative to HC, HIV+ patients demonstrated emotion recognition impairments for fearful expressions, reduced anterior cingulate cortex (ACC) volumes, and increased amygdala volumes. In the HIV+ group, fear recognition impairments correlated significantly with ACC, but not amygdala volumes. ACC reductions were also associated with lower nadir CD4 levels (i.e., greater HIV-disease severity). These findings extend our understanding of the neurobiological substrates underlying an essential social function, facial emotion recognition, in HIV+ individuals and implicate HIV-related ACC atrophy in the impairment of these abilities.

  13. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    Science.gov (United States)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we consider the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy of affective user models. Specifically, we present and discuss the development and evaluation of two corresponding affect recognition subsystems, with emphasis on the recognition of six basic emotional states, namely happiness, sadness, surprise, anger, and disgust, as well as the emotionless state, which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information, and that the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.
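
    The record does not spell out the fusion rule used; as a minimal sketch of the idea of combining the two modalities, the example below fuses the per-class probabilities of a hypothetical visual-facial classifier and a hypothetical keyboard-stroke classifier. The function name, weight, and probability values are illustrative assumptions, not values from the paper:

        import numpy as np

        def fuse_modalities(p_face, p_keys, w_face=0.6):
            """Weighted late fusion of two per-class probability vectors
            over the six states (happiness, sadness, surprise, anger,
            disgust, neutral). The weight w_face is a free parameter."""
            p = w_face * np.asarray(p_face) + (1.0 - w_face) * np.asarray(p_keys)
            return p / p.sum()

        # Hypothetical subsystem outputs for one observation:
        p_face = [0.30, 0.10, 0.25, 0.15, 0.10, 0.10]   # visual-facial subsystem
        p_keys = [0.55, 0.05, 0.10, 0.10, 0.05, 0.15]   # keyboard-stroke subsystem
        states = ["happiness", "sadness", "surprise", "anger", "disgust", "neutral"]
        print(states[int(np.argmax(fuse_modalities(p_face, p_keys)))])  # happiness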

  14. Altered Kinematics of Facial Emotion Expression and Emotion Recognition Deficits Are Unrelated in Parkinson's Disease.

    Science.gov (United States)

    Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo

    2016-01-01

    Altered emotional processing, including reduced facial emotion expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques, and it is not known whether altered facial expression and recognition in PD are related. We investigated possible deficits in facial emotion expression and emotion recognition, and their relationship, if any, in patients with PD. Eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Patients also had a lower Ekman global score and lower disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.05). Altered facial expression kinematics and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). The results of this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.

  15. Close Range Photogrammetry and Neural Network for Facial Recognition

    Directory of Open Access Journals (Sweden)

    Rami Al-Ruzouq

    2012-01-01

    Full Text Available Recently, there has been increasing interest in utilizing imagery in fields such as archaeology, architecture, mechanical inspection, and biometrics, where the face is considered one of the most important physiological identifiers; its shape and geometry are used for identification and verification of a person's identity. In this study, close-range photogrammetry with overlapping photographs was used to create a three-dimensional model of the human face, from which the coordinates of selected object points were extracted and used to calculate five different geometric quantities serving as biometric signatures for uniquely recognizing humans. A probabilistic neural network, with its remarkable ability to derive meaning from complicated or imprecise data, then uses the extracted geometric quantities to find patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. Quantifiable dimensions based on geometric attributes rather than radiometric characteristics were successfully extracted using close-range photogrammetry. The probabilistic neural network (PNN), a member of the radial basis network family, was used to classify these geometric parameters for face recognition; the designed recognition method is not affected by facial gesture or colour and has lower cost compared with other techniques. The method is reliable and flexible with respect to the level of detail describing the human surface. Experimental results using real data proved the feasibility and quality of the suggested approach.
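
    As a rough illustration of the classification stage, the following sketch implements a textbook probabilistic neural network (a Gaussian Parzen-window classifier) over geometric feature vectors. The feature dimensionality, smoothing parameter, and data are invented for the example and do not come from the study:

        import numpy as np

        def pnn_predict(X_train, y_train, x, sigma=0.5):
            """Score each class by the mean Gaussian kernel response of its
            training exemplars; return the class with the highest activation."""
            scores = {}
            for c in np.unique(y_train):
                Xc = X_train[y_train == c]
                d2 = np.sum((Xc - x) ** 2, axis=1)
                scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
            return max(scores, key=scores.get)

        # Hypothetical 5-D geometric feature vectors (e.g., ratios of
        # distances between facial landmarks) for two enrolled identities.
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (10, 5)), rng.normal(3, 1, (10, 5))])
        y = np.array([0] * 10 + [1] * 10)
        print(pnn_predict(X, y, X[3]))  # -> 0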

  16. Fully Automatic Recognition of the Temporal Phases of Facial Actions

    NARCIS (Netherlands)

    Valstar, M.F.; Pantic, Maja

    Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)

  17. A small-world network model of facial emotion recognition.

    Science.gov (United States)

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks--one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
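
    The record's network construction can be illustrated with a short, hedged sketch: build a graph whose edges link sufficiently similar emotion images, then compute the two standard small-world ingredients (average shortest path length and clustering). The similarity matrix and threshold below are random stand-ins for the participants' ratings:

        import numpy as np
        import networkx as nx

        n = 81                                   # 6 prototypes + 75 morphs
        rng = np.random.default_rng(1)
        S = rng.random((n, n))                   # stand-in similarity ratings
        S = (S + S.T) / 2                        # make the matrix symmetric

        # Link pairs whose similarity clears an (arbitrary) threshold.
        edges = [(i, j) for i in range(n)
                 for j in range(i + 1, n) if S[i, j] > 0.8]
        G = nx.Graph(edges)
        G.add_nodes_from(range(n))               # keep any isolated images

        if nx.is_connected(G):
            print("average path length:", nx.average_shortest_path_length(G))
        print("average clustering:", nx.average_clustering(G))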

  18. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    Science.gov (United States)

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  19. Assessing the Utility of a Virtual Environment for Enhancing Facial Affect Recognition in Adolescents with Autism

    Science.gov (United States)

    Bekele, Esubalew; Crittendon, Julie; Zheng, Zhi; Swanson, Amy; Weitlauf, Amy; Warren, Zachary; Sarkar, Nilanjan

    2014-01-01

    Teenagers with autism spectrum disorder (ASD) and age-matched controls participated in a dynamic facial affect recognition task within a virtual reality (VR) environment. Participants identified the emotion of a facial expression displayed at varied levels of intensity by a computer generated avatar. The system assessed performance (i.e.,…

  20. The Relation of Facial Affect Recognition and Empathy to Delinquency in Youth Offenders

    Science.gov (United States)

    Carr, Mary B.; Lutjemeier, John A.

    2005-01-01

    Associations among facial affect recognition, empathy, and self-reported delinquency were studied in a sample of 29 male youth offenders at a probation placement facility. Youth offenders were asked to recognize facial expressions of emotions from adult faces, child faces, and cartoon faces. Youth offenders also responded to a series of statements…

  1. Facial Action Unit Recognition using Temporal Templates and Particle Filtering with Factorized Likelihoods

    NARCIS (Netherlands)

    Valstar, Michel; Pantic, Maja; Patras, Ioannis

    2005-01-01

    Automatic recognition of human facial expressions is a challenging problem with many applications in human-computer interaction. Most of the existing facial expression analyzers succeed only in recognizing a few basic emotions, such as anger or happiness. In contrast, the system we wish to demonstra

  2. Shared Gaussian Process Latent Variable Model for Multi-view Facial Expression Recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2013-01-01

    Facial-expression data often appear in multiple views either due to head-movements or the camera position. Existing methods for multi-view facial expression recognition perform classification of the target expressions either by using classifiers learned separately for each view or by using a single

  3. Multi-output Laplacian Dynamic Ordinal Regression for Facial Expression Recognition and Intensity Estimation

    NARCIS (Netherlands)

    Rudovic, Ognjen; Pavlovic, Vladimir; Pantic, Maja

    2012-01-01

    Automated facial expression recognition has received increased attention over the past two decades. Existing works in the field usually do not encode either the temporal evolution or the intensity of the observed facial displays. They also fail to jointly model multidimensional (multi-class) continu

  4. Spatiotemporal Analysis of RGB-D-T Facial Images for Multimodal Pain Level Recognition

    DEFF Research Database (Denmark)

    Irani, Ramin; Nasrollahi, Kamal; Oliu Simon, Marc

    2015-01-01

    facial images for pain detection and pain intensity level recognition. For this purpose, we extract energies released by facial pixels using a spatiotemporal filter. Experiments on a group of 12 elderly people applying the multimodal approach show that the proposed method successfully detects pain...
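
    The record is truncated and does not give the filter's exact form; as a loose stand-in for the idea of "energies released by facial pixels", the sketch below accumulates squared temporal differences over a clip (the clip shape and data are invented):

        import numpy as np

        def pixel_energy(frames):
            """Crude spatiotemporal energy: per-pixel sum of squared
            frame-to-frame differences over a facial video clip."""
            frames = np.asarray(frames, dtype=float)
            return np.sum(np.diff(frames, axis=0) ** 2, axis=0)

        clip = np.random.rand(30, 64, 64)   # hypothetical 30-frame grayscale clip
        E = pixel_energy(clip)
        print(E.shape, float(E.mean()))     # (64, 64) and the mean energy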

  5. Predicting the Accuracy of Facial Affect Recognition: The Interaction of Child Maltreatment and Intellectual Functioning

    Science.gov (United States)

    Shenk, Chad E.; Putnam, Frank W.; Noll, Jennie G.

    2013-01-01

    Previous research demonstrates that both child maltreatment and intellectual performance contribute uniquely to the accurate identification of facial affect by children and adolescents. The purpose of this study was to extend this research by examining whether child maltreatment affects the accuracy of facial recognition differently at varying…

  8. Discriminative shared Gaussian processes for multi-view and view-invariant facial expression recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2015-01-01

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers

  10. The Change in Facial Emotion Recognition Ability in Inpatients with Treatment Resistant Schizophrenia After Electroconvulsive Therapy.

    Science.gov (United States)

    Dalkıran, Mihriban; Tasdemir, Akif; Salihoglu, Tamer; Emul, Murat; Duran, Alaattin; Ugur, Mufit; Yavuz, Ruhi

    2017-09-01

    People with schizophrenia have impairments in emotion recognition along with other social cognitive deficits. In the current study, we aimed to investigate the immediate benefits of ECT on facial emotion recognition ability. Thirty-two treatment-resistant patients with schizophrenia for whom ECT was indicated were enrolled in the study. Facial emotion stimuli were a set of 56 photographs that depicted seven basic emotions: sadness, anger, happiness, disgust, surprise, fear, and neutral faces. The average age of the participants was 33.4 ± 10.5 years. The rate of recognizing the disgusted facial expression increased significantly after ECT (p < 0.05), while no significant changes were found for the rest of the facial expressions (p > 0.05). After ECT, response times to the fearful and happy facial expressions were significantly shorter (p < 0.05). Facial emotion recognition is an important social cognitive skill for social harmony, proper relationships, and independent living. At the least, ECT sessions do not seem to affect facial emotion recognition ability negatively, and they appear to improve identification of the disgusted facial emotion, which is related to dopamine-enriched regions of the brain.

  13. Facial emotion recognition in psychiatrists and influences of their therapeutic identification on that ability.

    Science.gov (United States)

    Dalkıran, Mihriban; Gultekin, Gozde; Yuksek, Erhan; Varsak, Nalan; Gul, Hesna; Kıncır, Zeliha; Tasdemir, Akif; Emul, Murat

    2016-08-01

    Although emotional cues like facial emotion expressions seem to be important in social interaction, there is no specific training in emotional cues for psychiatrists. Here, we aimed to investigate psychiatrists' facial emotion recognition ability and its relation to their clinical identification as psychotherapy- or psychopharmacology-oriented and as adult or child-adolescent psychiatrists. A Facial Emotion Recognition Test, constructed from a set of photographs (happy, sad, fearful, angry, surprised, disgusted, and neutral faces) from Ekman and Friesen's series, was administered to 130 psychiatrists. Psychotherapy-oriented adult psychiatrists were significantly better at recognizing the sad facial emotion (p=.003) than psychopharmacologists, while no significant differences were detected according to therapeutic orientation among child-adolescent psychiatrists (for each, p>.05). Adult psychiatrists were significantly better at recognizing fearful (p=.012) and disgusted (p=.003) facial emotions than child-adolescent psychiatrists, while the latter were better at recognizing the angry facial emotion (p=.008). For the first time, we have shown differences in psychiatrists' facial emotion recognition ability according to therapeutic identification and to adult versus child-adolescent specialization. It would be valuable to investigate how these differences, or training in facial emotion recognition, would affect the quality of patient-clinician interaction and treatment-related outcomes.

  15. Comparison of Spectral-Only and Spectral/Spatial Face Recognition for Personal Identity Verification

    Directory of Open Access Journals (Sweden)

    Zhihong Pan

    2009-01-01

    Full Text Available Face recognition based on spatial features has been widely used for personal identity verification for security-related applications. Recently, near-infrared spectral reflectance properties of local facial regions have been shown to be sufficient discriminants for accurate face recognition. In this paper, we compare the performance of the spectral method with face recognition using the eigenface method on single-band images extracted from the same hyperspectral image set. We also consider methods that use multiple original and PCA-transformed bands. Lastly, an innovative spectral eigenface method which uses both spatial and spectral features is proposed to improve the quality of the spectral features and to reduce the expense of the computation. The algorithms are compared using a consistent framework.
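
    For reference, the baseline eigenface method named in the record reduces to PCA on vectorized face images. A minimal sketch follows; the image sizes and data are invented, and this is the generic textbook procedure rather than the paper's exact pipeline:

        import numpy as np

        def eigenfaces(X, k):
            """PCA on flattened face images (one image per row): return the
            mean face, the top-k eigenfaces, and the k-D projection codes."""
            mu = X.mean(axis=0)
            Xc = X - mu
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            W = Vt[:k]                      # top-k principal components
            return mu, W, Xc @ W.T

        # Hypothetical single-band images extracted from a hyperspectral set.
        X = np.random.rand(40, 32 * 32)
        mu, W, codes = eigenfaces(X, k=10)
        print(codes.shape)                  # (40, 10): one 10-D code per face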

  17. More Pronounced Deficits in Facial Emotion Recognition for Schizophrenia than Bipolar Disorder

    Science.gov (United States)

    Goghari, Vina M; Sponheim, Scott R

    2012-01-01

    Schizophrenia and bipolar disorder are typically separated in diagnostic systems. Behavioural, cognitive, and brain abnormalities associated with each disorder nonetheless overlap. We evaluated the diagnostic specificity of facial emotion recognition deficits in schizophrenia and bipolar disorder to determine whether select aspects of emotion recognition differed for the two disorders. The investigation used an experimental task that included the same facial images in an emotion recognition condition and an age recognition condition (to control for processes associated with general face recognition) in 27 schizophrenia patients, 16 bipolar I patients, and 30 controls. Schizophrenia and bipolar patients exhibited both shared and distinct aspects of facial emotion recognition deficits. Schizophrenia patients had deficits in recognizing angry facial expressions compared to healthy controls and bipolar patients. Compared to control participants, both schizophrenia and bipolar patients were more likely to mislabel facial expressions of anger as fear. Given that schizophrenia patients exhibited a deficit in emotion recognition for angry faces, which did not appear due to generalized perceptual and cognitive dysfunction, improving recognition of threat-related expression may be an important intervention target to improve social functioning in schizophrenia. PMID:23218816

  18. Developmental differences in holistic interference of facial part recognition.

    Directory of Open Access Journals (Sweden)

    Kazuyo Nakabayashi

    2013-01-01

    Full Text Available Research has shown that adults' recognition of a facial part can be disrupted if the part is learnt without a face context but tested in a whole face. This has been interpreted as the holistic interference effect. The present study investigated whether 6- and 9-10-year-old children would show a similar effect. Participants were asked to judge whether a probe part was the same as or different from a test part, whereby the part was presented either in isolation or in a whole face. The results showed that while all the groups were susceptible to holistic interference, the youngest group was most severely affected. Contrary to the view that piecemeal processing precedes holistic processing in cognitive development, our findings demonstrate that holistic processing is already present at 6 years of age. It is the ability to inhibit the influence of holistic information on piecemeal processing that seems to require a longer period of development, extending into older childhood and adulthood.

  20. Facial emotion recognition ability: psychiatry nurses versus nurses from other departments.

    Science.gov (United States)

    Gultekin, Gozde; Kincir, Zeliha; Kurt, Merve; Catal, Yasir; Acil, Asli; Aydin, Aybike; Özcan, Mualla; Delikkaya, Busra N; Kacar, Selma; Emul, Murat

    2016-12-01

    Facial emotion recognition is a basic element of non-verbal communication. Although some researchers have shown that recognizing facial expressions may be important in the interaction between doctors and patients, there are no studies concerning facial emotion recognition in nurses. Here, we aimed to investigate facial emotion recognition ability in nurses and to compare this ability between nurses from psychiatry and other departments. In this cross-sectional study, sixty-seven nurses were divided into two groups according to their departments: psychiatry (n=31) and other departments (n=36). A Facial Emotion Recognition Test, constructed from a set of photographs from Ekman and Friesen's book "Pictures of Facial Affect", was administered to all participants. In the whole group, the most accurately recognized facial emotion was happiness (99.14%), while the least accurately recognized was fear (47.71%). There were no significant differences between the two groups in mean accuracy rates for recognizing happy, sad, fearful, angry, and surprised facial expressions (for all, p>0.05). The ability to recognize disgusted and neutral facial emotions tended to be better in the other nurses than in the psychiatry nurses (p=0.052 and p=0.053, respectively). This study was the first to reveal no difference in facial emotion recognition ability between psychiatry nurses and non-psychiatry nurses. In medical education curricula throughout the world, no specific training program is scheduled for recognizing patients' emotional cues. We consider that improving the ability of medical staff to recognize facial emotion expressions might be beneficial in reducing inappropriate patient-staff interactions.

  1. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than previous researches and other fusion methods.
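
    A hedged sketch of the score-fusion step: train an SVM on pairs of (shape-based, appearance-based) matching scores to separate same-expression from different-expression comparisons. All scores, labels, and parameters below are synthetic illustrations, not the paper's data:

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        same = rng.normal([0.8, 0.7], 0.1, size=(50, 2))   # matching pairs
        diff = rng.normal([0.3, 0.4], 0.1, size=(50, 2))   # non-matching pairs
        X = np.vstack([same, diff])        # columns: shape score, appearance score
        y = np.array([1] * 50 + [0] * 50)  # 1 = same expression, 0 = different

        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.predict([[0.75, 0.65], [0.20, 0.50]]))   # -> [1 0]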

  2. Facial Affect Recognition Training Through Telepractice: Two Case Studies of Individuals with Chronic Traumatic Brain Injury.

    Science.gov (United States)

    Williamson, John; Isaki, Emi

    2015-01-01

    The use of a modified Facial Affect Recognition (FAR) training to identify emotions was investigated with two case studies of adults with moderate to severe chronic (> five years) traumatic brain injury (TBI). The modified FAR training was administered via telepractice to target social communication skills. Therapy consisted of identifying emotions through static facial expressions, personally reflecting on those emotions, and identifying sarcasm and emotions within social stories and role-play. Pre- and post-therapy measures included static facial photos to identify emotion and the Prutting and Kirchner Pragmatic Protocol for social communication. Both participants with chronic TBI showed gains on identifying facial emotions on the static photos.

  3. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    Science.gov (United States)

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions.

  4. Novel dynamic Bayesian networks for facial action element recognition and understanding

    Science.gov (United States)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
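
    The first stage (a local Gabor filter bank) can be sketched directly; the kernel below is the standard real Gabor form, with the orientation count, wavelength, and patch data invented for illustration. Per the record, PCA would then reduce the stacked responses:

        import numpy as np

        def gabor_kernel(theta, lam=8.0, sigma=4.0, size=21):
            """Real part of a 2-D Gabor filter at orientation theta."""
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            yr = -x * np.sin(theta) + y * np.cos(theta)
            return (np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
                    * np.cos(2 * np.pi * xr / lam))

        bank = [gabor_kernel(t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
        patch = np.random.rand(21, 21)            # hypothetical face patch
        responses = [float(np.sum(k * patch)) for k in bank]
        print(responses)                          # one response per orientation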

  5. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    Science.gov (United States)

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding.

  6. Temporal Lobe Structures and Facial Emotion Recognition in Schizophrenia Patients and Nonpsychotic Relatives

    Science.gov (United States)

    Goghari, Vina M.; MacDonald, Angus W.; Sponheim, Scott R.

    2011-01-01

    Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions. PMID:20484523

  7. Using Kinect for real-time emotion recognition via facial expressions

    Institute of Scientific and Technical Information of China (English)

    Qi-rong MAO; Xin-yu PAN; Yong-zhao ZHAN; Xiang-jun SHEN

    2015-01-01

    Emotion recognition via facial expressions (ERFE) has attracted a great deal of interest with recent advances in artificial intelligence and pattern recognition. Most studies are based on 2D images, and their performance is usually computationally expensive. In this paper, we propose a real-time emotion recognition approach based on both 2D and 3D facial expression features captured by Kinect sensors. To capture the deformation of the 3D mesh during facial expression, we combine the features of animation units (AUs) and feature point positions (FPPs) tracked by Kinect. A fusion algorithm based on improved emotional profiles (IEPs) and maximum confidence is proposed to recognize emotions with these real-time facial expression features. Experiments on both an emotion dataset and a real-time video show the superior performance of our method.
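
    The "maximum confidence" part of the fusion rule admits a one-line sketch: keep whichever channel (animation units or feature point positions) is most confident. The labels and confidences are invented, and the real method additionally uses improved emotional profiles:

        def max_confidence_fusion(predictions):
            """Return the (label, confidence) pair with the highest
            confidence across the per-channel classifier outputs."""
            return max(predictions, key=lambda p: p[1])

        au_pred = ("happy", 0.71)       # hypothetical AU-channel output
        fpp_pred = ("surprise", 0.64)   # hypothetical FPP-channel output
        print(max_confidence_fusion([au_pred, fpp_pred]))  # ('happy', 0.71)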

  8. Recognizing identity in the face of change: the development of an expression-independent representation of facial identity.

    Science.gov (United States)

    Mian, Jasmine F; Mondloch, Catherine J

    2012-07-30

    Perceptual aftereffects have indicated that there is an asymmetry in the extent to which adults' representations of identity and expression are independent of one another. Their representation of expression is identity-dependent; the magnitude of expression aftereffects is reduced when the adaptation and test stimuli have different identities. In contrast, their representation of identity is expression-independent; the magnitude of identity aftereffects is independent of whether the adaptation and test stimuli pose the same expressions. Like adults, children's representation of expression is identity-dependent (Vida & Mondloch, 2009). Here we investigated whether they have an expression-dependent representation of facial identity. Adults and 8-year-olds (n = 20 per group) categorized faces in an identity continuum (Sue/Jen) after viewing an adapting stimulus that displayed the same or a different emotional expression. Both groups showed identity aftereffects that were not influenced by facial expression. We conclude that, like adults, 8-year-old children's representation of identity is expression-independent.

  9. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.

    Science.gov (United States)

    Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto

    2015-04-01

    The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits

  10. Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition.

    Science.gov (United States)

    Willis, Megan L; Murphy, Jillian M; Ridley, Nicole J; Vercammen, Ans

    2015-12-01

    The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS to responses on the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction.

  11. Facial Expression Recognition Based on Features Derived From the Distinct LBP and GLCM

    Directory of Open Access Journals (Sweden)

    Gorti Satyanarayana Murty

    2014-01-01

    Full Text Available Automatic recognition of facial expressions can be an important component of natural human-machine interfaces; it may also be used in behavioural science and in clinical practice. Although humans recognise facial expressions virtually without effort or delay, reliable expression recognition by machine is still a challenge. This paper presents recognition of facial expressions by integrating features derived from the grey-level co-occurrence matrix (GLCM) with a new structural approach derived from distinct LBPs (DLBPs) on a 3 x 3 first-order compressed image (FCI). The proposed method precisely recognizes the seven categories of expressions, i.e., neutral, happiness, sadness, surprise, anger, disgust, and fear. The proposed method contains three phases. In the first phase, each 5 x 5 sub-image is compressed into a 3 x 3 sub-image. The second phase derives two distinct LBPs (DLBPs) using the triangular patterns between the upper and lower parts of the 3 x 3 sub-image. In the third phase, a GLCM is constructed based on the DLBPs, and feature parameters are evaluated for precise facial expression recognition. The derived DLBP is effective because, integrated with the GLCM, it provides better classification performance. The proposed method overcomes the disadvantages of statistical and formal LBP methods in estimating facial expressions. The experimental results demonstrate the effectiveness of the proposed method on facial expression recognition.
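
    The general LBP-then-GLCM recipe the paper builds on can be sketched with standard library calls. Note that plain uniform LBP stands in here for the paper's distinct LBPs, and the image is random, so this shows the shape of the pipeline rather than the proposed method itself:

        import numpy as np
        from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

        img = (np.random.rand(48, 48) * 255).astype(np.uint8)  # hypothetical face crop

        # Texture codes: uniform LBP over a 3x3 neighbourhood (8 samples, radius 1).
        lbp = local_binary_pattern(img, P=8, R=1, method="uniform").astype(np.uint8)

        # Co-occurrence of the codes, then Haralick-style summary features.
        glcm = graycomatrix(lbp, distances=[1], angles=[0], levels=256, normed=True)
        feats = [graycoprops(glcm, p)[0, 0]
                 for p in ("contrast", "homogeneity", "energy", "correlation")]
        print(feats)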

  12. Theory of mind and recognition of facial emotion in dementia: challenge to current concepts.

    Science.gov (United States)

    Freedman, Morris; Binns, Malcolm A; Black, Sandra E; Murphy, Cara; Stuss, Donald T

    2013-01-01

    Current literature suggests that theory of mind (ToM) and recognition of facial emotion are impaired in behavioral variant frontotemporal dementia (bvFTD). In contrast, studies suggest that ToM is spared in Alzheimer disease (AD). However, there is controversy whether recognition of emotion in faces is impaired in AD. This study challenges the concepts that ToM is preserved in AD and that recognition of facial emotion is impaired in bvFTD. ToM, recognition of facial emotion, and identification of emotions associated with video vignettes were studied in bvFTD, AD, and normal controls. ToM was assessed using false-belief and visual perspective-taking tasks. Identification of facial emotion was tested using Ekman and Friesen's pictures of facial affect. After adjusting for relevant covariates, there were significant ToM deficits in bvFTD and AD compared with controls, whereas neither group was impaired in the identification of emotions associated with video vignettes. There was borderline impairment in recognizing angry faces in bvFTD. Patients with AD showed significant deficits on false belief and visual perspective taking, and bvFTD patients were impaired on second-order false belief. We report novel findings challenging the concepts that ToM is spared in AD and that recognition of facial emotion is impaired in bvFTD.

  13. Mapping correspondence between facial mimicry and emotion recognition in healthy subjects.

    Science.gov (United States)

    Ponari, Marta; Conson, Massimiliano; D'Amico, Nunzia Pina; Grossi, Dario; Trojano, Luigi

    2012-12-01

    We aimed at verifying the hypothesis that facial mimicry is causally and selectively involved in emotion recognition. For this purpose, in Experiment 1, we explored the effect of tonic contraction of muscles in upper or lower half of participants' face on their ability to recognize emotional facial expressions. We found that the "lower" manipulation specifically impaired recognition of happiness and disgust, the "upper" manipulation impaired recognition of anger, while both manipulations affected recognition of fear; recognition of surprise and sadness were not affected by either blocking manipulations. In Experiment 2, we verified whether emotion recognition is hampered by stimuli in which an upper or lower half-face showing an emotional expression is combined with a neutral half-face. We found that the neutral lower half-face interfered with recognition of happiness and disgust, whereas the neutral upper half impaired recognition of anger; recognition of fear and sadness was impaired by both manipulations, whereas recognition of surprise was not affected by either manipulation. Taken together, the present findings support simulation models of emotion recognition and provide insight into the role of mimicry in comprehension of others' emotional facial expressions.

  14. People with chronic facial pain perform worse than controls at a facial emotion recognition task, but it is not all about the emotion.

    Science.gov (United States)

    von Piekartz, H; Wallwork, S B; Mohr, G; Butler, D S; Moseley, G L

    2015-04-01

    Alexithymia, or a lack of emotional awareness, is prevalent in some chronic pain conditions and has been linked to poor recognition of others' emotions. Recognising others' emotions from their facial expression involves both emotional and motor processing, but the possible contribution of motor disruption has not been considered. It is possible that poor performance on emotional recognition tasks could reflect problems with emotional processing, motor processing, or both. We hypothesised that people with chronic facial pain would be less accurate in recognising others' emotions from facial expressions, would be less accurate in a motor imagery task involving the face, and that performance on both tasks would be positively related. A convenience sample of 19 people (15 females) with chronic facial pain and 19 gender-matched controls participated. They undertook two tasks; in the first task, they identified the facial emotion presented in a photograph. In the second, they identified whether the person in the image had a facial feature pointed towards their left or right side, a well-recognised paradigm to induce implicit motor imagery. People with chronic facial pain performed worse than controls at both the Facially Expressed Emotion Labelling (FEEL) emotion recognition task and the left/right facial expression task, and performance covaried within participants. We propose that disrupted motor processing may underpin, or at least contribute to, the difficulty that facial pain patients have in emotion recognition, and that further research testing this proposal is warranted.

  15. Facial emotion and identity processing development in 5- to 15-year-old children.

    Science.gov (United States)

    Johnston, Patrick J; Kaufman, Jordy; Bajic, Julie; Sercombe, Alicia; Michie, Patricia T; Karayanidis, Frini

    2011-01-01

    Most developmental studies of emotional face processing to date have focused on infants and very young children. Additionally, studies that examine emotional face processing in older children do not distinguish development in emotion and identity face processing from more generic age-related cognitive improvement. In this study, we developed a paradigm that measures processing of facial expression in comparison to facial identity and complex visual stimuli. The three matching tasks were developed (i.e., facial emotion matching, facial identity matching, and butterfly wing matching) to include stimuli of similar level of discriminability and to be equated for task difficulty in earlier samples of young adults. Ninety-two children aged 5-15 years and a new group of 24 young adults completed these three matching tasks. Young children were highly adept at the butterfly wing task relative to their performance on both face-related tasks. More importantly, in older children, development of facial emotion discrimination ability lagged behind that of facial identity discrimination.

  17. The Moving Window Technique: A Window into Developmental Changes in Attention during Facial Emotion Recognition

    Science.gov (United States)

    Birmingham, Elina; Meixner, Tamara; Iarocci, Grace; Kanan, Christopher; Smilek, Daniel; Tanaka, James W.

    2013-01-01

    The strategies children employ to selectively attend to different parts of the face may reflect important developmental changes in facial emotion recognition. Using the Moving Window Technique (MWT), children aged 5-12 years and adults ("N" = 129) explored faces with a mouse-controlled window in an emotion recognition task. An…
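
    The windowing itself is simple to mimic: blank the image except for an aperture at the pointer position. The aperture shape and size below are illustrative guesses rather than the study's parameters:

        import numpy as np

        def moving_window(img, cx, cy, radius=8):
            """Reveal only a circular aperture of img centred on the
            (mouse-controlled) point (cx, cy); blank everything else."""
            h, w = img.shape
            yy, xx = np.mgrid[:h, :w]
            mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
            out = np.zeros_like(img)
            out[mask] = img[mask]
            return out

        face = np.random.rand(64, 64)                          # hypothetical face image
        print(moving_window(face, 32, 20).sum() < face.sum())  # True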

  18. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    Science.gov (United States)

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  19. Facial expressions of emotions: recognition accuracy and affective reactions during late childhood.

    Science.gov (United States)

    Mancini, Giacomo; Agnoli, Sergio; Baldaro, Bruno; Bitti, Pio E Ricci; Surcinelli, Paola

    2013-01-01

    The present study examined the development of recognition ability and affective reactions to emotional facial expressions in a large sample of school-aged children (n = 504, ages 8-11 years). Specifically, the study aimed to investigate whether changes in emotion recognition ability and in the affective reactions associated with viewing facial expressions occur during late childhood. Moreover, because small but robust gender differences during late childhood have been proposed, the effects of gender on the development of emotion recognition and affective responses were examined. The results showed an overall increase in emotional face recognition ability from 8 to 11 years of age, particularly for neutral and sad expressions. However, the increase in sadness recognition was primarily due to the development of this recognition in boys. Moreover, our results indicate different developmental trends in males and females regarding the recognition of disgust. Last, developmental changes in affective reactions to emotional facial expressions were found. Whereas recognition ability increased over the developmental period studied, the affective reactions elicited by facial expressions were characterized by a decrease in arousal over the course of late childhood.

  20. Overview of impaired facial affect recognition in persons with traumatic brain injury.

    Science.gov (United States)

    Radice-Neumann, Dawn; Zupan, Barbra; Babbage, Duncan R; Willer, Barry

    2007-07-01

    To review the literature of affect recognition for persons with traumatic brain injury (TBI). It is suggested that impairment of affect recognition could be a significant problem for the TBI population and treatment strategies are recommended based on research for persons with autism. Research demonstrates that persons with TBI often have difficulty determining emotion from facial expressions. Studies show that poor interpersonal skills, which are associated with impaired affect recognition, are linked to a variety of negative outcomes. Theories suggest that facial affect recognition is achieved by interpreting important facial features and processing one's own emotions. These skills are often affected by TBI, depending on the areas damaged. Affect recognition impairments have also been identified in persons with autism. Successful interventions have already been developed for the autism population. Comparable neuroanatomical and behavioural findings between TBI and autism suggest that treatment approaches for autism may also benefit those with TBI. Impaired facial affect recognition appears to be a significant problem for persons with TBI. Theories of affect recognition, strategies used in autism and teaching techniques commonly used in TBI need to be considered when developing treatments to improve affect recognition in persons with brain injury.

  1. The Moving Window Technique: A Window into Developmental Changes in Attention during Facial Emotion Recognition

    Science.gov (United States)

    Birmingham, Elina; Meixner, Tamara; Iarocci, Grace; Kanan, Christopher; Smilek, Daniel; Tanaka, James W.

    2013-01-01

    The strategies children employ to selectively attend to different parts of the face may reflect important developmental changes in facial emotion recognition. Using the Moving Window Technique (MWT), children aged 5-12 years and adults ("N" = 129) explored faces with a mouse-controlled window in an emotion recognition task. An…

  2. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    Science.gov (United States)

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  3. Sad and happy facial emotion recognition impairment in progressive supranuclear palsy in comparison with Parkinson's disease.

    Science.gov (United States)

    Pontieri, Francesco E; Assogna, Francesca; Stefani, Alessandro; Pierantozzi, Mariangela; Meco, Giuseppe; Benincasa, Dario; Colosimo, Carlo; Caltagirone, Carlo; Spalletta, Gianfranco

    2012-08-01

    The severity of motor and non-motor symptoms of progressive supranuclear palsy (PSP) has a profound impact on social interactions of affected individuals and may, consequently, contribute to altered emotion recognition. Here we investigated facial emotion recognition impairment in PSP with respect to Parkinson's disease (PD), with the primary aim of outlining the differences between the two disorders. Moreover, we applied an intensity-dependent paradigm to examine the different thresholds for encoding emotional faces in PSP and PD. The Penn emotion recognition test (PERT) was used to assess facial emotion recognition ability in PSP and PD patients. The two groups were matched for age, disease duration, global cognition, depression, anxiety, and daily L-Dopa intake. PSP patients displayed significantly lower recognition of sad and happy emotional faces than PD patients. This applied to global recognition, as well as to low-intensity and high-intensity facial emotion recognition. These results indicate a specific impairment in the recognition of sad and happy facial emotions in PSP compared with PD patients. The differences may depend upon differing involvement of the cortical-subcortical loops integrating emotional states and cognition in the two conditions, and might represent a neuropsychological correlate of the apathetic syndrome frequently encountered in PSP.

  4. The development of emotion recognition from facial expressions and non-linguistic vocalizations during childhood.

    Science.gov (United States)

    Chronaki, Georgia; Hadwin, Julie A; Garner, Matthew; Maurage, Pierre; Sonuga-Barke, Edmund J S

    2015-06-01

    Sensitivity to facial and vocal emotion is fundamental to children's social competence. Previous research has focused on children's facial emotion recognition, and few studies have investigated non-linguistic vocal emotion processing in childhood. We compared facial and vocal emotion recognition and processing biases in 4- to 11-year-olds and adults. Eighty-eight 4- to 11-year-olds and 21 adults participated. Participants viewed/listened to faces and voices (angry, happy, and sad) at three intensity levels (50%, 75%, and 100%). Non-linguistic tones were used. For each modality, participants completed an emotion identification task. Accuracy and bias for each emotion and modality were compared across 4- to 5-, 6- to 9- and 10- to 11-year-olds and adults. The results showed that children's emotion recognition improved with age; preschoolers were less accurate than other groups. Facial emotion recognition reached adult levels by 11 years, whereas vocal emotion recognition continued to develop in late childhood. Response bias decreased with age. For both modalities, sadness recognition was delayed across development relative to anger and happiness. The results demonstrate that developmental trajectories of emotion processing differ as a function of emotion type and stimulus modality. In addition, vocal emotion processing showed a more protracted developmental trajectory, compared to facial emotion processing. The results have important implications for programmes aiming to improve children's socio-emotional competence.

  5. Facial recognition and laser surface scan: a pilot study

    DEFF Research Database (Denmark)

    Lynnerup, Niels; Clausen, Maja-Lisa; Kristoffersen, Agnethe May

    2009-01-01

    Surface scanning of the face of a suspect is presented as a way to better match the facial features with those of a perpetrator from CCTV footage. We performed a simple pilot study where we obtained facial surface scans of volunteers and then in blind trials tried to match these scans with 2D...

  6. Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal

    Directory of Open Access Journals (Sweden)

    Han Zhiyan

    2016-01-01

    Full Text Available In order to overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm that takes the speech signal and the facial expression signal as its research subjects. First, the speech and facial expression features are fused, sample sets are generated by sampling with replacement, and classifiers are trained with a BP neural network (BPNN). Second, the difference between two classifiers is measured by a double-error difference selection strategy. Finally, the final recognition result is obtained by the majority voting rule. Experiments show that the method improves the accuracy of emotion recognition by exploiting the advantages of both decision-level and feature-level fusion, bringing the whole fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
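
    A loose sketch of the fusion scheme this record describes: modality features are concatenated (feature-level fusion), an ensemble of networks is trained on bootstrap samples, and predictions are combined by majority vote (decision-level fusion). Here scikit-learn's MLPClassifier stands in for the BP neural network, the double-error difference selection step is omitted, and the feature arrays are random placeholders.

```python
# Sketch of the described fusion scheme under stated assumptions:
# random placeholders stand in for real speech/face features, and
# MLPClassifier stands in for the BP neural network (BPNN).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 200
speech_feats = rng.normal(size=(n, 24))   # placeholder speech features
face_feats = rng.normal(size=(n, 40))     # placeholder facial features
y = rng.integers(0, 6, size=n)            # six emotion classes

# Feature-level fusion: concatenate the two modalities.
X = np.hstack([speech_feats, face_feats])

# Train one classifier per bootstrap ("putting back") sample set.
classifiers = []
for seed in range(5):
    idx = rng.integers(0, n, size=n)      # sample with replacement
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                        random_state=seed).fit(X[idx], y[idx])
    classifiers.append(clf)

# Decision-level fusion: majority vote over the ensemble's predictions.
votes = np.stack([clf.predict(X) for clf in classifiers])   # (n_clf, n)
final = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```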

  7. Developmental changes in facial expression recognition in Japanese school-age children.

    Science.gov (United States)

    Naruse, Susumu; Hashimoto, Toshiaki; Mori, Kenji; Tsuda, Yoshimi; Takahara, Mitsue; Kagami, Shoji

    2013-01-01

    Facial expressions hold abundant information and play a central part in communication. In daily life, we must construct amicable interpersonal relationships by communicating through verbal and nonverbal behaviors. Although school age is a period of rapid social growth, few studies have examined developmental changes in facial expression recognition during this period. This study investigated such changes by examining observers' gaze on others' expressions. Participants were 87 school-age children from first to sixth grade (41 boys, 46 girls). A Tobii T60 Eye-tracker (Tobii Technologies, Sweden) was used to record eye movements during a task of matching pre-instructed emotion words to facial expression images (neutral, angry, happy, surprised, sad, disgusted) presented on a monitor fixed at a distance of 50 cm. In the task of matching the six facial expression images and emotion words, the mid- and higher-grade children answered more accurately than the lower-grade children for four expressions, excluding neutral and happy. For fixation time and fixation count, the lower-grade children scored lower than the other grades, gazing at all facial expressions significantly fewer times and for shorter periods. These findings suggest that the transition from the lower to the middle grades is a turning point in facial expression recognition.
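
    The fixation measures reported here (count and total duration per expression) reduce to a simple aggregation over exported fixation rows. A minimal sketch, assuming a hypothetical eye-tracker export with one row per fixation and columns `participant`, `stimulus`, and `duration_ms`:

```python
# Aggregate fixation count and total fixation time per participant
# and stimulus. The column names and data are hypothetical.
import pandas as pd

fixations = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2],
    "stimulus": ["happy", "happy", "angry", "happy", "angry"],
    "duration_ms": [210, 180, 250, 300, 220],
})

metrics = (fixations
           .groupby(["participant", "stimulus"])["duration_ms"]
           .agg(fixation_count="count", total_fixation_ms="sum")
           .reset_index())
print(metrics)
```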

  8. Facial Expression Recognition Based on Local Binary Patterns and Kernel Discriminant Isomap

    Directory of Open Access Journals (Sweden)

    Xiaoming Zhao

    2011-10-01

    Full Text Available Facial expression recognition is an interesting and challenging subject. Considering the nonlinear manifold structure of facial images, a new kernel-based manifold learning method, called kernel discriminant isometric mapping (KDIsomap), is proposed. KDIsomap aims to nonlinearly extract the discriminant information by maximizing the interclass scatter while minimizing the intraclass scatter in a reproducing kernel Hilbert space. KDIsomap is used to perform nonlinear dimensionality reduction on the extracted local binary pattern (LBP) facial features, and produces low-dimensional discriminant embedded data representations with striking performance improvements on facial expression recognition tasks. The nearest neighbor classifier with the Euclidean metric is used for facial expression classification. Facial expression recognition experiments are performed on two popular facial expression databases, i.e., the JAFFE database and the Cohn-Kanade database. Experimental results indicate that KDIsomap obtains the best accuracy of 81.59% on the JAFFE database and 94.88% on the Cohn-Kanade database. KDIsomap outperforms the other methods used, such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA), kernel linear discriminant analysis (KLDA), and kernel isometric mapping (KIsomap).
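
    The shape of this pipeline (LBP histograms, nonlinear embedding, nearest-neighbor classification) can be sketched as below. KDIsomap itself is not available in common libraries, so plain unsupervised Isomap stands in for it here, and the face array is a random placeholder.

```python
# Rough sketch: LBP features -> manifold embedding -> 1-NN.
# Unsupervised Isomap is a stand-in for the paper's supervised
# KDIsomap; faces and labels are random placeholders.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier

def lbp_histogram(img, P=8, R=1):
    """Uniform LBP codes summarized as a normalized histogram."""
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

rng = np.random.default_rng(1)
faces = rng.random((60, 64, 64))          # placeholder grayscale face crops
labels = rng.integers(0, 7, size=60)      # seven expression classes

X = np.array([lbp_histogram(f) for f in faces])
X_low = Isomap(n_neighbors=8, n_components=5).fit_transform(X)

knn = KNeighborsClassifier(n_neighbors=1).fit(X_low, labels)
```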

  9. Concurrent development of facial identity and expression discrimination

    National Research Council Canada - National Science Library

    Kirsten A Dalrymple; Visconti di Oleggio Castello; Jed T Elison; M Ida Gobbini

    2017-01-01

    ...). After a brief delay, the target face is replaced by two choice faces: 100% Identity A and 100% Identity B. Children 5-12-years-old were asked to pick the choice face that is most similar to the target identity...

  10. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    Science.gov (United States)

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.

  11. Facial-affect recognition and visual scanning behaviour in the course of schizophrenia.

    Science.gov (United States)

    Streit, M; Wölwer, W; Gaebel, W

    1997-04-11

    The performance of schizophrenic in-patients in facial expression identification was assessed in an acute phase and in a partly remitted phase of the illness. During visual exploration of the face stimuli, the patients' eye movements were recorded using an infrared-corneal-reflection technique. Compared to healthy controls, patients demonstrated a significant deficit in facial-affect recognition. In addition, schizophrenic patients differed from controls in several eye movement parameters, such as length of mean scan path and mean duration of fixation. Both the facial-affect recognition deficit and the eye movement abnormalities remained stable over time. However, performance in facial-affect recognition and eye movement abnormalities were not correlated. Patients with flattened affect showed relatively selective scan pattern characteristics. In contrast, affective flattening was not correlated with performance in facial-affect recognition. Dosage of neuroleptic medication did not affect the results. The main findings of the study suggest that schizophrenia is associated with disturbances in primarily unrelated neurocognitive operations mediating visuomotor processing and facial expression analysis. Given their stability over time, the disturbances might have a trait-like character.

  12. Borrowed beauty? Understanding identity in Asian facial cosmetic surgery

    NARCIS (Netherlands)

    Aquino, Y.S.; Steinkamp, N.L.

    2016-01-01

    This review aims to identify (1) sources of knowledge and (2) important themes of the ethical debate related to surgical alteration of facial features in East Asians. This article integrates narrative and systematic review methods. In March 2014, we searched databases including PubMed, Philosopher's

  13. Pose-variant facial expression recognition using an embedded image system

    Science.gov (United States)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction is automated facial expression recognition. By recognizing facial expressions, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified into happiness, neutral, sadness, surprise or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low-resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
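
    The feature construction described here (distances between tracked feature points, classified by an SVM) can be sketched as follows; the 14 AAM-tracked landmarks are mocked as random (x, y) coordinates, and the labels are placeholders.

```python
# Sketch: pairwise distances between 14 landmarks -> SVM.
# Landmarks and labels are random placeholders standing in for
# AAM-tracked feature points and annotated expressions.
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

def distance_features(points):
    """All pairwise Euclidean distances between landmark points."""
    return np.array([np.linalg.norm(points[i] - points[j])
                     for i, j in combinations(range(len(points)), 2)])

rng = np.random.default_rng(2)
landmarks = rng.random((100, 14, 2))      # 100 frames, 14 points each
y = rng.integers(0, 5, size=100)          # five expression classes

X = np.array([distance_features(p) for p in landmarks])   # 91 distances/frame
svm = SVC(kernel="rbf").fit(X, y)
```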

  14. Deficits in recognition, identification, and discrimination of facial emotions in patients with bipolar disorder

    Directory of Open Access Journals (Sweden)

    Adolfo Benito

    2013-12-01

    Full Text Available Objective: To analyze the recognition, identification, and discrimination of facial emotions in a sample of outpatients with bipolar disorder (BD). Methods: Forty-four outpatients with a diagnosis of BD and 48 matched control subjects were selected. Both groups were assessed with tests for recognition (Emotion Recognition-40 - ER40), identification (Facial Emotion Identification Test - FEIT), and discrimination (Facial Emotion Discrimination Test - FEDT) of facial emotions, as well as a verbal theory of mind (ToM) test (Hinting Task). Differences between groups were analyzed, controlling for the influence of mild depressive and manic symptoms. Results: Patients with BD scored significantly lower than controls on recognition (ER40), identification (FEIT), and discrimination (FEDT) of emotions. Regarding the verbal measure of ToM, a lower score was also observed in patients compared to controls. Patients with mild syndromal depressive symptoms obtained outcomes similar to patients in euthymia. A significant correlation between FEDT scores and global functioning (measured by the Functioning Assessment Short Test, FAST) was found. Conclusions: These results suggest that, even in euthymia, patients with BD experience deficits in recognition, identification, and discrimination of facial emotions, with potential functional implications.

  15. Predictive codes of familiarity and context during the perceptual learning of facial identities.

    Science.gov (United States)

    Apps, Matthew A J; Tsakiris, Manos

    2013-01-01

    Face recognition is a key component of successful social behaviour. However, the computational processes that underpin perceptual learning and recognition as faces transition from unfamiliar to familiar are poorly understood. In predictive coding, learning occurs through prediction errors that update stimulus familiarity, but recognition is a function of both stimulus and contextual familiarity. Here we show that behavioural responses on a two-option face recognition task can be predicted by the level of contextual and facial familiarity in a computational model derived from predictive-coding principles. Using fMRI, we show that activity in the superior temporal sulcus varies with the contextual familiarity in the model, whereas activity in the fusiform face area covaries with the prediction error parameter that updated facial familiarity. Our results characterize the key computations underpinning the perceptual learning of faces, highlighting that the functional properties of face-processing areas conform to the principles of predictive coding.
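
    As a toy illustration of the prediction-error idea in this record (not the authors' actual model), a delta-rule update drives a stimulus-familiarity estimate toward an asymptote with each exposure; the learning rate is an arbitrary assumption:

```python
# Illustrative delta-rule sketch: a prediction error updates the
# familiarity of a face on every exposure. Not the paper's model.
def update_familiarity(familiarity, alpha=0.3):
    prediction_error = 1.0 - familiarity   # "face was seen" vs. expectation
    return familiarity + alpha * prediction_error

f = 0.0
for exposure in range(5):
    f = update_familiarity(f)
    print(f"exposure {exposure + 1}: familiarity = {f:.3f}")
```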

  16. Facial Affect Recognition Training Through Telepractice: Two Case Studies of Individuals with Chronic Traumatic Brain Injury

    OpenAIRE

    John Williamson; Emi Isaki

    2015-01-01

    The use of a modified Facial Affect Recognition (FAR) training to identify emotions was investigated with two case studies of adults with moderate to severe chronic (> five years) traumatic brain injury (TBI).  The modified FAR training was administered via telepractice to target social communication skills.  Therapy consisted of identifying emotions through static facial expressions, personally reflecting on those emotions, and identifying sarcasm and emotions within social stories and ro...

  17. Facial Affect Recognition Training Through Telepractice: Two Case Studies of Individuals with Chronic Traumatic Brain Injury

    OpenAIRE

    Williamson, John; ISAKI, EMI

    2015-01-01

    The use of a modified Facial Affect Recognition (FAR) training to identify emotions was investigated with two case studies of adults with moderate to severe chronic (> five years) traumatic brain injury (TBI). The modified FAR training was administered via telepractice to target social communication skills. Therapy consisted of identifying emotions through static facial expressions, personally reflecting on those emotions, and identifying sarcasm and emotions within social stories and role-pl...

  18. Empathy, but not mimicry restriction, influences the recognition of change in emotional facial expressions.

    Science.gov (United States)

    Kosonogov, Vladimir; Titova, Alisa; Vorobyeva, Elena

    2015-01-01

    The current study addressed the hypothesis that empathy and the restriction of observers' facial muscles can influence recognition of emotional facial expressions. A sample of 74 participants recognized the subjective onset of emotional facial expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral) in a series of morphed face photographs showing a gradual change (frame by frame) from one expression to another. The high-empathy (as measured by the Empathy Quotient) participants recognized emotional facial expressions at earlier photographs from the series than did low-empathy ones, but there was no difference in the exploration time. Restriction of the observers' facial muscles (with plasters and a stick in the mouth) did not influence the responses. We discuss these findings in the context of the embodied simulation theory and previous data on empathy.

  19. Automatic facial feature extraction and expression recognition based on neural network

    CERN Document Server

    Khandait, S P; Khandait, P D

    2012-01-01

    In this paper, an approach to the problem of automatic facial feature extraction from a still frontal posed image, and the classification and recognition of facial expression and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier for classifying the expressions of a supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features such as the eyebrows, eyes, mouth and nose are extracted using the SUSAN edge-detection operator, facial geometry, and edge projection analysis. Experiments were carried out on the JAFFE facial expression database, giving 100% accuracy on the training set and 95.26% accuracy on the test set.
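
    Of the feature-localization steps listed, edge projection analysis is easy to sketch: edge strengths are summed along rows and columns, and peaks in the projections mark feature bands such as the eyes or mouth. Sobel is used below as a stand-in for the SUSAN operator, which common Python libraries do not provide, and the face is a random placeholder.

```python
# Sketch of edge projection analysis for feature localization.
# Sobel stands in for the SUSAN edge operator (an assumption).
import numpy as np
from skimage import filters

def edge_projections(gray_face):
    edges = filters.sobel(gray_face)      # edge-strength map
    horizontal = edges.sum(axis=1)        # one value per image row
    vertical = edges.sum(axis=0)          # one value per image column
    return horizontal, vertical

face = np.random.default_rng(3).random((64, 64))   # placeholder face crop
h_proj, v_proj = edge_projections(face)
eye_row = int(np.argmax(h_proj[:32]))   # strongest edge band, upper half
```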

  20. Facial emotion recognition in Williams syndrome and Down syndrome: A matching and developmental study.

    Science.gov (United States)

    Martínez-Castilla, Pastora; Burt, Michael; Borgatti, Renato; Gagliardi, Chiara

    2015-01-01

    In this study, both the matching and developmental trajectories approaches were used to clarify questions that remain open in the literature on facial emotion recognition in Williams syndrome (WS) and Down syndrome (DS). The matching approach showed that individuals with WS or DS exhibit neither proficiency for the expression of happiness nor specific impairments for negative emotions. Instead, they present the same pattern of emotion recognition as typically developing (TD) individuals. Thus, the better performance on the recognition of positive compared to negative emotions usually reported in WS and DS is not specific to these populations but seems to represent a typical pattern. Prior studies based on the matching approach suggested that the development of facial emotion recognition is delayed in WS and atypical in DS. Nevertheless, even though performance levels were lower in DS than in WS, the developmental trajectories approach used in this study showed that not only individuals with DS but also those with WS present atypical development in facial emotion recognition. Unlike in the TD participants, where developmental changes were observed with age, in the WS and DS groups the development of facial emotion recognition was static. Both individuals with WS and those with DS reached an early maximum developmental level due to cognitive constraints.

  1. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    Science.gov (United States)

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Discriminative shared Gaussian processes for multiview and view-invariant facial expression recognition.

    Science.gov (United States)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2015-01-01

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers learned separately for each view or a single classifier learned for all views. However, these approaches ignore the fact that different views of a facial expression are just different manifestations of the same facial expression. By accounting for this redundancy, we can design more effective classifiers for the target task. To this end, we propose a discriminative shared Gaussian process latent variable model (DS-GPLVM) for multiview and view-invariant classification of facial expressions from multiple views. In this model, we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Finally, classification of an observed facial expression is carried out either in the view-invariant manner (using only a single view of the expression) or in the multiview manner (using multiple views of the expression). The proposed model can also be used to perform fusion of different facial features in a principled manner. We validate the proposed DS-GPLVM on both posed and spontaneously displayed facial expressions from three publicly available datasets (MultiPIE, labeled face parts in the wild, and static facial expressions in the wild). We show that this model outperforms the state-of-the-art methods for multiview and view-invariant facial expression classification, and several state-of-the-art methods for multiview learning and feature fusion.

  3. Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.

    Science.gov (United States)

    Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál

    2014-02-01

    Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding and thus lead to neurocognitive dysfunctions, such as deficits in facial affect recognition. To gain an insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event Related Spectral Perturbation (ERSP) and the Inter Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were found to be significantly weaker in patients compared with healthy controls, which is in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate a less effective functioning in the recognition process of facial features, which may contribute to a less effective social cognition in schizophrenia.
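
    Inter-trial coherence of the kind analyzed in this record can be sketched in a few lines: band-pass each trial in the theta range, extract the instantaneous phase, and average unit phasors across trials. Synthetic data and the 250 Hz sampling rate are assumptions; the 140-200 ms window mirrors the interval reported above.

```python
# Minimal numpy/scipy sketch of inter-trial coherence (ITC) in the
# theta band. Synthetic trials stand in for EEG epochs.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250                                  # sampling rate (Hz), assumed
rng = np.random.default_rng(4)
trials = rng.normal(size=(40, fs))        # 40 one-second trials

# Band-pass 4-8 Hz, then take the analytic signal's phase.
b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
theta = filtfilt(b, a, trials, axis=1)
phase = np.angle(hilbert(theta, axis=1))

# ITC: magnitude of the mean unit phasor across trials, per sample.
itc = np.abs(np.mean(np.exp(1j * phase), axis=0))          # values in [0, 1]
window = itc[int(0.140 * fs):int(0.200 * fs)].mean()       # 140-200 ms average
```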

  4. Recognition of Facial Expressions of Different Emotional Intensities in Patients with Frontotemporal Lobar Degeneration

    Directory of Open Access Journals (Sweden)

    Roy P. C. Kessels

    2007-01-01

    Full Text Available Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD). FTLD patients also show impairments in emotion processing. Specifically, the perception of negative emotional facial expressions is affected. Generally, however, negative emotional expressions are regarded as more difficult to recognize than positive ones, which may thus have been a confounding factor in previous studies. Also, ceiling effects are often present on emotion recognition tasks using full-blown emotional facial expressions. In the present study with FTLD patients, we examined the perception of sadness, anger, fear, happiness, surprise and disgust at different emotional intensities on morphed facial expressions to take task difficulty into account. Results showed that our FTLD patients were specifically impaired at the recognition of the emotion anger. Also, the patients performed worse than the controls on recognition of surprise, but performed at control levels on disgust, happiness, sadness and fear. These findings corroborate and extend previous results showing deficits in emotion perception in FTLD.

  5. Feature Extraction for Facial Expression Recognition based on Hybrid Face Regions

    Directory of Open Access Journals (Sweden)

    LAJEVARDI, S.M.

    2009-10-01

    Full Text Available Facial expression recognition has numerous applications, including psychological research, improved human-computer interaction, and sign language translation. A novel facial expression recognition system based on hybrid face regions (HFR) is investigated. The expression recognition system is fully automatic and consists of the following modules: face detection, feature extraction, optimal feature selection, and classification. The features are extracted from both the whole face image and face regions (eyes and mouth) using log-Gabor filters. Then, the most discriminative features are selected based on a mutual information criterion. The system can automatically recognize six expressions: anger, disgust, fear, happiness, sadness and surprise. The selected features are classified using the Naive Bayesian (NB) classifier. The proposed method has been extensively assessed using the Cohn-Kanade and JAFFE databases. The experiments have highlighted the efficiency of the proposed HFR method in enhancing the classification rate.
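
    The selection-and-classification stage described here (mutual-information ranking followed by a Naive Bayes classifier) can be sketched with scikit-learn; random features stand in for the log-Gabor responses from the whole face and the eye/mouth regions, and the value of k is arbitrary.

```python
# Sketch: mutual-information feature selection + Naive Bayes.
# Random placeholders stand in for log-Gabor filter responses.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 300))           # placeholder filter responses
y = rng.integers(0, 6, size=120)          # six expression classes

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=40),   # keep most informative features
    GaussianNB(),
).fit(X, y)
```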

  6. Robust Facial Expression Recognition via Sparse Representation and Multiple Gabor filters

    Directory of Open Access Journals (Sweden)

    Rania Salah El-Sayed

    2013-04-01

    Full Text Available Facial expression recognition plays an important role in human communication. It has become one of the most challenging tasks in the pattern recognition field, with many applications such as human-computer interaction, video surveillance, forensic applications, and criminal investigations. In this paper we propose a method for facial expression recognition (FER). The method provides new insights into two issues in FER: feature extraction and robustness. For feature extraction, we use a sparse representation approach after applying multiple Gabor filters, with a support vector machine (SVM) as the classifier. We conduct extensive experiments on a standard facial expression database to verify the performance of the proposed method, and we compare the results with other approaches.
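
    One common way to realize the sparse-representation step is to learn a dictionary and use each image's sparse code as its feature vector before the SVM; the paper's exact formulation may differ, and the random vectors below stand in for the multi-scale Gabor responses.

```python
# Sketch of one sparse-representation variant: dictionary learning
# produces sparse codes that feed an SVM. Gabor responses are mocked.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.svm import SVC

rng = np.random.default_rng(6)
gabor_feats = rng.normal(size=(150, 200))     # placeholder Gabor responses
y = rng.integers(0, 6, size=150)              # six expression classes

dico = MiniBatchDictionaryLearning(n_components=50, alpha=1.0, random_state=0)
codes = dico.fit_transform(gabor_feats)       # sparse code of each face

svm = SVC(kernel="linear").fit(codes, y)
```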

  7. Algorithms for Facial Expression Action Tracking and Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    李於俊; 汪增福

    2011-01-01

    For each frame in a facial video sequence, a static facial expression recognition algorithm is first proposed: facial expression motion parameters are extracted, and the expression is then classified according to physiological knowledge of facial expressions. To cope with insufficient knowledge, an algorithm combining static and dynamic facial expression recognition is proposed, in which facial actions and expressions are retrieved simultaneously within a statistical framework combining multi-class expressional Markov chains, particle filtering, and physiological expression knowledge. Experiments confirm the effectiveness of these algorithms.

  8. Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns

    Directory of Open Access Journals (Sweden)

    Faisal Ahmed

    2013-01-01

    Full Text Available Recognition of human expression from facial images is an interesting research area, which has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that can perform consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (GLTP), a discriminative local texture feature for representing facial expression. The proposed GLTP operator encodes the local texture of an image by computing the gradient magnitudes of the local neighborhood and quantizing those values in three discrimination levels. The location and occurrence information of the resulting micropatterns is then used as the face feature descriptor. The performance of the proposed method has been evaluated for the person-independent facial expression recognition task. Experiments with prototypic expression images from the Cohn-Kanade (CK) face expression database validate that the GLTP feature descriptor can effectively encode the facial texture and thus achieves improved recognition performance compared with some well-known appearance-based facial features.
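
    A rough sketch of the GLTP encoding as described: neighboring gradient magnitudes are compared against a threshold band around each pixel's value and quantized to three levels (+1, 0, -1), and the ternary code is split into "upper" and "lower" binary patterns before histogramming. The threshold value and the wrap-around border handling below are assumptions made for brevity.

```python
# Sketch of gradient local ternary patterns (GLTP). Threshold t and
# border handling (np.roll wraps at edges) are simplifying assumptions.
import numpy as np
from scipy import ndimage

def gltp_codes(gray, t=10.0):
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    grad = np.sqrt(gx**2 + gy**2)             # gradient-magnitude map
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros(grad.shape, dtype=np.int32)
    lower = np.zeros(grad.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = np.roll(np.roll(grad, dy, axis=0), dx, axis=1)
        upper += (neigh > grad + t).astype(np.int32) * (1 << bit)  # ternary +1
        lower += (neigh < grad - t).astype(np.int32) * (1 << bit)  # ternary -1
    return upper, lower

face = np.random.default_rng(7).random((64, 64)) * 255   # placeholder image
up, lo = gltp_codes(face)
hist = np.concatenate([np.bincount(up.ravel(), minlength=256),
                       np.bincount(lo.ravel(), minlength=256)])
```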

  9. Children's Representations of Facial Expression and Identity: Identity-Contingent Expression Aftereffects

    Science.gov (United States)

    Vida, Mark D.; Mondloch, Catherine J.

    2009-01-01

    This investigation used adaptation aftereffects to examine developmental changes in the perception of facial expressions. Previous studies have shown that adults' perceptions of ambiguous facial expressions are biased following adaptation to intense expressions. These expression aftereffects are strong when the adapting and probe expressions share…

  10. Binary pattern flavored feature extractors for Facial Expression Recognition: An overview

    DEFF Research Database (Denmark)

    Kristensen, Rasmus Lyngby; Tan, Zheng-Hua; Ma, Zhanyu

    2015-01-01

    This paper conducts a survey of modern binary pattern flavored feature extractors applied to the Facial Expression Recognition (FER) problem. In total, 26 different feature extractors are included, of which six are selected for in depth description. In addition, the paper unifies important FER...... terminology, describes open challenges, and provides recommendations to scientific evaluation of FER systems. Lastly, it studies the facial expression recognition accuracy and blur invariance of the Local Frequency Descriptor. The paper seeks to bring together disjointed studies, and the main contribution...

  11. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Vidakis, Nikolaos; Kavallakis, George; Triantafyllidis, Georgios

    2017-01-01

    This paper presents a scheme of creating an emotion index of cover song music video clips by recognizing and classifying facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms which are employed for expression recognition, along with the use...... of a neural network system using the features extracted by the SIFT algorithm. Also we support the need of this fusion of different expression recognition algorithms, because of the way that emotions are linked to facial expressions in music video clips....

  12. Facial Analysis: Looking at Biometric Recognition and Genome-Wide Association

    DEFF Research Database (Denmark)

    Fagertun, Jens

    The goal of this Ph.D. project is to present selected challenges regarding facial analysis within the fields of Human Biometrics and Human Genetics. In the course of the Ph.D. nine papers have been produced, eight of which have been included in this thesis. Three of the papers focus on face...... and gender recognition, where in the gender recognition papers the process of human perception of gender is analyzed and used to improve machine learning algorithms. One paper addresses the issues of variability in human annotation of facial landmarks, which most papers regard as a static “gold standard...

  13. Age, gender, and puberty influence the development of facial emotion recognition.

    Science.gov (United States)

    Lawrence, Kate; Campbell, Ruth; Skuse, David

    2015-01-01

    Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children's ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6-16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modeled these cross-sectional data in terms of competence in accurate recognition of the six emotions studied, when the positive correlation between emotion recognition and IQ was controlled. Significant linear trends were seen in children's ability to recognize facial expressions of happiness, surprise, fear, and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there is little or no change in accuracy over the age range 6-16 years; near-adult levels of competence are established by middle-childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers.

  14. Age, gender and puberty influence the development of facial emotion recognition

    Directory of Open Access Journals (Sweden)

    Kate eLawrence

    2015-06-01

    Full Text Available Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children’s ability to recognise simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6-16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modelled these cross-sectional data in terms of competence in accurate recognition of the six emotions studied, when the positive correlation between emotion recognition and IQ was controlled. Significant linear trends were seen in children’s ability to recognise facial expressions of happiness, surprise, fear and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there is little or no change in accuracy over the age range 6-16 years; near-adult levels of competence are established by middle-childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers.

  15. Age, gender, and puberty influence the development of facial emotion recognition

    Science.gov (United States)

    Lawrence, Kate; Campbell, Ruth; Skuse, David

    2015-01-01

    Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children’s ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6–16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modeled these cross-sectional data in terms of competence in accurate recognition of the six emotions studied, when the positive correlation between emotion recognition and IQ was controlled. Significant linear trends were seen in children’s ability to recognize facial expressions of happiness, surprise, fear, and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there is little or no change in accuracy over the age range 6–16 years; near-adult levels of competence are established by middle-childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers. PMID:26136697

  16. Recognition of facial expressions of emotion in panic disorder.

    Science.gov (United States)

    Cai, Liqiang; Chen, Wanzhen; Shen, Yuedi; Wang, Xinling; Wei, Lili; Zhang, Yingchun; Wang, Wei; Chen, Wei

    2012-01-01

    Whether patients with panic disorder behave differently or not when recognizing facial expressions of emotion remains unsettled. We tested 21 outpatients with panic disorder and 34 healthy subjects, with a photo set from the Matsumoto and Ekman Japanese and Caucasian facial expressions of emotion, which includes anger, contempt, disgust, fear, happiness, sadness, and surprise. Compared to the healthy subjects, patients showed lower accuracies when recognizing disgust and fear, but a higher accuracy when recognizing surprise. These results suggest that the altered specificity to these emotions reflects self-awareness mechanisms that prevent further emotional reactions in panic disorder patients. Copyright © 2012 S. Karger AG, Basel.

  17. Oxytocin promotes facial emotion recognition and amygdala reactivity in adults with asperger syndrome.

    Science.gov (United States)

    Domes, Gregor; Kumbier, Ekkehardt; Heinrichs, Markus; Herpertz, Sabine C

    2014-02-01

    The neuropeptide oxytocin has recently been shown to enhance eye gaze and emotion recognition in healthy men. Here, we report a randomized double-blind, placebo-controlled trial that examined the neural and behavioral effects of a single dose of intranasal oxytocin on emotion recognition in individuals with Asperger syndrome (AS), a clinical condition characterized by impaired eye gaze and facial emotion recognition. Using functional magnetic resonance imaging, we examined whether oxytocin would enhance emotion recognition from facial sections of the eye vs the mouth region and modulate regional activity in brain areas associated with face perception in both adults with AS, and a neurotypical control group. Intranasal administration of the neuropeptide oxytocin improved performance in a facial emotion recognition task in individuals with AS. This was linked to increased left amygdala reactivity in response to facial stimuli and increased activity in the neural network involved in social cognition. Our data suggest that the amygdala, together with functionally associated cortical areas mediate the positive effect of oxytocin on social cognitive functioning in AS.

  18. Brain correlates of musical and facial emotion recognition: evidence from the dementias.

    Science.gov (United States)

    Hsieh, S; Hornberger, M; Piguet, O; Hodges, J R

    2012-07-01

    The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided brain atrophy in areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also experience difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from classical or film repertoire but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), Alzheimer's disease (n = 12) and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities: unfamiliar musical tunes and unknown faces as well as volumetric MRI. Patients with SD were most impaired with the recognition of facial and musical emotions, particularly for negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotions by facial and musical stimuli but also indicate that the recognition of emotions from music draws upon brain regions that are associated with semantics in language. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Sex Differences in Facial, Prosodic, and Social Context Emotional Recognition in Early-Onset Schizophrenia

    Directory of Open Access Journals (Sweden)

    Julieta Ramos-Loyo

    2012-01-01

    Full Text Available The purpose of the present study was to determine sex differences in facial, prosodic, and social context emotional recognition in schizophrenia (SCH). Thirty-eight patients (SCH, 20 females) and 38 healthy controls (CON, 20 females) participated in the study. Clinical scales (BPRS and PANSS) and an Affective States Scale were applied, as well as tasks to evaluate facial, prosodic, and within a social context emotional recognition. SCH showed lower accuracy and longer response times than CON, but no significant sex differences were observed in either facial or prosody recognition. In social context emotions, however, females showed higher empathy than males with respect to happiness in both groups. SCH reported being more identified with sad films than CON and females more with fear than males. The results of this study confirm the deficits of emotional recognition in male and female patients with schizophrenia compared to healthy subjects. Sex differences were detected in relation to social context emotions and facial and prosodic recognition depending on age.

  20. EMOTION RECOGNITION OF VIRTUAL AGENTS FACIAL EXPRESSIONS: THE EFFECTS OF AGE AND EMOTION INTENSITY

    Science.gov (United States)

    Beer, Jenay M.; Fisk, Arthur D.; Rogers, Wendy A.

    2014-01-01

    People make determinations about the social characteristics of an agent (e.g., robot or virtual agent) by interpreting social cues displayed by the agent, such as facial expressions. Although a considerable amount of research has been conducted investigating age-related differences in emotion recognition of human faces (e.g., Sullivan & Ruffman, 2004), the effect of age on emotion identification of virtual agent facial expressions has been largely unexplored. Age-related differences in emotion recognition of facial expressions are an important factor to consider in the design of agents that may assist older adults in a recreational or healthcare setting. The purpose of the current research was to investigate whether age-related differences in facial emotion recognition can extend to emotion-expressive virtual agents. Younger and older adults performed a recognition task with a virtual agent expressing six basic emotions. Larger age-related differences were expected for virtual agents displaying negative emotions, such as anger, sadness, and fear. In fact, the results indicated that older adults showed a decrease in emotion recognition accuracy for a virtual agent's emotions of anger, fear, and happiness. PMID:25552896

  1. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  2. Sex Differences in Facial, Prosodic, and Social Context Emotional Recognition in Early-Onset Schizophrenia

    Science.gov (United States)

    Ramos-Loyo, Julieta; Mora-Reynoso, Leonor; Sánchez-Loyo, Luis Miguel; Medina-Hernández, Virginia

    2012-01-01

    The purpose of the present study was to determine sex differences in facial, prosodic, and social context emotional recognition in schizophrenia (SCH). Thirty-eight patients (SCH, 20 females) and 38 healthy controls (CON, 20 females) participated in the study. Clinical scales (BPRS and PANSS) and an Affective States Scale were applied, as well as tasks to evaluate facial, prosodic, and within a social context emotional recognition. SCH showed lower accuracy and longer response times than CON, but no significant sex differences were observed in either facial or prosody recognition. In social context emotions, however, females showed higher empathy than males with respect to happiness in both groups. SCH reported being more identified with sad films than CON and females more with fear than males. The results of this study confirm the deficits of emotional recognition in male and female patients with schizophrenia compared to healthy subjects. Sex differences were detected in relation to social context emotions and facial and prosodic recognition depending on age. PMID:22970365

  3. Recognition of children on age-different images: Facial morphology and age-stable features.

    Science.gov (United States)

    Caplova, Zuzana; Compassi, Valentina; Giancola, Silvio; Gibelli, Daniele M; Obertová, Zuzana; Poppa, Pasquale; Sala, Remo; Sforza, Chiarella; Cattaneo, Cristina

    2017-07-01

    The situation of missing children is one of the most emotional social issues worldwide. The search for and identification of missing children is often hampered, among other things, by the fact that the facial morphology of long-term missing children changes as they grow. Nowadays, the wide coverage by surveillance systems potentially provides image material for comparisons with images of missing children that may facilitate identification. The aim of the study was to identify whether facial features are stable in time and can be utilized for facial recognition by comparing facial images of children at different ages, as well as to test the possible use of moles in recognition. The study was divided into two phases: (1) morphological classification of facial features using an Anthropological Atlas; (2) an algorithm developed in MATLAB® R2014b for assessing the use of moles as age-stable features. The assessment of facial features by Anthropological Atlases showed high mismatch percentages among observers. On average, the mismatch percentages were lower for features describing shape than for those describing size. The nose tip cleft and the chin dimple showed the best agreement between observers regarding both categorization and stability over time. Using the position of moles as a reference point for recognition of the same person on age-different images seems to be a useful method in terms of objectivity, and it can be concluded that moles represent age-stable facial features that may be considered for preliminary recognition. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
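
    The mole-based comparison can be illustrated with a toy normalization: mole coordinates are expressed in interocular-distance units so the same mole can be compared across age-different images. The coordinates below are hypothetical, the faces are assumed upright and roughly aligned, and the study's actual MATLAB implementation is not shown.

```python
# Toy sketch of comparing mole positions across age-different images.
# All coordinates are hypothetical; rotation is not handled.
import numpy as np

def normalize_moles(moles, left_eye, right_eye):
    """Express mole positions in interocular-distance units."""
    origin = (left_eye + right_eye) / 2.0
    scale = np.linalg.norm(right_eye - left_eye)
    return (moles - origin) / scale

img1 = normalize_moles(np.array([[120.0, 200.0]]),
                       np.array([100.0, 140.0]), np.array([160.0, 140.0]))
img2 = normalize_moles(np.array([[245.0, 395.0]]),
                       np.array([200.0, 280.0]), np.array([320.0, 280.0]))

# Declare a match if the normalized positions are close enough.
match = np.linalg.norm(img1 - img2, axis=1) < 0.1
```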

  4. Recognition of Facial Expressions in Individuals with Elevated Levels of Depressive Symptoms: An Eye-Movement Study

    OpenAIRE

    2012-01-01

    Previous studies consistently reported abnormal recognition of facial expressions in depression. However, it is still not clear whether this abnormality is due to an enhanced or impaired ability to recognize facial expressions, and what underlying cognitive systems are involved. The present study aimed to examine how individuals with elevated levels of depressive symptoms differ from controls on facial expression recognition and to assess attention and information processing using eye trackin...

  5. High-Frequency Transcranial Random Noise Stimulation Enhances Perception of Facial Identity.

    Science.gov (United States)

    Romanska, Aleksandra; Rezlescu, Constantin; Susilo, Tirta; Duchaine, Bradley; Banissy, Michael J

    2015-11-01

    Recently, a number of studies have demonstrated the utility of transcranial current stimulation as a tool to facilitate a variety of cognitive and perceptual abilities. Few studies, though, have examined the utility of this approach for the processing of social information. Here, we conducted 2 experiments to explore whether a single session of high-frequency transcranial random noise stimulation (tRNS) targeted at lateral occipitotemporal cortices would enhance facial identity perception. In Experiment 1, participants received 20 min of active high-frequency tRNS or sham stimulation prior to completing the tasks examining facial identity perception or trustworthiness perception. Active high-frequency tRNS facilitated facial identity perception, but not trustworthiness perception. Experiment 2 assessed the spatial specificity of this effect by delivering 20 min of active high-frequency tRNS to lateral occipitotemporal cortices or sensorimotor cortices prior to participants completing the same facial identity perception task used in Experiment 1. High-frequency tRNS targeted at lateral occipitotemporal cortices enhanced performance relative to motor cortex stimulation. These findings show that high-frequency tRNS to lateral occipitotemporal cortices produces task-specific and site-specific enhancements in face perception.

  6. Non-suicidal self-injury and emotion regulation: a review on facial emotion recognition and facial mimicry

    Science.gov (United States)

    2013-01-01

    Non-suicidal self-injury (NSSI) is an increasingly prevalent, clinically significant behavior in adolescents and can be associated with serious consequences for the afflicted person. Emotion regulation is considered its most frequent function. Because the symptoms of NSSI are common and cause impairment, it will be included as a new disorder in Section 3 of the revised Diagnostic and Statistical Manual of Mental Disorders (DSM-5). So far, research has been conducted mostly with patients with borderline personality disorder (BPD) showing self-injurious behavior. Therefore, this review presents the current state of research regarding emotion regulation, NSSI, and BPD in adolescents. In particular, the authors focus on studies of facial emotion recognition and facial mimicry, as social interaction difficulties might result from failing to recognize emotions in facial expressions and from inadequate facial mimicry. Although clinical trials investigating the efficacy of psychological treatments for NSSI among adolescents are lacking, especially those targeting the capacity to cope with emotions, the clinical implications of improving implicit and explicit emotion regulation in the treatment of NSSI are discussed. Given the impact of emotion regulation skills on the effectiveness of psychotherapy, neurobiological and psychophysiological outcome variables should be included in clinical trials. PMID:23421964

  7. Poor Facial Affect Recognition among Boys with Duchenne Muscular Dystrophy

    Science.gov (United States)

    Hinton, V. J.; Fee, R. J.; De Vivo, D. C.; Goldstein, E.

    2007-01-01

    Children with Duchenne or Becker muscular dystrophy (MD) have delayed language and poor social skills and some meet criteria for Pervasive Developmental Disorder, yet they are identified by molecular, rather than behavioral, characteristics. To determine whether comprehension of facial affect is compromised in boys with MD, children were given a…

  9. Facial Expression Recognition Techniques Based on Bilinear Model

    Institute of Scientific and Technical Information of China (English)

    徐欢

    2014-01-01

    Aiming at current problems in facial expression recognition, we use data from the 3D expression database BU-3DFE to study point-cloud alignment of 3D facial expression data and to build bilinear models on the aligned data. We improve the recognition algorithm based on the bilinear model to form a new recognition and classification algorithm that reduces the weight of identity features in the computation, minimizing the influence of identity on the overall expression recognition process. The aim is to improve facial expression recognition results and ultimately achieve highly robust 3D facial expression recognition.
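
    For readers unfamiliar with bilinear models, the core idea is to factor each observation into an identity ("style") factor and an expression ("content") factor, so that classification can act on the expression factor with identity's influence reduced. Below is a minimal sketch of a symmetric bilinear factorization in the style of Tenenbaum and Freeman; it is not the paper's actual alignment or recognition algorithm, and all names and dimensions are illustrative assumptions.

```python
import numpy as np

def fit_bilinear(Y, n_id, n_expr, k_id=5, k_expr=4):
    """Symmetric bilinear factorization of vectorized 3D face data.
    Y has shape (n_id * n_expr, d): row (i, e) holds the aligned,
    flattened point cloud of identity i under expression e."""
    d = Y.shape[1]
    T = Y.reshape(n_id, n_expr, d)
    # Identity factors from the identity-mode unfolding of T.
    A = np.linalg.svd(T.reshape(n_id, -1), full_matrices=False)[0][:, :k_id]
    # Expression factors from the expression-mode unfolding of T.
    B = np.linalg.svd(T.transpose(1, 0, 2).reshape(n_expr, -1),
                      full_matrices=False)[0][:, :k_expr]
    # Core tensor coupling the two factor sets:
    # T[i, e] ~= sum over p, q of A[i, p] * W[p, q] * B[e, q].
    W = np.einsum('ip,ieD,eq->pqD', A, T, B)
    return A, B, W

# Expression recognition can then classify the expression factor of a
# new scan, with the identity factor's contribution reduced.
```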

  10. I feel your fear: shared touch between faces facilitates recognition of fearful facial expressions.

    Science.gov (United States)

    Maister, Lara; Tsiakkas, Eleni; Tsakiris, Manos

    2013-02-01

    Embodied simulation accounts of emotion recognition claim that we vicariously activate somatosensory representations to simulate, and eventually understand, how others feel. Interestingly, mirror-touch synesthetes, who experience touch when observing others being touched, show both enhanced somatosensory simulation and superior recognition of emotional facial expressions. We employed synchronous visuotactile stimulation to experimentally induce a similar experience of "mirror touch" in nonsynesthetic participants. Seeing someone else's face being touched at the same time as one's own face results in the "enfacement illusion," which has been previously shown to blur self-other boundaries. We demonstrate that the enfacement illusion also facilitates emotion recognition, and, importantly, this facilitatory effect is specific to fearful facial expressions. Shared synchronous multisensory experiences may experimentally facilitate somatosensory simulation mechanisms involved in the recognition of fearful emotional expressions.

  11. Age, gender and puberty influence the development of facial emotion recognition

    OpenAIRE

    Lawrence, Kate; Campbell, Ruth; Skuse, David

    2015-01-01

    Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children’s ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In or...

  12. Visual Scanning in the Recognition of Facial Affect in Traumatic Brain Injury

    Directory of Open Access Journals (Sweden)

    Suzane Vassallo

    2011-05-01

    We investigated the visual scanning strategy employed by a group of individuals with a severe traumatic brain injury (TBI) during a facial affect recognition task. Four males with a severe TBI were matched for age and gender with four healthy controls. Eye movements were recorded while pictures of static emotional faces were viewed (i.e., sad, happy, angry, disgusted, anxious, surprised). Groups were compared with respect to accuracy in labelling the emotional facial expression, reaction time, and the number and duration of fixations to internal (i.e., eyes + nose + mouth) and external (i.e., all remaining) regions of the stimulus. TBI participants demonstrated significantly reduced accuracy and increased latency in facial affect recognition. Further, they demonstrated no significant difference in the number or duration of fixations to internal versus external facial regions. Control participants, however, fixated more frequently and for longer periods of time upon internal facial features. Impaired visual scanning can contribute to inaccurate interpretation of facial expression, and this can disrupt interpersonal communication. The scanning strategy demonstrated by our TBI group appears more 'widespread' than that employed by their normal counterparts. Further work is required to elucidate the nature of the scanning strategy used and its potential variance in TBI.

  13. Fitting the Child's Mind to the World: Adaptive Norm-Based Coding of Facial Identity in 8-Year-Olds

    Science.gov (United States)

    Nishimura, Mayu; Maurer, Daphne; Jeffery, Linda; Pellicano, Elizabeth; Rhodes, Gillian

    2008-01-01

    In adults, facial identity is coded by opponent processes relative to an average face or norm, as evidenced by the face identity aftereffect: adapting to a face biases perception towards the opposite identity, so that a previously neutral face (e.g. the average) resembles the identity of the computationally opposite face. We investigated whether…

  14. Modulation of α power and functional connectivity during facial affect recognition.

    Science.gov (United States)

    Popov, Tzvetan; Miller, Gregory A; Rockstroh, Brigitte; Weisz, Nathan

    2013-04-03

    Research has linked oscillatory activity in the α frequency range, particularly in sensorimotor cortex, to processing of social actions. Results further suggest involvement of sensorimotor α in the processing of facial expressions, including affect. The sensorimotor face area may be critical for perception of emotional face expression, but the role it plays is unclear. The present study sought to clarify how oscillatory brain activity contributes to or reflects processing of facial affect during changes in facial expression. Neuromagnetic oscillatory brain activity was monitored while 30 volunteers viewed videos of human faces that changed their expression from neutral to fearful, neutral, or happy expressions. Induced changes in α power during the different morphs, source analysis, and graph-theoretic metrics served to identify the role of α power modulation and cross-regional coupling by means of phase synchrony during facial affect recognition. Changes from neutral to emotional faces were associated with a 10-15 Hz power increase localized in bilateral sensorimotor areas, together with occipital power decrease, preceding reported emotional expression recognition. Graph-theoretic analysis revealed that, in the course of a trial, the balance between sensorimotor power increase and decrease was associated with decreased and increased transregional connectedness as measured by node degree. Results suggest that modulations in α power facilitate early registration, with sensorimotor cortex including the sensorimotor face area largely functionally decoupled and thereby protected from additional, disruptive input and that subsequent α power decrease together with increased connectedness of sensorimotor areas facilitates successful facial affect recognition.

  15. The mysterious noh mask: contribution of multiple facial parts to the recognition of emotional expressions.

    Directory of Open Access Journals (Sweden)

    Hiromitsu Miyata

    BACKGROUND: A Noh mask, worn by expert actors performing in traditional Japanese Noh drama, is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. METHODOLOGY/PRINCIPAL FINDINGS: In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward-tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward-tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. Facial images having the mouth of an upward/downward-tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. CONCLUSIONS/SIGNIFICANCE: The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically-driven factors over the…

  16. Maximized Posteriori Attributes Selection from Facial Salient Landmarks for Face Recognition

    CERN Document Server

    Gupta, Phalguni; Sing, Jamuna Kanta; Tistarelli, Massimo

    2010-01-01

    This paper presents a robust and dynamic face recognition technique based on the extraction and matching of probabilistic graphs drawn on SIFT features from independent face areas. The face matching strategy is based on matching individual salient facial graphs characterized by SIFT features connected to facial landmarks such as the eyes and the mouth. In order to reduce face matching errors, Dempster-Shafer decision theory is applied to fuse the individual matching scores obtained from each pair of salient facial features. The proposed algorithm is evaluated with the ORL and IITK face databases. The experimental results demonstrate the effectiveness and potential of the proposed face recognition technique, including in cases of partially occluded faces.
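
    The fusion step named in this abstract, Dempster-Shafer combination of per-feature matching scores, can be illustrated compactly. The sketch below implements Dempster's rule of combination over the two-hypothesis frame {match, nonmatch}; how the paper converts SIFT matching scores into mass functions is not specified in the abstract, so the masses shown are hypothetical.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over the frame {match, nonmatch}.
    Each mass function is a dict with keys 'match', 'nonmatch' and
    'theta' (mass on the whole frame, i.e. ignorance); values sum to 1."""
    # Conflict: mass jointly assigned to contradictory singletons.
    k = m1['match'] * m2['nonmatch'] + m1['nonmatch'] * m2['match']
    if k >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    norm = 1.0 - k
    return {
        'match': (m1['match'] * m2['match'] + m1['match'] * m2['theta']
                  + m1['theta'] * m2['match']) / norm,
        'nonmatch': (m1['nonmatch'] * m2['nonmatch']
                     + m1['nonmatch'] * m2['theta']
                     + m1['theta'] * m2['nonmatch']) / norm,
        'theta': m1['theta'] * m2['theta'] / norm,
    }

# Hypothetical masses derived from eye-region and mouth-region matchers.
eye = {'match': 0.6, 'nonmatch': 0.1, 'theta': 0.3}
mouth = {'match': 0.5, 'nonmatch': 0.2, 'theta': 0.3}
print(dempster_combine(eye, mouth))
```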

  17. 3D facial expression recognition based on histograms of surface differential quantities

    KAUST Repository

    Li, Huibin

    2011-01-01

    3D face models accurately capture facial surfaces, making precise description of facial activities possible. In this paper, we present a novel mesh-based method for 3D facial expression recognition using two local shape descriptors. To characterize the shape of the local neighborhood of facial landmarks, we calculate weighted statistical distributions of surface differential quantities, including a histogram of mesh gradient (HoG) and a histogram of shape index (HoS). A curvature estimation method based on normal cycle theory is employed on the 3D face models, along with the common cubic-fitting curvature estimation method for comparison. Based on the basic fact that different expressions involve different local shape deformations, the SVM classifier with both linear and RBF kernels outperforms state-of-the-art results on the subset of the BU-3DFE database with the same experimental setting. © 2011 Springer-Verlag.
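
    Of the two descriptors, the histogram of shape index (HoS) is easy to make concrete. The sketch below uses the Koenderink-van Doorn shape index rescaled to [0, 1] (sign conventions vary across papers), computed from per-vertex principal curvatures and accumulated into a weighted histogram; the weighting scheme and bin count are assumptions, and the curvature estimation itself (normal cycle theory or cubic fitting) is not shown.

```python
import numpy as np

def shape_index(k1, k2):
    """Koenderink-van Doorn shape index rescaled to [0, 1], computed
    from principal curvatures with k1 >= k2. Using arctan2 keeps
    umbilic points (k1 == k2) well defined."""
    return 0.5 - np.arctan2(k1 + k2, k1 - k2) / np.pi

def hos_descriptor(k1, k2, weights=None, bins=16):
    """Histogram of shape index (HoS) over a landmark neighborhood,
    optionally weighted (e.g., by per-vertex area)."""
    si = shape_index(np.asarray(k1, float), np.asarray(k2, float))
    hist, _ = np.histogram(si, bins=bins, range=(0.0, 1.0), weights=weights)
    hist = hist.astype(float)
    return hist / (hist.sum() + 1e-12)
```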

  18. Psychopathic traits in adolescents and recognition of emotion in facial expressions

    Directory of Open Access Journals (Sweden)

    Silvio José Lemos Vasconcellos

    2014-12-01

    Recent studies have investigated the ability of adult psychopaths and children with psychopathy traits to identify specific facial expressions of emotion. Conclusive results have not yet been found regarding whether psychopathic traits are associated with a specific deficit in the ability to identify negative emotions such as fear and sadness. This study compared 20 adolescents with psychopathic traits and 21 adolescents without these traits in terms of their ability to recognize facial expressions of emotion, using facial stimuli presented for 200 milliseconds, 500 milliseconds, and 1 second. Analyses indicated significant differences between the two groups' performances only for fear, and only at the 200 ms exposure. This finding is consistent with findings from other studies in the field and suggests that controlling the duration of exposure to affective stimuli in future studies may help to clarify the mechanisms underlying the facial affect recognition deficits of individuals with psychopathic traits.

  19. Facial Affect Recognition Training Through Telepractice: Two Case Studies of Individuals with Chronic Traumatic Brain Injury

    Directory of Open Access Journals (Sweden)

    John Williamson

    2015-07-01

    The use of a modified Facial Affect Recognition (FAR) training to identify emotions was investigated in two case studies of adults with moderate to severe chronic (> five years) traumatic brain injury (TBI). The modified FAR training was administered via telepractice to target social communication skills. Therapy consisted of identifying emotions through static facial expressions, personally reflecting on those emotions, and identifying sarcasm and emotions within social stories and role-play. Pre- and post-therapy measures included static facial photos for identifying emotion and the Prutting and Kirchner Pragmatic Protocol for social communication. Both participants with chronic TBI showed gains in identifying facial emotions from the static photos.

  20. Does Facial Expression Recognition Provide a Toehold for the Development of Emotion Understanding?

    Science.gov (United States)

    Strand, Paul S.; Downs, Andrew; Barbosa-Leiker, Celestina

    2016-01-01

    The authors explored predictions from basic emotion theory (BET) that facial emotion expression recognition skills are insular with respect to their own development, and yet foundational to the development of emotional perspective-taking skills. Participants included 417 preschool children for whom estimates of these 2 emotion understanding…

  1. Externalizing and Internalizing Symptoms Moderate Longitudinal Patterns of Facial Emotion Recognition in Autism Spectrum Disorder

    Science.gov (United States)

    Rosen, Tamara E.; Lerner, Matthew D.

    2016-01-01

    Facial emotion recognition (FER) is thought to be a key deficit domain in autism spectrum disorder (ASD). However, the extant literature is based solely on cross-sectional studies; thus, little is known about even short-term intra-individual dynamics of FER in ASD over time. The present study sought to examine trajectories of FER in ASD youth over…

  2. Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?

    Science.gov (United States)

    Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia

    2011-01-01

    Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…

  3. Face Processing and Facial Emotion Recognition in Adults with Down Syndrome

    Science.gov (United States)

    Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial

    2008-01-01

    Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…

  4. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    Science.gov (United States)

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…

  5. Static and dynamic 3D facial expression recognition: A comprehensive survey

    NARCIS (Netherlands)

    Sandbach, G.; Zafeiriou, S.; Pantic, Maja; Yin, Lijun

    2012-01-01

    Automatic facial expression recognition constitutes an active research field due to the latest advances in computing technology that make the user's experience a clear priority. The majority of work conducted in this area involves 2D imagery, despite the problems this presents due to inherent pose…

  10. Facial Emotion Recognition in Children with High Functioning Autism and Children with Social Phobia

    Science.gov (United States)

    Wong, Nina; Beidel, Deborah C.; Sarver, Dustin E.; Sims, Valerie

    2012-01-01

    Recognizing facial affect is essential for effective social functioning. This study examines emotion recognition abilities in children aged 7-13 years with High Functioning Autism (HFA = 19), Social Phobia (SP = 17), or typical development (TD = 21). Findings indicate that all children identified certain emotions more quickly (e.g., happy [less…

  13. A novel dataset for real-life evaluation of facial expression recognition methodologies

    NARCIS (Netherlands)

    Siddiqi, Muhammad Hameed; Ali, Maqbool; Idris, Muhammad; Banos, Oresti; Lee, Sungyoung; Choo, Hyunseung

    2016-01-01

    One limitation seen among most of the previous methods is that they were evaluated under settings that are far from real-life scenarios. The reason is that the existing facial expression recognition (FER) datasets are mostly pose-based and assume a predefined setup. The expressions in these datasets…

  15. The Effect of Gender and Age Differences on the Recognition of Emotions from Facial Expressions

    DEFF Research Database (Denmark)

    Schneevogt, Daniela; Paggio, Patrizia

    2016-01-01

    We conducted an emotion recognition task followed by two stereotype questionnaires with participants of different genders and age groups. While recent findings (Krems et al., 2015) suggest that women are biased to see anger in neutral facial expressions posed by females, in our sample both genders assign higher…

  16. Enhanced facial recognition for thermal imagery using polarimetric imaging.

    Science.gov (United States)

    Gurton, Kristan P; Yuffa, Alex J; Videen, Gorden W

    2014-07-01

    We present a series of long-wave-infrared (LWIR) polarimetric-based thermal images of facial profiles in which polarization-state information of the image-forming radiance is retained and displayed. The resultant polarimetric images show enhanced facial features, additional texture, and details that are not present in corresponding conventional thermal imagery. It has been generally thought that conventional thermal imagery (MidIR or LWIR) could not produce the detailed spatial information required for reliable human identification due to the so-called "ghosting" effect often seen in thermal imagery of human subjects. By using polarimetric information, we are able to extract subtle surface features of the human face, thus improving subject identification. Polarimetric image sets considered include the conventional thermal intensity image, S0, the two Stokes images, S1 and S2, and a Stokes image product called the degree-of-linear-polarization image.
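
    The Stokes image products named here have standard definitions, which the sketch below implements: the degree of linear polarization is sqrt(S1^2 + S2^2) / S0, computed per pixel. The small epsilon guard and the additional angle-of-polarization product are this sketch's additions, not something stated in the abstract.

```python
import numpy as np

def polarimetric_products(s0, s1, s2):
    """Derive the degree-of-linear-polarization (DoLP) image and the
    angle-of-polarization (AoP) image from the three measured Stokes
    images. s0 is the conventional intensity image; s1 and s2 carry
    the linear-polarization state."""
    eps = 1e-12  # avoid division by zero in dark pixels
    dolp = np.sqrt(s1**2 + s2**2) / (s0 + eps)
    aop = 0.5 * np.arctan2(s2, s1)  # radians
    return dolp, aop
```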

  17. Composite Artistry Meets Facial Recognition Technology: Exploring the Use of Facial Recognition Technology to Identify Composite Images

    Science.gov (United States)

    2011-09-01

    Composite drawings containing suspects depicted with hats had to be modified to remove the headwear. This headwear caused problems with the… program's ability to distinguish a facial feature from the headwear. While this information was beneficial for the consumption of composite images for the…

  18. Recognition of Facial Expressions in Individuals with Elevated Levels of Depressive Symptoms: An Eye-Movement Study

    Directory of Open Access Journals (Sweden)

    Lingdan Wu

    2012-01-01

    Previous studies consistently reported abnormal recognition of facial expressions in depression. However, it is still not clear whether this abnormality is due to an enhanced or impaired ability to recognize facial expressions, and what underlying cognitive systems are involved. The present study aimed to examine how individuals with elevated levels of depressive symptoms differ from controls on facial expression recognition and to assess attention and information processing using eye tracking. Forty participants (18 with elevated depressive symptoms) were instructed to label facial expressions depicting one of seven emotions. Results showed that the high-depression group, in comparison with the low-depression group, recognized facial expressions faster and with comparable accuracy. Furthermore, the high-depression group demonstrated a greater leftwards attention bias, which has been argued to be an indicator of hyperactivation of the right hemisphere during facial expression recognition.

  19. Detecting facial emotion recognition deficits in schizophrenia using dynamic stimuli of varying intensities.

    Science.gov (United States)

    Hargreaves, A; Mothersill, O; Anderson, M; Lawless, S; Corvin, A; Donohoe, G

    2016-10-28

    Deficits in facial emotion recognition have been associated with functional impairments in patients with Schizophrenia (SZ). Whilst a strong ecological argument has been made for the use of both dynamic facial expressions and varied emotion intensities in research, SZ emotion recognition studies to date have primarily used static stimuli of a singular, 100%, intensity of emotion. To address this issue, the present study aimed to investigate accuracy of emotion recognition amongst patients with SZ and healthy subjects using dynamic facial emotion stimuli of varying intensities. To this end an emotion recognition task (ERT) designed by Montagne (2007) was adapted and employed. 47 patients with a DSM-IV diagnosis of SZ and 51 healthy participants were assessed for emotion recognition. Results of the ERT were tested for correlation with performance in areas of cognitive ability typically found to be impaired in psychosis, including IQ, memory, attention and social cognition. Patients were found to perform less well than healthy participants at recognising each of the 6 emotions analysed. Surprisingly, however, groups did not differ in terms of impact of emotion intensity on recognition accuracy; for both groups higher intensity levels predicted greater accuracy, but no significant interaction between diagnosis and emotional intensity was found for any of the 6 emotions. Accuracy of emotion recognition was, however, more strongly correlated with cognition in the patient cohort. Whilst this study demonstrates the feasibility of using ecologically valid dynamic stimuli in the study of emotion recognition accuracy, varying the intensity of the emotion displayed was not demonstrated to impact patients and healthy participants differentially, and thus may not be a necessary variable to include in emotion recognition research. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  20. Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism?

    Science.gov (United States)

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J; Kilner, James

    2017-01-01

    Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups of participants. Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. For emotion recognition, PD reported lower scores than HC for the Ekman total score (p…) and for the happiness, fear, anger and sadness emotion sub-scores (p…). In the emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger (all p…). There was a significant positive correlation between emotion facial recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). PD patients showed difficulties in recognizing emotional facial…

  2. Local Illumination Normalization and Facial Feature Point Selection for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Song HAN

    2013-03-01

    Face recognition systems must be robust to variation in factors such as facial expression, illumination, head pose and aging. In particular, robustness against illumination variation is one of the most important problems to be solved for the practical use of face recognition systems. The Gabor wavelet is widely used in face detection and recognition because it can simulate the function of the human visual system. In this paper, we propose a method for extracting Gabor wavelet features that is stable under local illumination variation, and we present experimental results demonstrating its effectiveness.
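
    As a rough illustration of the approach described, the sketch below builds the real part of a Gabor kernel and applies a simple local contrast normalization, one common way to make Gabor responses less sensitive to local illumination. The paper's specific normalization is not given in the abstract, so the parameter values and the normalization choice here are assumptions.

```python
import numpy as np

def gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0, gamma=0.5):
    """Real part of a Gabor kernel; theta is the filter orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def local_contrast_normalize(patch, eps=1e-6):
    """Subtract the local mean and divide by the local standard
    deviation, so the subsequent Gabor response is less sensitive to
    local illumination changes (one simple choice among several)."""
    return (patch - patch.mean()) / (patch.std() + eps)
```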

  3. Disrupting pre-SMA activity impairs facial happiness recognition: an event-related TMS study.

    Science.gov (United States)

    Rochas, Vincent; Gelmini, Lauriane; Krolak-Salmon, Pierre; Poulet, Emmanuel; Saoud, Mohamed; Brunelin, Jerome; Bediou, Benoit

    2013-07-01

    It has been suggested that the left pre-supplementary motor area (pre-SMA) could be implicated in facial emotion expression and recognition, especially for laughter/happiness. To test this hypothesis, in a single-blind, randomized crossover study, we investigated the impact of transcranial magnetic stimulation (TMS) on performances of 18 healthy participants during a facial emotion recognition task. Using a neuronavigation system based on T1-weighted magnetic resonance imaging of each participant, TMS (5 pulses, 10 Hz) was delivered over the pre-SMA or the vertex (control condition) in an event-related fashion after the presentation of happy, fear, and angry faces. Compared with performances during vertex stimulation, we observed that TMS applied over the left pre-SMA specifically disrupted facial happiness recognition (FHR). No difference was observed between the 2 conditions neither for fear and anger recognition nor for reaction times (RT). Thus, interfering with pre-SMA activity with event-related TMS after stimulus presentation produced a selective impairment in the recognition of happy faces. These findings provide new insights into the functional implication of the pre-SMA in FHR, which may rely on the mirror properties of pre-SMA neurons.

  4. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    Science.gov (United States)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction, and expression recognition. The recognition accuracy of FER depends heavily on how well the selected features represent the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy (mRMR) geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
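
    The feature-selection step can be sketched as the classic mRMR greedy loop with the mutual-information difference criterion: at each step, pick the feature with the highest relevance to the class label minus its average redundancy with the features already chosen. The implementation below is a generic version of that criterion, not necessarily the paper's exact variant; the discretization scheme and bin count are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

def mrmr(X, y, k, n_bins=8):
    """Greedy maximum-relevance minimum-redundancy selection of k
    features. Relevance = mutual information with the class label;
    redundancy = mean mutual information with already-chosen features."""
    n_features = X.shape[1]
    # Discretize each geometric feature for the pairwise MI terms.
    Xd = np.stack(
        [np.digitize(X[:, j],
                     np.histogram_bin_edges(X[:, j], bins=n_bins)[1:-1])
         for j in range(n_features)], axis=1)
    relevance = mutual_info_classif(X, y)
    selected, remaining = [], list(range(n_features))
    while len(selected) < k and remaining:
        def score(j):
            if not selected:
                return relevance[j]
            redundancy = np.mean([mutual_info_score(Xd[:, j], Xd[:, s])
                                  for s in selected])
            return relevance[j] - redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```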

  5. Individual Differences in the Ability to Recognise Facial Identity Are Associated with Social Anxiety

    Science.gov (United States)

    Davis, Joshua M.; McKone, Elinor; Dennett, Hugh; O'Connor, Kirsty B.; O'Kearney, Richard; Palermo, Romina

    2011-01-01

    Previous research has been concerned with the relationship between social anxiety and the recognition of face expression but the question of whether there is a relationship between social anxiety and the recognition of face identity has been neglected. Here, we report the first evidence that social anxiety is associated with recognition of face identity, across the population range of individual differences in recognition abilities. Results showed poorer face identity recognition (on the Cambridge Face Memory Test) was correlated with a small but significant increase in social anxiety (Social Interaction Anxiety Scale) but not general anxiety (State-Trait Anxiety Inventory). The correlation was also independent of general visual memory (Cambridge Car Memory Test) and IQ. Theoretically, the correlation could arise because correct identification of people, typically achieved via faces, is important for successful social interactions, extending evidence that individuals with clinical-level deficits in face identity recognition (prosopagnosia) often report social stress due to their inability to recognise others. Equally, the relationship could arise if social anxiety causes reduced exposure or attention to people's faces, and thus to poor development of face recognition mechanisms. PMID:22194916

  7. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    Science.gov (United States)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized image processing task that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system for video-sequence images, dedicated to identifying persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed feature extraction on the eye and nose images separately, then used a multi-layer perceptron classifier. Compared to the whole face, the simulation results favor the facial parts in terms of memory capacity and recognition rate (99.41% for the eye region, 98.16% for the nose region, and 97.25% for the whole face).
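
    The ACPDL2D technique is described as a combination of two-dimensional PCA and two-dimensional LDA; the 2DPCA half, which operates on image matrices directly rather than on flattened vectors, is sketched below. The 2DLDA stage and the MLP classifier are omitted, and the function layout is an assumption.

```python
import numpy as np

def twod_pca(images, n_components):
    """Two-dimensional PCA (Yang et al., 2004): works directly on
    image matrices instead of flattened vectors. Returns the mean
    image and a projection matrix whose columns are the leading
    eigenvectors of the image scatter matrix; an (h, w) image is
    reduced to an (h, n_components) feature matrix."""
    A = np.asarray(images, dtype=float)          # (n, h, w)
    mean = A.mean(axis=0)
    centered = A - mean
    # Image scatter matrix: average of (Ai - mean)^T (Ai - mean).
    G = np.einsum('nij,nik->jk', centered, centered) / len(A)
    eigvals, eigvecs = np.linalg.eigh(G)         # ascending order
    X = eigvecs[:, ::-1][:, :n_components]       # top eigenvectors
    return mean, X

# Feature matrix for one eye or nose patch: (patch - mean) @ X
```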

  8. Effects of Orientation on Recognition of Facial Affect

    Science.gov (United States)

    Cohen, M. M.; Mealey, J. B.; Hargens, Alan R. (Technical Monitor)

    1997-01-01

    The ability to discriminate facial features is often degraded when the orientation of the face and/or the observer is altered. Previous studies have shown that gross distortions of facial features can go unrecognized when the image of the face is inverted, as exemplified by the 'Margaret Thatcher' effect. This study examines how quickly erect and supine observers can distinguish between smiling and frowning faces presented at various orientations. The effects of orientation are of particular interest in space, where astronauts frequently view one another in orientations other than the upright. Sixteen observers viewed individual facial images of six people on a computer screen; on a given trial, the image was either smiling or frowning. Each image was viewed when it was erect and when it was rotated (rolled) by 45, 90, 135, 180, 225, and 270 degrees about the line of sight. The observers were required to respond as rapidly and accurately as possible to identify whether the face presented was smiling or frowning. Measures of reaction time were obtained when the observers were both upright and supine. Analyses of variance revealed that mean reaction time, which increased with stimulus rotation (F = 18.54, df 7/15, p < 0.001), was 22% longer when the faces were inverted than when they were erect, but that the orientation of the observer had no significant effect on reaction time (F = 1.07, df 1/15, p > .30). These data strongly suggest that the orientation of the image of a face on the observer's retina, but not its orientation with respect to gravity, is important in identifying the expression on the face.

  9. Shy Children Are Less Sensitive to Some Cues to Facial Recognition

    Science.gov (United States)

    Brunet, Paul M.; Mondloch, Catherine J.; Schmidt, Louis A.

    2010-01-01

    Temperamental shyness in children is characterized by avoidance of faces and eye contact, beginning in infancy. We conducted two studies to determine whether temperamental shyness was associated with deficits in sensitivity to some cues to facial identity. In Study 1, 40 typically developing 10-year-old children made same/different judgments about…

  11. Recognition of the Cornelia de Lange syndrome phenotype with facial dysmorphology novel analysis.

    Science.gov (United States)

    Basel-Vanagaite, L; Wolf, L; Orin, M; Larizza, L; Gervasini, C; Krantz, I D; Deardoff, M A

    2016-05-01

    Facial analysis systems are becoming available to healthcare providers to aid in the recognition of dysmorphic phenotypes associated with a multitude of genetic syndromes. These technologies automatically detect facial points and extract various measurements from images to recognize dysmorphic features and evaluate similarities to known facial patterns (gestalts). To evaluate such systems' usefulness for supporting the clinical practice of healthcare professionals, the recognition accuracy of the Cornelia de Lange syndrome (CdLS) phenotype was examined with FDNA's automated facial dysmorphology novel analysis (FDNA) technology. In the first experiment, 2D facial images of CdLS patients with either an NIPBL or SMC1A gene mutation as well as non-CdLS patients which were assessed by dysmorphologists in a previous study were evaluated by the FDNA technology; the average detection rate of experts was 77% while the system's detection rate was 87%. In the second study, when a new set of NIPBL, SMC1A and non-CdLS patient photos was evaluated, the detection rate increased to 94%. The results from both studies indicated that the system's detection rate was comparable to that of dysmorphology experts. Therefore, utilizing such technologies may be a useful tool in a clinical setting. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. A Micro-GA Embedded PSO Feature Selection Approach to Intelligent Facial Emotion Recognition.

    Science.gov (United States)

    Mistry, Kamlesh; Zhang, Li; Neoh, Siew Chin; Lim, Chee Peng; Fielding, Ben

    2017-06-01

    This paper proposes a facial expression recognition system using evolutionary particle swarm optimization (PSO)-based feature optimization. The system first employs modified local binary patterns, which conduct horizontal and vertical neighborhood pixel comparison, to generate a discriminative initial facial representation. Then, a PSO variant embedded with the concept of a micro genetic algorithm (mGA), called mGA-embedded PSO, is proposed to perform feature optimization. It incorporates a nonreplaceable memory, a small-population secondary swarm, a new velocity updating strategy, a subdimension-based in-depth local facial feature search, and a cooperation of local exploitation and global exploration search mechanism to mitigate the premature convergence problem of conventional PSO. Multiple classifiers are used for recognizing seven facial expressions. Based on a comprehensive study using within- and cross-domain images from the extended Cohn Kanade and MMI benchmark databases, respectively, the empirical results indicate that our proposed system outperforms other state-of-the-art PSO variants, conventional PSO, classical GA, and other related facial expression recognition models reported in the literature by a significant margin.
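
    The proposed optimizer layers several mechanisms on top of ordinary binary PSO. For orientation, the sketch below shows only that plain binary-PSO baseline for feature-mask selection, using the usual sigmoid transfer function; the mGA-embedded additions (nonreplaceable memory, small secondary swarm, modified velocity update) are not reproduced, and the toy fitness function is purely illustrative.

```python
import numpy as np

def binary_pso(fitness, n_features, n_particles=20, iters=50,
               w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain binary PSO for feature-mask selection; `fitness` maps a
    0/1 mask to a score such as cross-validated accuracy."""
    rng = np.random.default_rng(seed)
    X = (rng.random((n_particles, n_features)) < 0.5).astype(int)
    V = rng.normal(0.0, 1.0, (n_particles, n_features))
    pbest = X.copy()
    pbest_fit = np.array([fitness(x) for x in X])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, n_features))
        r2 = rng.random((n_particles, n_features))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        # Sigmoid transfer function turns velocities into bit flips.
        prob = 1.0 / (1.0 + np.exp(-V))
        X = (rng.random((n_particles, n_features)) < prob).astype(int)
        fit = np.array([fitness(x) for x in X])
        better = fit > pbest_fit
        pbest[better] = X[better]
        pbest_fit[better] = fit[better]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest

# Toy run: the fitness rewards agreement with a hidden "useful" mask.
useful = np.arange(32) < 8
best_mask = binary_pso(lambda m: int(np.sum(m == useful)), 32)
```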

  13. Surface Electromyography-Based Facial Expression Recognition in Bi-Polar Configuration

    Directory of Open Access Journals (Sweden)

    Mahyar Hamedi

    2011-01-01

    Problem statement: Facial expression recognition has improved recently and has become a significant issue in diagnostic and medical fields, particularly in assistive technology and rehabilitation. Despite their usefulness, existing approaches face problems such as peripheral conditions, lighting, contrast, and the quality of videos and images. Approach: The Facial Action Coding System (FACS) and other image- or video-based methods have been applied. This study proposed two methods for recognizing 8 facial expressions, namely natural (rest), happiness in three conditions, anger, rage, gesturing 'a' (as in the word 'apple'), and gesturing 'no' by pulling up the eyebrows, based on three SEMG channels in bipolar configuration. Raw signals were processed in three sequential steps (filtration, feature extraction, and active feature selection). The processed data were fed into Support Vector Machine and Fuzzy C-Means classifiers to be classified into the 8 facial expression groups. Results: Recognition rates of 91.8% and 80.4% were achieved for FCM and SVM, respectively. Conclusion: The results confirmed sufficient accuracy and power in this field of study, and FCM showed better ability and performance than SVM. It is expected that, in the near future, new approaches based on the frequency bandwidth of each facial gesture will provide better results.
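
    The three processing steps named in the abstract (filtration, feature extraction, selection) map onto a short pipeline. The sketch below covers the first two: a band-pass filter followed by windowed root-mean-square features. The sampling rate, cutoff frequencies, and window sizes are not given in the abstract and are assumptions here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_semg(raw, fs=1000.0, low=20.0, high=450.0, order=4):
    """Band-pass filter one raw SEMG channel (a typical facial-EMG
    band; the study's exact cutoffs are not stated in the abstract)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype='band')
    return filtfilt(b, a, raw)

def rms_features(signal, fs=1000.0, win=0.2, step=0.1):
    """Root-mean-square features over sliding windows; one value per
    window, later fed to an FCM or SVM classifier."""
    n, s = int(win * fs), int(step * fs)
    return np.array([np.sqrt(np.mean(signal[i:i + n] ** 2))
                     for i in range(0, len(signal) - n + 1, s)])
```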

  14. The recognition of facial expressions of emotion in Alzheimer's disease: a review of findings.

    Science.gov (United States)

    McLellan, Tracey; Johnston, Lucy; Dalrymple-Alford, John; Porter, Richard

    2008-10-01

    To provide a selective review of the literature on the recognition of facial expressions of emotion in Alzheimer's disease (AD), to evaluate whether these patients show variation in their ability to recognise different emotions and whether any such impairments are instead because of a general decline in cognition. A narrative review based on relevant articles identified from PubMed and PsycInfo searches from 1987 to 2007 using keywords 'Alzheimer's', 'facial expression recognition', 'dementia' and 'emotion processing'. Although the literature is as yet limited, with several methodological inconsistencies, AD patients show poorer recognition of facial expressions, with particular difficulty with sad expressions. It is unclear whether poorer performance reflects the general cognitive decline and/or verbal or spatial deficits associated with AD or whether the deficits reflect specific neuropathology. This under-represented field of study may help to extend our understanding of social functioning in AD. Future work requires more detailed analyses of ancillary cognitive measures, more ecologically valid facial displays of emotion and a reference situation that more closely approximates an actual social interaction.

  15. Impaired recognition of prosody and subtle emotional facial expressions in Parkinson's disease.

    Science.gov (United States)

    Buxton, Sharon L; MacDonald, Lorraine; Tippett, Lynette J

    2013-04-01

    Accurately recognizing the emotional states of others is crucial for successful social interactions and social relationships. Individuals with Parkinson's disease (PD) have shown deficits in emotional recognition abilities although findings have been inconsistent. This study examined recognition of emotions from prosody and from facial emotional expressions with three levels of subtlety, in 30 individuals with PD (without dementia) and 30 control participants. The PD group were impaired on the prosody task, with no differential impairments in specific emotions. PD participants were also impaired at recognizing facial expressions of emotion, with a significant association between how well they could recognize emotions in the two modalities, even after controlling for disease severity. When recognizing facial expressions, the PD group had no difficulty identifying prototypical Ekman and Friesen (1976) emotional faces, but were poorer than controls at recognizing the moderate and difficult levels of subtle expressions. They were differentially impaired at recognizing moderately subtle expressions of disgust and sad expressions at the difficult level. Notably, however, they were impaired at recognizing happy expressions at both levels of subtlety. Furthermore how well PD participants identified happy expressions conveyed by either face or voice was strongly related to accuracy in the other modality. This suggests dysfunction of overlapping components of the circuitry processing happy expressions in PD. This study demonstrates the usefulness of including subtle expressions of emotion, likely to be encountered in everyday life, when assessing recognition of facial expressions.

  16. LWIR polarimetry for enhanced facial recognition in thermal imagery

    Science.gov (United States)

    Gurton, Kristan P.; Yuffa, Alex J.; Videen, Gorden

    2014-05-01

    We present a series of long-wave-infrared (LWIR) polarimetric-based thermal images of facial profiles in which polarization-state information of the image-forming radiance is retained and displayed. The resultant polarimetric images show enhanced facial features, additional texture, and details that are not present in the corresponding conventional thermal imagery. It has been generally thought that conventional thermal imagery (MidIR or LWIR) could not produce the detailed spatial information required for reliable human identification due to the so-called "ghosting" effect often seen in thermal imagery of human subjects. By using polarimetric information, we are able to extract subtle surface features of the human face, thus improving subject identification. The considered polarimetric image sets include the conventional thermal intensity image, S0, the two Stokes images, S1 and S2, and a Stokes image product called the degree-of-linear-polarization (DoLP) image. Finally, Stokes imagery is combined with Fresnel relations to extract additional 3D surface information.

  17. Human facial neural activities and gesture recognition for machine-interfacing applications.

    Science.gov (United States)

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands that can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures with a different number of gestures in each group are made from the existing facial gestures. Finally, all combinations are trained and classified by a Fuzzy c-means classifier. In conclusion, combinations with the highest recognition accuracy in each group are chosen. An average accuracy >90% for the chosen combinations proved their ability to be used as command controllers.
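
    As a rough illustration of the described pipeline (band-pass filtering followed by root-mean-square features), here is a minimal sketch; the 20-450 Hz pass band, window length, and function names are assumptions, since the abstract gives no parameters, and the final fuzzy c-means classification step is omitted.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(emg, fs, lo=20.0, hi=450.0, order=4):
    """Zero-phase Butterworth band-pass filter for one raw EMG channel.

    fs is the sampling rate in Hz and must exceed 2 * hi.
    """
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, emg)

def rms_features(emg, fs, win_s=0.25):
    """Root-mean-square of consecutive non-overlapping windows."""
    win = int(win_s * fs)
    n = len(emg) // win
    segs = emg[: n * win].reshape(n, win)
    return np.sqrt((segs ** 2).mean(axis=1))
```

    The resulting feature vectors would then be clustered and classified, e.g. with a fuzzy c-means implementation, to pick the most separable gesture combinations.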

  18. Newborns' Face Recognition: Role of Inner and Outer Facial Features

    Science.gov (United States)

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…

  19. A group of facial normal descriptors for recognizing 3D identical twins

    KAUST Repository

    Li, Huibin

    2012-09-01

    In this paper, to characterize and distinguish identical twins, three popular texture descriptors, i.e. local binary patterns (LBPs), Gabor filters (GFs) and local Gabor binary patterns (LGBPs), are employed to encode the normal components (x, y and z) of the 3D facial surfaces of identical twins respectively. A group of facial normal descriptors is thus obtained, including the Normal Local Binary Patterns descriptor (N-LBPs), the Normal Gabor Filters descriptor (N-GFs) and the Normal Local Gabor Binary Patterns descriptor (N-LGBPs). All these normal-encoding-based descriptors are further fed into a sparse representation classifier (SRC) for identification. Experimental results on the 3D TEC database demonstrate that these proposed normal-encoding-based descriptors are very discriminative and efficient, achieving performance comparable to the best state-of-the-art algorithms. © 2012 IEEE.
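
    One of the three descriptors, N-LBPs, can be sketched by applying an ordinary LBP operator to each normal-component map and concatenating the histograms; the neighbourhood parameters and 8-bit quantization below are illustrative, and skimage's uniform LBP is a stand-in for the paper's exact encoding.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def normal_lbp_descriptor(nx, ny, nz, P=8, R=1):
    """Concatenate LBP histograms of the x, y and z normal-component maps.

    nx, ny, nz : 2-D arrays, one component of the unit surface normal per
    pixel of a registered 3-D face scan (hypothetical input format).
    """
    n_bins = P + 2  # label count of skimage's rotation-invariant uniform LBP
    hists = []
    for comp in (nx, ny, nz):
        # map the [-1, 1] normal component to 8-bit grey levels for LBP
        img = np.uint8(np.round((comp + 1.0) * 127.5))
        codes = local_binary_pattern(img, P, R, method="uniform")
        h, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        hists.append(h)
    return np.concatenate(hists)
```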

  20. Facial cosmetics have little effect on attractiveness judgments compared with identity.

    Science.gov (United States)

    Jones, Alex L; Kramer, S S

    2015-01-01

    The vast majority of women in modern societies use facial cosmetics, which modify facial cues to attractiveness. However, the size of this increase remains unclear--how much more attractive are individuals after an application of cosmetics? Here, we utilised a 'new statistics' approach, calculating the effect size of cosmetics on attractiveness using a within-subjects design, and compared this with the effect size due to identity--that is, the inherent differences in attractiveness between people. Women were photographed with and without cosmetics, and these images were rated for attractiveness by a second group of participants. The proportion of variance in attractiveness explained by identity was much greater than the variance within models due to cosmetics. This result was unchanged after statistically controlling for the perceived amount of cosmetics that each model used. Although cosmetics increase attractiveness, the effect is small, and the benefits of cosmetics may be inflated in everyday thinking.

  1. Recognition of facial expressions by alcoholic patients: a systematic literature review.

    Science.gov (United States)

    Donadon, Mariana Fortunata; Osório, Flávia de Lima

    2014-01-01

    Alcohol abuse and dependence can cause a wide variety of cognitive, psychomotor, and visual-spatial deficits. It is questionable whether this condition is associated with impairments in the recognition of affective and/or emotional information. Such impairments may promote deficits in social cognition and, consequently, in the adaptation and interaction of alcohol abusers with their social environment. The aim of this systematic review was to systematize the literature on alcoholics' recognition of basic facial expressions in terms of the following outcome variables: accuracy, emotional intensity, and latency time. A systematic literature search in the PsycINFO, PubMed, and SciELO electronic databases, with no restrictions regarding publication year, was employed as the study methodology. The findings of some studies indicate that alcoholics have greater impairment in facial expression recognition tasks, while others could not differentiate the clinical group from controls. However, there was a trend toward greater deficits in alcoholics. Alcoholics displayed less accuracy in recognition of sadness and disgust and required greater emotional intensity to judge facial expressions corresponding to fear and anger. The current study was only able to identify trends in the chosen outcome variables. Future studies that aim to provide more precise evidence for the potential influence of alcohol on social cognition are needed.

  2. Influence of gender in the recognition of basic facial expressions: A critical literature review.

    Science.gov (United States)

    Forni-Santos, Larissa; Osório, Flávia L

    2015-09-22

    To conduct a systematic literature review about the influence of gender on the recognition of facial expressions of six basic emotions. We made a systematic search with the search terms (face OR facial) AND (processing OR recognition OR perception) AND (emotional OR emotion) AND (gender OR sex) in the PubMed, PsycINFO, LILACS, and SciELO electronic databases for articles assessing outcomes related to response accuracy and latency and emotional intensity. Article selection was performed according to parameters set by COCHRANE. The reference lists of the articles found through the database search were checked for additional references of interest. With respect to accuracy, women tend to perform better than men when all emotions are considered as a set. Regarding specific emotions, there seem to be no gender-related differences in the recognition of happiness, whereas results are quite heterogeneous for the remaining emotions, especially sadness, anger, and disgust. Fewer articles dealt with the parameters of response latency and emotional intensity, which hinders the generalization of their findings, especially in the face of their methodological differences. The analysis of the studies conducted to date does not allow for definite conclusions concerning the role of the observer's gender in the recognition of facial emotion, mostly because of the absence of standardized methods of investigation.

  3. Neurocognition and symptoms identify links between facial recognition and emotion processing in schizophrenia: meta-analytic findings.

    Science.gov (United States)

    Ventura, Joseph; Wood, Rachel C; Jimenez, Amy M; Hellemann, Gerhard S

    2013-12-01

    In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? A meta-analysis of 102 studies (combined n=4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r=.51). In addition, the relationship between FR and EP through voice prosody (r=.58) is as strong as the relationship between FR and EP based on facial stimuli (r=.53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality - facial stimuli and voice prosody. The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. © 2013 Elsevier B.V. All rights reserved.

  4. Joint recognition-expression impairment of facial emotions in Huntington's disease despite intact understanding of feelings.

    Science.gov (United States)

    Trinkler, Iris; Cleret de Langavant, Laurent; Bachoud-Lévi, Anne-Catherine

    2013-02-01

    Patients with Huntington's disease (HD), a neurodegenerative disorder that causes major motor impairments, also show cognitive and emotional deficits. While their deficit in recognising emotions has been explored in depth, little is known about their ability to express emotions and understand their feelings. If these faculties were impaired, patients might not only mis-read emotion expressions in others but their own emotions might be mis-interpreted by others as well, or thirdly, they might have difficulties understanding and describing their feelings. We compared the performance of recognition and expression of facial emotions in 13 HD patients with mild motor impairments but without significant bucco-facial abnormalities, and 13 controls matched for age and education. Emotion recognition was investigated in a forced-choice recognition test (FCR), and emotion expression by filming participants while they mimed the six basic emotional facial expressions (anger, disgust, fear, surprise, sadness and joy) to the experimenter. The films were then segmented into 60 stimuli per participant and four external raters performed a FCR on this material. Further, we tested understanding of feelings in self (alexithymia) and others (empathy) using questionnaires. Both recognition and expression were impaired across different emotions in HD compared to controls and recognition and expression scores were correlated. By contrast, alexithymia and empathy scores were very similar in HD and controls. This might suggest that emotion deficits in HD might be tied to the expression itself. Because similar emotion recognition-expression deficits are also found in Parkinson's Disease and vascular lesions of the striatum, our results further confirm the importance of the striatum for emotion recognition and expression, while access to the meaning of feelings relies on a different brain network, and is spared in HD.

  5. Recognition of facial expressions by alcoholic patients: a systematic literature review

    Directory of Open Access Journals (Sweden)

    Donadon MF

    2014-09-01

    Full Text Available Mariana Fortunata Donadon,1,2 Flávia de Lima Osório.1,3,4 1Department of Neurosciences and Behavior, Medical School of Ribeirão Preto, University of São Paulo; 2Coordination for the Improvement of Higher Level Personnel-CAPS; 3Technology Institute for Translational Medicine, Ribeirão Preto, São Paulo, Brazil; 4Agency of São Paulo Research Foundation, São Paulo, Brazil. Background: Alcohol abuse and dependence can cause a wide variety of cognitive, psychomotor, and visual-spatial deficits. It is questionable whether this condition is associated with impairments in the recognition of affective and/or emotional information. Such impairments may promote deficits in social cognition and, consequently, in the adaptation and interaction of alcohol abusers with their social environment. The aim of this systematic review was to systematize the literature on alcoholics' recognition of basic facial expressions in terms of the following outcome variables: accuracy, emotional intensity, and latency time. Methods: A systematic literature search in the PsycINFO, PubMed, and SciELO electronic databases, with no restrictions regarding publication year, was employed as the study methodology. Results: The findings of some studies indicate that alcoholics have greater impairment in facial expression recognition tasks, while others could not differentiate the clinical group from controls. However, there was a trend toward greater deficits in alcoholics. Alcoholics displayed less accuracy in recognition of sadness and disgust and required greater emotional intensity to judge facial expressions corresponding to fear and anger. Conclusion: The current study was only able to identify trends in the chosen outcome variables. Future studies that aim to provide more precise evidence for the potential influence of alcohol on social cognition are needed. Keywords: alcoholism, face, emotional recognition, facial expression, systematic review

  6. Social perception and aging: The relationship between aging and the perception of subtle changes in facial happiness and identity.

    Science.gov (United States)

    Yang, Tao; Penton, Tegan; Köybaşı, Şerife Leman; Banissy, Michael J

    2017-09-01

    Previous findings suggest that older adults show impairments in the social perception of faces, including the perception of emotion and facial identity. The majority of this work has tended to examine performance on tasks involving young adult faces and prototypical emotions. While useful, this can influence performance differences between groups due to perceptual biases and limitations on task performance. Here we sought to examine how typical aging is associated with the perception of subtle changes in facial happiness and facial identity in older adult faces. We developed novel tasks that permitted the ability to assess facial happiness, facial identity, and non-social perception (object perception) across similar task parameters. We observe that aging is linked with declines in the ability to make fine-grained judgements in the perception of facial happiness and facial identity (from older adult faces), but not for non-social (object) perception. This pattern of results is discussed in relation to mechanisms that may contribute to declines in facial perceptual processing in older adulthood. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Effects of exposure to facial expression variation in face learning and recognition.

    Science.gov (United States)

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  8. Recognition of facial emotion and affective prosody in children with ASD (+ADHD) and their unaffected siblings.

    Science.gov (United States)

    Oerlemans, Anoek M; van der Meer, Jolanda M J; van Steijn, Daphne J; de Ruiter, Saskia W; de Bruijn, Yvette G E; de Sonneville, Leo M J; Buitelaar, Jan K; Rommelse, Nanda N J

    2014-05-01

    Autism is a highly heritable and clinically heterogeneous neuropsychiatric disorder that frequently co-occurs with other psychopathologies, such as attention-deficit/hyperactivity disorder (ADHD). An approach to parse heterogeneity is by forming more homogeneous subgroups of autism spectrum disorder (ASD) patients based on their underlying, heritable cognitive vulnerabilities (endophenotypes). Emotion recognition is a likely endophenotypic candidate for ASD and possibly for ADHD. Therefore, this study aimed to examine whether emotion recognition is a viable endophenotypic candidate for ASD and to assess the impact of comorbid ADHD in this context. A total of 90 children with ASD (43 with and 47 without ADHD), 79 ASD unaffected siblings, and 139 controls aged 6-13 years, were included to test recognition of facial emotion and affective prosody. Our results revealed that the recognition of both facial emotion and affective prosody was impaired in children with ASD and aggravated by the presence of ADHD. The latter could only be partly explained by typical ADHD cognitive deficits, such as inhibitory and attentional problems. The performance of unaffected siblings could overall be considered at an intermediate level, performing somewhat worse than the controls and better than the ASD probands. Our findings suggest that emotion recognition might be a viable endophenotype in ASD and a fruitful target in future family studies of the genetic contribution to ASD and comorbid ADHD. Furthermore, our results suggest that children with comorbid ASD and ADHD are at highest risk for emotion recognition problems.

  9. The role of recognition and interest in physics identity development

    Science.gov (United States)

    Lock, Robynne

    2016-03-01

    While the number of students earning bachelor's degrees in physics has increased in recent years, this number has only recently surpassed the peak value of the 1960s. Additionally, the percentage of women earning bachelor's degrees in physics has stagnated for the past 10 years and may even be declining. We use a physics identity framework consisting of three dimensions to understand how students make their initial career decisions at the end of high school and the beginning of college. The three dimensions consist of recognition (perception that teachers, parents, and peers see the student as a ``physics person''), interest (desire to learn more about physics), and performance/competence (perception of abilities to complete physics related tasks and to understand physics). Using data from the Sustainability and Gender in Engineering survey administered to a nationally representative sample of college students, we built a regression model to determine which identity dimensions have the largest effect on physics career choice and a structural equation model to understand how the identity dimensions are related. Additionally, we used regression models to identify teaching strategies that predict each identity dimension.

  10. A Classifier Model based on the Features Quantitative Analysis for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Amir Jamshidnezhad

    2011-01-01

    Full Text Available In recent decades, computer technology has developed considerably in the use of intelligent systems for classification. The development of HCI systems is highly dependent on accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their naturally subtle quality. In this paper, quantitative analysis is used to find the most effective feature movements between the selected facial feature points. The features are therefore extracted not only on the basis of psychological studies, but also with quantitative methods, to raise recognition accuracy. In this model, fuzzy logic and a genetic algorithm are used to classify facial expressions. The genetic algorithm is a distinctive attribute of the proposed model, used for tuning the membership functions and increasing accuracy.
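
    To make the fuzzy-plus-genetic idea concrete, here is a toy sketch that tunes triangular membership functions with a selection-and-mutation loop (no crossover); everything in it, from the single scalar feature per sample to the population settings, is an illustrative assumption rather than the authors' model.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership: 0 at a, peak 1 at b, back to 0 at c."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-9),
                              (c - x) / (c - b + 1e-9)), 0.0, 1.0)

def fitness(params, feats, labels):
    """Accuracy of 'one triangle per class' fuzzy rules on scalar features."""
    scores = np.stack([tri(feats, *p) for p in params], axis=-1)
    return (scores.argmax(axis=-1) == labels).mean()

def tune(feats, labels, n_classes, gens=200, pop=30, seed=0):
    """Toy evolutionary loop: keep the fittest triangles, mutate copies."""
    rng = np.random.default_rng(seed)
    lo, hi = feats.min(), feats.max()
    population = [np.sort(rng.uniform(lo, hi, (n_classes, 3)), axis=1)
                  for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda p: -fitness(p, feats, labels))
        elite = population[: pop // 2]
        children = [np.sort(p + rng.normal(0, 0.05 * (hi - lo), p.shape), axis=1)
                    for p in elite]
        population = elite + children
    return max(population, key=lambda p: fitness(p, feats, labels))
```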

  11. Spatiotemporal dynamics of similarity-based neural representations of facial identity.

    Science.gov (United States)

    Vida, Mark D; Nestor, Adrian; Plaut, David C; Behrmann, Marlene

    2017-01-10

    Humans' remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level "image-based" and higher level "identity-based" model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise.
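
    The time-resolved classification described here can be approximated with a per-time-point decoder; the sketch below assumes a (trials x sensors x times) array and a plain cross-validated logistic regression, which is a simplification of the study's source-space analyses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def timecourse_decoding(X, y, cv=5):
    """Cross-validated identity classification at each time sample.

    X : array (trials, sensors, times) of MEG data (assumed layout).
    y : array (trials,) of facial-identity labels.
    Returns mean decoding accuracy per time point.
    """
    n_times = X.shape[2]
    acc = np.empty(n_times)
    for t in range(n_times):
        clf = LogisticRegression(max_iter=1000)
        acc[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
    return acc
```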

  12. The role of spatial frequency information in the recognition of facial expressions of pain.

    Science.gov (United States)

    Wang, Shan; Eccleston, Christopher; Keogh, Edmund

    2015-09-01

    Being able to detect pain from facial expressions is critical for pain communication. Alongside identifying the specific facial codes used in pain recognition, there are other types of more basic perceptual features, such as spatial frequency (SF), which refers to the amount of detail in a visual display. Low SF carries coarse information, which can be seen from a distance, and high SF carries fine-detailed information that can only be perceived when viewed close up. As this type of basic information has not been considered in the recognition of pain, we therefore investigated the role of low-SF and high-SF information in the decoding of facial expressions of pain. Sixty-four pain-free adults completed 2 independent tasks: a multiple expression identification task of pain and core emotional expressions and a dual expression "either-or" task (pain vs fear, pain vs happiness). Although both low-SF and high-SF information make the recognition of pain expressions possible, low-SF information seemed to play a more prominent role. This general low-SF bias would seem an advantageous way of potential threat detection, as facial displays will be degraded if viewed from a distance or in peripheral vision. One exception was found, however, in the "pain-fear" task, where responses were not affected by SF type. Together, this not only indicates a flexible role for SF information that depends on task parameters (goal context) but also suggests that in challenging visual conditions, we perceive an overall affective quality of pain expressions rather than detailed facial features.
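
    A common way to produce such low-SF and high-SF stimuli is Gaussian filtering, sketched below; the sigma value is an arbitrary placeholder, since SF cut-offs in studies like this are normally specified in cycles per image rather than pixels.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(img, sigma=8.0):
    """Split a grayscale face image into low-SF and high-SF versions."""
    img = np.asarray(img, dtype=float)
    low = gaussian_filter(img, sigma)           # coarse structure only
    high = img - low + img.mean()               # re-centre so it stays viewable
    return low, high
```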

  13. Misreading the facial signs: specific impairments and error patterns in recognition of facial emotions with negative valence in borderline personality disorder.

    Science.gov (United States)

    Unoka, Zsolt; Fogd, Dóra; Füzy, Melinda; Csukly, Gábor

    2011-10-30

    Patients with borderline personality disorder (BPD) exhibit impairment in labeling of facial emotional expressions. However, it is not clear whether these deficits affect the whole domain of basic emotions, are valence-specific, or are specific to individual emotions. Whether BPD patients' errors in a facial emotion recognition task create a specific pattern also remains to be elucidated. Our study tested two hypotheses: first, that the emotion perception impairment in borderline personality disorder is specific to the negative emotion domain; second, that BPD patients would show error patterns in a facial emotion recognition task more commonly and more systematically than healthy comparison subjects. Participants comprised 33 inpatients with BPD and 32 matched healthy control subjects who performed a computerized version of the Ekman 60 Faces test. The indices of emotion recognition and the direction of errors were processed in separate analyses. Clinical symptoms and personality functioning were assessed using the Symptom Checklist-90-Revised and the Young Schema Questionnaire Long Form. Results showed that patients with BPD were less accurate than control participants in emotion recognition, in particular in the discrimination of negative emotions, while they were not impaired in the recognition of happy facial expressions. In addition, patients over-attributed disgust and surprise and under-attributed fear to the facial expressions relative to controls. These findings suggest the importance of carefully considering error patterns, besides measuring recognition accuracy, especially among emotions with negative affective valence, when assessing facial affect recognition in BPD. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. Emotional Processing, Recognition, Empathy and Evoked Facial Expression in Eating Disorders: An Experimental Study to Map Deficits in Social Cognition

    National Research Council Canada - National Science Library

    Cardi, Valentina; Corfield, Freya; Leppanen, Jenni; Rhind, Charlotte; Deriziotis, Stephanie; Hadjimichalis, Alexandra; Hibbs, Rebecca; Micali, Nadia; Treasure, Janet

    2015-01-01

    .... The aim of this study is to examine distinct processes of social-cognition in this patient group, including attentional processing and recognition, empathic reaction and evoked facial expression...

  15. Reaction Time of Facial Affect Recognition in Asperger's Disorder for Cartoon and Real, Static and Moving Faces

    Science.gov (United States)

    Miyahara, Motohide; Bray, Anne; Tsujii, Masatsugu; Fujita, Chikako; Sugiyama, Toshiro

    2007-01-01

    This study used a choice reaction-time paradigm to test the perceived impairment of facial affect recognition in Asperger's disorder. Twenty teenagers with Asperger's disorder and 20 controls were compared with respect to the latency and accuracy of response to happy or disgusted facial expressions, presented in cartoon or real images and in…

  17. The Effect of Gender and Age Differences on the Recognition of Emotions from Facial Expressions

    DEFF Research Database (Denmark)

    Schneevogt, Daniela; Paggio, Patrizia

    2016-01-01

    Recent studies have demonstrated gender and cultural differences in the recognition of emotions in facial expressions. However, most studies were conducted on American subjects. In this paper, we explore the generalizability of several findings to a non-American culture in the form of Danish subjects. We conduct an emotion recognition task followed by two stereotype questionnaires with different genders and age groups. While recent findings (Krems et al., 2015) suggest that women are biased to see anger in neutral facial expressions posed by females, in our sample both genders assign higher ratings of anger to all emotions expressed by females. Furthermore, we demonstrate an effect of gender on the fear-surprise-confusion observed by Tomkins and McCarter (1964); females overpredict fear, while males overpredict surprise.

  18. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    Science.gov (United States)

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development.
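
    The fixation-shift measure linked to recognition can be illustrated with a small counting routine; the region names and rectangular bounding boxes are hypothetical stand-ins for whatever ROIs the eye-tracking software defined.

```python
def count_region_shifts(fixations, regions):
    """Count how often gaze moves between facial regions.

    fixations : iterable of (x, y) fixation coordinates in image space.
    regions   : dict name -> (x0, y0, x1, y1) bounding boxes, e.g.
                {'eyes': ..., 'nose': ..., 'mouth': ...} (hypothetical).
    """
    def region_of(x, y):
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return None

    labels = [region_of(x, y) for x, y in fixations]
    labels = [l for l in labels if l is not None]
    # a shift is any pair of consecutive fixations in different regions
    return sum(a != b for a, b in zip(labels, labels[1:]))
```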

  19. Test battery for measuring the perception and recognition of facial expressions of emotion

    Science.gov (United States)

    Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner

    2014-01-01

    Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528

  20. Batch metadata assignment to archival photograph collections using facial recognition software

    Directory of Open Access Journals (Sweden)

    Kyle Banerjee

    2013-07-01

    Full Text Available Useful metadata is essential for giving individual images meaning and value within the context of a greater collection, as well as for making them more discoverable. However, often little information is available about the photos themselves, so adding consistent metadata to large collections of digital and digitized photographs is a time-consuming process requiring highly experienced staff. By using facial recognition software, staff can identify individuals more quickly and reliably. Knowledge of the individuals in photos helps staff determine when and where photos were taken and also improves understanding of the subject matter. This article demonstrates simple techniques for using facial recognition software and command line tools to assign, modify, and read metadata for large archival photograph collections.
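
    The article's exact toolchain is not reproduced here, but a plausible modern equivalent combines the open-source face_recognition package with exiftool for batch keyword writing; the function and variable names below are illustrative, and the whole routine is a sketch under those assumptions.

```python
import subprocess
import face_recognition  # https://github.com/ageitgey/face_recognition

def tag_photo(photo_path, known_encodings, known_names, tolerance=0.6):
    """Write the names of recognized people into a photo's keywords.

    known_encodings / known_names : parallel lists built beforehand from
    reference portraits of identified individuals (hypothetical inputs).
    """
    image = face_recognition.load_image_file(photo_path)
    found = []
    for enc in face_recognition.face_encodings(image):
        hits = face_recognition.compare_faces(known_encodings, enc,
                                              tolerance=tolerance)
        found += [n for n, hit in zip(known_names, hits) if hit]
    for name in set(found):
        # exiftool appends one keyword per recognized person
        subprocess.run(["exiftool", f"-Keywords+={name}",
                        "-overwrite_original", photo_path], check=True)
```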

  1. Fusion-based approach for long-range night-time facial recognition

    Science.gov (United States)

    Martin, Robert B.; Sluch, Mikhail; Kafka, Kristopher M.; Dolby, Andrew; Ice, Robert V.; Lemoff, Brian E.

    2014-06-01

    Long range identification using facial recognition is being pursued as a valuable surveillance tool. The capability to perform this task covertly and in total darkness greatly enhances the operators' ability to maintain a large distance between themselves and a possible hostile target. An active-SWIR video imaging system has been developed to produce high-quality long-range night/day facial imagery for this purpose. Most facial recognition techniques match a single input probe image against a gallery of possible match candidates. When resolution, wavelength, and uncontrolled conditions reduce the accuracy of single-image matching, multiple probe images of the same subject can be matched to the watch-list and the results fused to increase accuracy. If multiple probe images are acquired from video over a short period of time, the high correlation between the images tends to produce similar matching results, which should reduce the benefit of the fusion. In contrast, fusing matching results from multiple images acquired over a longer period of time, where the images show more variability, should produce a more accurate result. In general, image variables could include pose angle, field-of-view, lighting condition, facial expression, target to sensor distance, contrast, and image background. Long-range short wave infrared (SWIR) video was used to generate probe image datasets containing different levels of variability. Face matching results for each image in each dataset were fused, and the results compared.
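
    The fusion step can be sketched as simple score-level fusion across one subject's probe images; the mean and max rules below are generic choices offered for illustration, since the abstract does not state the rule used.

```python
import numpy as np

def fuse_match_scores(score_matrix, rule="mean"):
    """Fuse per-probe similarity scores against a gallery (watch list).

    score_matrix : array (n_probe_images, n_gallery_ids) of match scores
    for one subject's probe set. Returns the best-matching gallery index
    and the fused score vector.
    """
    if rule == "mean":
        fused = score_matrix.mean(axis=0)
    elif rule == "max":
        fused = score_matrix.max(axis=0)
    else:
        raise ValueError(f"unknown fusion rule: {rule}")
    return int(np.argmax(fused)), fused
```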

  2. Facial, vocal and musical emotion recognition is altered in paranoid schizophrenic patients.

    Science.gov (United States)

    Weisgerber, Anne; Vermeulen, Nicolas; Peretz, Isabelle; Samson, Séverine; Philippot, Pierre; Maurage, Pierre; De Graeuwe D'Aoust, Catherine; De Jaegere, Aline; Delatte, Benoît; Gillain, Benoît; De Longueville, Xavier; Constant, Eric

    2015-09-30

    Disturbed processing of emotional faces and voices is typically observed in schizophrenia. This deficit leads to impaired social cognition and interactions. In this study, we investigated whether impaired processing of emotions also affects musical stimuli, which are widely present in daily life and known for their emotional impact. Thirty schizophrenic patients and 30 matched healthy controls evaluated the emotional content of musical, vocal and facial stimuli. Schizophrenic patients are less accurate than healthy controls in recognizing emotion in music, voices and faces. Our results confirm impaired recognition of emotion in voice and face stimuli in schizophrenic patients and extend this observation to the recognition of emotion in musical stimuli.

  3. Impaired recognition of musical emotions and facial expressions following anteromedial temporal lobe excision.

    Science.gov (United States)

    Gosselin, Nathalie; Peretz, Isabelle; Hasboun, Dominique; Baulac, Michel; Samson, Séverine

    2011-10-01

    We have shown that an anteromedial temporal lobe resection can impair the recognition of scary music in a prior study (Gosselin et al., 2005). In other studies (Adolphs et al., 2001; Anderson et al., 2000), similar results have been obtained with fearful facial expressions. These findings suggest that scary music and fearful faces may be processed by common cerebral structures. To assess this possibility, we tested patients with unilateral anteromedial temporal excision and normal controls in two emotional tasks. In the task of identifying musical emotion, stimuli evoked either fear, peacefulness, happiness or sadness. Participants were asked to rate to what extent each stimulus expressed these four emotions on 10-point scales. The task of facial emotion included morphed stimuli whose expression varied from faint to more pronounced and evoked fear, happiness, sadness, surprise, anger or disgust. Participants were requested to select the appropriate label. Most patients were found to be impaired in the recognition of both scary music and fearful faces. Furthermore, the results in both tasks were correlated, suggesting a multimodal representation of fear within the amygdala. However, inspection of individual results showed that recognition of fearful faces can be preserved whereas recognition of scary music can be impaired. Such a dissociation found in two cases suggests that fear recognition in faces and in music does not necessarily involve exactly the same cerebral networks and this hypothesis is discussed in light of the current literature.

  4. Visual Scanning Patterns and Executive Function in Relation to Facial Emotion Recognition in Aging

    Science.gov (United States)

    Circelli, Karishma S.; Clark, Uraina S.; Cronin-Golomb, Alice

    2012-01-01

    Objective: The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. Methods: We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. Results: OA were less accurate than YA at identifying fear; executive function performance was associated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. Conclusion: We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition. PMID:22616800

  5. Writer Identity Recognition and Confirmation Using Persian Handwritten Texts

    Directory of Open Access Journals (Sweden)

    aida sheikh

    2015-11-01

    Full Text Available There are many ways to recognize the identity of individuals and authenticate them. Recognition and authentication of individuals by means of their handwriting has become a research topic in recent years, with wide use in the fields of security, law, access control, and financial activities. This article examines the identification and authentication of individuals from Persian (Farsi) handwritten texts, so that the identity of the author can be determined from a handwritten sample. The proposed system for recognizing the identity of the author can be divided into two main parts: one part is intended for training and the other for testing. To assess the performance of the introduced characteristics, the Hidden Markov Model is used as the classifier; thus, a model is defined for each angular characteristic. The defined angular models are connected by a specific chain network to form a comprehensive database for classification. This database is then used to determine and authenticate the author.
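
    A minimal version of the described classifier (one Hidden Markov Model per writer over angular feature sequences) might look like the following; the hmmlearn library, the state count, and the diagonal covariance type are assumptions, not the authors' implementation.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # assumed library choice

def train_writer_models(feature_seqs_by_writer, n_states=5):
    """Fit one Gaussian HMM per writer on angular feature sequences.

    feature_seqs_by_writer : dict writer_id -> list of (T_i, n_features)
    arrays extracted from that writer's handwriting samples.
    """
    models = {}
    for writer, seqs in feature_seqs_by_writer.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag")
        m.fit(X, lengths)
        models[writer] = m
    return models

def identify(models, seq):
    """Attribute a sample to the writer whose model scores it highest."""
    return max(models, key=lambda w: models[w].score(seq))
```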

  6. Summary of facial expression recognition

    Institute of Scientific and Technical Information of China (English)

    王大伟; 周军; 梅红岩; 张素娥

    2014-01-01

    As a research direction within affective computing, facial expression recognition forms the basis of emotion understanding and is a prerequisite for intelligent human-computer interaction. Because facial expressions are extremely subtle, their analysis consumes a large amount of computation time, which degrades the responsiveness and user experience of interactive systems; facial expression feature extraction has therefore become an important research topic in facial expression recognition. This paper summarizes the established frameworks and recent advances in facial expression recognition over the past five years, at home and abroad, focusing on methods for facial expression feature extraction and expression classification. The main algorithms in these two areas and their refinements are described in detail, and the strengths and weaknesses of the various algorithms are analyzed and compared. Finally, based on a review of practical problems in facial expression recognition applications, the remaining challenges and limitations of the field are outlined.

  7. Recognition of emotion in facial expression by people with Prader-Willi syndrome.

    Science.gov (United States)

    Whittington, J; Holland, T

    2011-01-01

    People with Prader-Willi syndrome (PWS) may have mild intellectual impairments but less is known about their social cognition. Most parents/carers report that people with PWS do not have normal peer relationships, although some have older or younger friends. Two specific aspects of social cognition are being able to recognise other people's emotion and to then respond appropriately. In a previous study, mothers/carers thought that 26% of children and 23% of adults with PWS would not respond to others' feelings. They also thought that 64% could recognise happiness, sadness, anger and fear and a further 30% could recognise happiness and sadness. However, reports of emotion recognition and response to emotion were partially dissociated. It was therefore decided to test facial emotion recognition directly. The participants were 58 people of all ages with PWS. They were shown a total of 20 faces, each depicting one of the six basic emotions and asked to say what they thought that person was feeling. The faces were shown one at a time in random order and each was accompanied by a reminder of the six basic emotions. This cohort of people with PWS correctly identified 55% of the different facial emotions. These included 90% of happy faces, 55% each of sad and surprised faces, 43% of disgusted faces, 40% of angry faces and 37% of fearful faces. Genetic subtype differences were found only in the predictors of recognition scores, not in the scores themselves. Selective impairment was found in fear recognition for those with PWS who had had a depressive illness and in anger recognition for those with PWS who had had a psychotic illness. The inability to read facial expressions of emotion is a deficit in social cognition apparent in people with PWS. This may be a contributing factor in their difficulties with peer relationships. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.

  8. Detecting subtle facial emotion recognition deficits in high-functioning Autism using dynamic stimuli of varying intensities.

    Science.gov (United States)

    Law Smith, Miriam J; Montagne, Barbara; Perrett, David I; Gill, Michael; Gallagher, Louise

    2010-07-01

    Autism Spectrum Disorders (ASD) are characterised by social and communication impairment, yet evidence for deficits in the ability to recognise facial expressions of basic emotions is conflicting. Many studies reporting no deficits have used stimuli that may be too simple (with associated ceiling effects), for example, 100% 'full-blown' expressions. In order to investigate subtle deficits in facial emotion recognition, 21 adolescent males with high-functioning Autism Spectrum Disorders (ASD) and 16 age- and IQ-matched typically developing control males completed a new sensitive test of facial emotion recognition which uses dynamic stimuli of varying intensities of expressions of the six basic emotions (Emotion Recognition Test; Montagne et al., 2007). Participants with ASD were found to be less accurate at processing the basic emotional expressions of disgust, anger and surprise; disgust recognition was most impaired, at 100% intensity and lower levels, whereas recognition of surprise and anger was intact at 100% but impaired at lower levels of intensity.

  9. Violent video game players and non-players differ on facial emotion recognition.

    Science.gov (United States)

    Diaz, Ruth L; Wong, Ulric; Hodgins, David C; Chiu, Carina G; Goghari, Vina M

    2016-01-01

    Violent video game playing has been associated with both positive and negative effects on cognition. We examined whether playing two or more hours of violent video games a day, compared to not playing video games, was associated with a different pattern of recognition of five facial emotions, while controlling for general perceptual and cognitive differences that might also occur. Undergraduate students were categorized as violent video game players (n = 83) or non-gamers (n = 69) and completed a facial recognition task, consisting of an emotion recognition condition and a control condition of gender recognition. Additionally, participants completed questionnaires assessing their video game and media consumption, aggression, and mood. Violent video game players recognized fearful faces both more accurately and quickly and disgusted faces less accurately than non-gamers. Desensitization to violence, constant exposure to fear and anxiety during game playing, and the habituation to unpleasant stimuli, are possible mechanisms that could explain these results. Future research should evaluate the effects of violent video game playing on emotion processing and social cognition more broadly.

  10. Neuroanatomical correlates of impaired decision-making and facial emotion recognition in early Parkinson's disease.

    Science.gov (United States)

    Ibarretxe-Bilbao, Naroa; Junque, Carme; Tolosa, Eduardo; Marti, Maria-Jose; Valldeoriola, Francesc; Bargallo, Nuria; Zarei, Mojtaba

    2009-09-01

    Decision-making and recognition of emotions are often impaired in patients with Parkinson's disease (PD). The orbitofrontal cortex (OFC) and the amygdala are critical structures subserving these functions. This study was designed to test whether there are any structural changes in these areas that might explain the impairment of decision-making and recognition of facial emotions in early PD. We used the Iowa Gambling Task (IGT) and the Ekman 60 faces test, which are sensitive to OFC and amygdala dysfunction, in 24 early PD patients and 24 controls. High-resolution structural magnetic resonance images (MRI) were also obtained. Group analysis using voxel-based morphometry (VBM) showed significant, corrected grey matter (GM) volume reductions in PD patients that were related to IGT and Ekman test performance. We conclude that: (i) impairment in decision-making and recognition of facial emotions occurs at the early stages of PD, (ii) these neuropsychological deficits are accompanied by degeneration of OFC and amygdala, and (iii) bilateral OFC reductions are associated with impaired recognition of emotions, and GM volume loss in left lateral OFC is related to decision-making impairment in PD.

  11. Human facial neural activities and gesture recognition for machine-interfacing applications

    Directory of Open Access Journals (Sweden)

    Hamedi M

    2011-12-01

    Full Text Available M Hamedi,1 Sh-Hussain Salleh,2 TS Tan,2 K Ismail,2 J Ali,3 C Dee-Uam,4 C Pavaganun,4 PP Yupapin.5 1Faculty of Biomedical and Health Science Engineering, Department of Biomedical Instrumentation and Signal Processing, University of Technology Malaysia, Skudai; 2Centre for Biomedical Engineering Transportation Research Alliance; 3Institute of Advanced Photonics Science, Nanotechnology Research Alliance, University of Technology Malaysia (UTM), Johor Bahru, Malaysia; 4College of Innovative Management, Valaya Alongkorn Rajabhat University, Pathum Thani; 5Nanoscale Science and Engineering Research Alliance (N'SERA), Advanced Research Center for Photonics, Faculty of Science, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand. Abstract: The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands that can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures with a different number of gestures in each group are made from the existing facial gestures. Finally, all combinations are trained and classified by a Fuzzy c-means classifier. In conclusion, combinations with the highest recognition accuracy in each group are chosen. An average accuracy >90% for the chosen combinations proved their ability to be used as command controllers.

  12. Unraveling the distributed neural code of facial identity through spatiotemporal pattern analysis.

    Science.gov (United States)

    Nestor, Adrian; Plaut, David C; Behrmann, Marlene

    2011-06-14

    Face individuation is one of the most impressive achievements of our visual system, and yet uncovering the neural mechanisms subserving this feat appears to elude traditional approaches to functional brain data analysis. The present study investigates the neural code of facial identity perception with the aim of ascertaining its distributed nature and informational basis. To this end, we use a sequence of multivariate pattern analyses applied to functional magnetic resonance imaging (fMRI) data. First, we combine information-based brain mapping and dynamic discrimination analysis to locate spatiotemporal patterns that support face classification at the individual level. This analysis reveals a network of fusiform and anterior temporal areas that carry information about facial identity and provides evidence that the fusiform face area responds with distinct patterns of activation to different face identities. Second, we assess the information structure of the network using recursive feature elimination. We find that diagnostic information is distributed evenly among anterior regions of the mapped network and that a right anterior region of the fusiform gyrus plays a central role within the information network mediating face individuation. These findings serve to map out and characterize a cortical system responsible for individuation. More generally, in the context of functionally defined networks, they provide an account of distributed processing grounded in information-based architectures.
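
    Recursive feature elimination, the second analysis named here, can be sketched with scikit-learn; the linear SVM, the retained feature count, and the step size are placeholders rather than the parameters the authors used on their fMRI patterns.

```python
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

def map_diagnostic_voxels(X, y, n_keep=500, step=0.1):
    """Rank voxels by how much they support identity classification.

    X : (trials, voxels) fMRI activation patterns (assumed layout).
    y : (trials,) facial-identity labels.
    Returns a boolean mask of retained voxels and the elimination ranking.
    """
    selector = RFE(LinearSVC(max_iter=10000),
                   n_features_to_select=n_keep, step=step)
    selector.fit(X, y)
    return selector.support_, selector.ranking_
```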

  13. Matching novel face and voice identity using static and dynamic facial images.

    Science.gov (United States)

    Smith, Harriet M J; Dunn, Andrew K; Baguley, Thom; Stacey, Paula C

    2016-04-01

    Research investigating whether faces and voices share common source identity information has offered contradictory results. Accurate face-voice matching is consistently above chance when the facial stimuli are dynamic, but not when the facial stimuli are static. We tested whether procedural differences might help to account for the previous inconsistencies. In Experiment 1, participants completed a sequential two-alternative forced choice matching task. They either heard a voice and then saw two faces or saw a face and then heard two voices. Face-voice matching was above chance when the facial stimuli were dynamic and articulating, but not when they were static. In Experiment 2, we tested whether matching was more accurate when faces and voices were presented simultaneously. The participants saw two face-voice combinations, presented one after the other. They had to decide which combination was the same identity. As in Experiment 1, only dynamic face-voice matching was above chance. In Experiment 3, participants heard a voice and then saw two static faces presented simultaneously. With this procedure, static face-voice matching was above chance. The overall results, analyzed using multilevel modeling, showed that voices and dynamic articulating faces, as well as voices and static faces, share concordant source identity information. It seems, therefore, that above-chance static face-voice matching is sensitive to the experimental procedure employed. In addition, the inconsistencies in previous research might depend on the specific stimulus sets used; our multilevel modeling analyses show that some people look and sound more similar than others.

  14. Visual perception and processing in children with 22q11.2 deletion syndrome: associations with social cognition measures of face identity and emotion recognition.

    Science.gov (United States)

    McCabe, Kathryn L; Marlin, Stuart; Cooper, Gavin; Morris, Robin; Schall, Ulrich; Murphy, Declan G; Murphy, Kieran C; Campbell, Linda E

    2016-01-01

    People with 22q11.2 deletion syndrome (22q11DS) have difficulty processing social information including facial identity and emotion processing. However, difficulties with visual and attentional processes may play a role in difficulties observed with these social cognitive skills. A cross-sectional study investigated visual perception and processing as well as facial processing abilities in a group of 49 children and adolescents with 22q11DS and 30 age and socio-economic status-matched healthy sibling controls using the Birmingham Object Recognition Battery and face processing sub-tests from the MRC face processing skills battery. The 22q11DS group demonstrated poorer performance on all measures of visual perception and processing, with greatest impairment on perceptual processes relating to form perception as well as object recognition and memory. In addition, form perception was found to make a significant and unique contribution to higher order social-perceptual processing (face identity) in the 22q11DS group. The findings indicate evidence for impaired visual perception and processing capabilities in 22q11DS. In turn, these were found to influence cognitive skills needed for social processes such as facial identity recognition in the children with 22q11DS.

  15. The effects of acute alcohol intoxication on the cognitive mechanisms underlying false facial recognition.

    Science.gov (United States)

    Colloff, Melissa F; Flowe, Heather D

    2016-06-01

    False face recognition rates are sometimes higher when faces are learned while under the influence of alcohol. Alcohol myopia theory (AMT) proposes that acute alcohol intoxication during face learning causes people to attend to only the most salient features of a face, impairing the encoding of less salient facial features. Yet, there is currently no direct evidence to support this claim. Our objective was to test whether acute alcohol intoxication impairs face learning by causing subjects to attend to a salient (i.e., distinctive) facial feature over other facial features, as per AMT. We employed a balanced placebo design (N = 100). Subjects in the alcohol group were dosed to achieve a blood alcohol concentration (BAC) of 0.06 %, whereas the no alcohol group consumed tonic water. Alcohol expectancy was controlled. Subjects studied faces with or without a distinctive feature (e.g., scar, piercing). An old-new recognition test followed. Some of the test faces were "old" (i.e., previously studied), and some were "new" (i.e., not previously studied). We varied whether the new test faces had a previously studied distinctive feature versus other familiar characteristics. Intoxicated and sober recognition accuracy was comparable, but subjects in the alcohol group made more positive identifications overall compared to the no alcohol group. The results are not in keeping with AMT. Rather, a more general cognitive mechanism appears to underlie false face recognition in intoxicated subjects. Specifically, acute alcohol intoxication during face learning results in more liberal choosing, perhaps because of an increased reliance on familiarity.
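
    The contrast between comparable accuracy and "more liberal choosing" is naturally expressed in signal-detection terms; the helper below computes sensitivity (d') and criterion (c) with a standard log-linear correction, as an illustration rather than the authors' exact analysis.

```python
from scipy.stats import norm

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') and response criterion (c).

    A negative c indicates liberal responding (more 'old' answers),
    the pattern reported here for the alcohol group.
    """
    # log-linear correction guards against proportions of exactly 0 or 1
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hr) - norm.ppf(far)
    criterion = -0.5 * (norm.ppf(hr) + norm.ppf(far))
    return d_prime, criterion
```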

  16. The telltale face: possible mechanisms behind defector and cooperator recognition revealed by emotional facial expression metrics.

    Science.gov (United States)

    Kovács-Bálint, Zsófia; Bereczkei, Tamás; Hernádi, István

    2013-11-01

    In this study, we investigated the role of facial cues in cooperator and defector recognition. First, a face image database was constructed from pairs of full-face portraits of target subjects taken at the moment of decision-making in a prisoner's dilemma game (PDG) and in a preceding neutral task. Image pairs with no deficiencies (n = 67) were standardized for orientation and luminance. Then, confidence in defector and cooperator recognition was tested with image rating in a different group of lay judges (n = 62). Results indicate that (1) defectors were better recognized (58% vs. 47%), and (2) they looked different from cooperators. In the facial microexpression analysis, defection was strongly linked with depressed lower lips and less widely opened eyes. A significant correlation was found between the intensity of the microexpressions and the rating of images along the cooperator-defector dimension. In summary, facial expressions can be considered reliable indicators of momentary social dispositions in the PDG. Females may exhibit an evolutionarily based overestimation bias in detecting social visual cues of the defector face.

  17. Recognition of facial emotion and perceived parental bonding styles in healthy volunteers and personality disorder patients.

    Science.gov (United States)

    Zheng, Leilei; Chai, Hao; Chen, Wanzhen; Yu, Rongrong; He, Wei; Jiang, Zhengyan; Yu, Shaohua; Li, Huichun; Wang, Wei

    2011-12-01

    Early parental bonding experiences play a role in emotion recognition and expression in later adulthood, and patients with personality disorder frequently experience inappropriate parental bonding styles; therefore, the aim of the present study was to explore whether parental bonding style is correlated with recognition of facial emotion in personality disorder patients. The Parental Bonding Instrument (PBI) and the Matsumoto and Ekman Japanese and Caucasian Facial Expressions of Emotion (JACFEE) photo set tests were administered to 289 participants. Patients scored lower on the parental Care subscale but higher on the parental Freedom Control and Autonomy Denial subscales, and they displayed less accuracy when recognizing contempt, disgust, and happiness than the healthy volunteers. In healthy volunteers, maternal Autonomy Denial significantly predicted accuracy when recognizing fear, and maternal Care predicted the accuracy of recognizing sadness. In patients, paternal Care negatively predicted the accuracy of recognizing anger, paternal Freedom Control predicted the perceived intensity of contempt, and maternal Care predicted the accuracy of recognizing sadness and the perceived intensity of disgust. Parental bonding styles have an impact on the decoding process and sensitivity when recognizing facial emotions, especially in personality disorder patients.

  18. Facial expression recognition and histograms of oriented gradients: a comprehensive study.

    Science.gov (United States)

    Carcagnì, Pierluigi; Del Coco, Marco; Leo, Marco; Distante, Cosimo

    2015-01-01

    Automatic facial expression recognition (FER) is a topic of growing interest, mainly due to the rapid spread of assistive-technology applications, such as human-robot interaction, where robust emotional awareness is key to accomplishing the assistive task. This paper proposes a comprehensive study of the application of the histogram of oriented gradients (HOG) descriptor to the FER problem, highlighting how this powerful technique can be effectively exploited for the purpose. In particular, the paper shows that a proper setting of the HOG parameters can make this descriptor one of the most suitable for characterizing facial expression peculiarities. A large experimental session, divided into three phases, was carried out using a consolidated algorithmic pipeline. The first phase aimed to prove the suitability of the HOG descriptor for characterizing facial expression traits; to this end, a successful comparison with the most commonly used FER frameworks was carried out. In the second phase, different publicly available facial datasets were used to test the system on images acquired under different conditions (e.g., image resolution, lighting conditions). In the final phase, a test on continuous data streams was carried out on-line in order to validate the system under real-world operating conditions simulating real-time human-machine interaction.
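    As a rough illustration of the descriptor the study tunes (not the authors' own pipeline), a HOG feature vector for a face crop can be computed with scikit-image. The orientation, cell, and block settings below are assumptions; the study's point is precisely that such settings strongly affect FER accuracy.

```python
from skimage.feature import hog
from skimage.transform import resize
from sklearn.svm import LinearSVC

def hog_features(face_img, size=(128, 128)):
    """HOG vector for a grayscale face crop.

    The orientation/cell/block settings are illustrative assumptions,
    not the values selected by the paper's parameter study.
    """
    img = resize(face_img, size, anti_aliasing=True)
    return hog(img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

# e.g. clf = LinearSVC().fit([hog_features(f) for f in train_faces], labels)
```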

  19. Reduced Recognition of Dynamic Facial Emotional Expressions and Emotion-Specific Response Bias in Children with an Autism Spectrum Disorder

    Science.gov (United States)

    Evers, Kris; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2015-01-01

    Emotion labelling was evaluated in two matched samples of 6-14-year old children with and without an autism spectrum disorder (ASD; N = 45 and N = 50, resp.), using six dynamic facial expressions. The Emotion Recognition Task proved to be valuable demonstrating subtle emotion recognition difficulties in ASD, as we showed a general poorer emotion…

  20. The Effect of Repeated Ketamine Infusion Over Facial Emotion Recognition in Treatment-Resistant Depression: A Preliminary Report.

    Science.gov (United States)

    Shiroma, Paulo R; Albott, C Sophia; Johns, Brian; Thuras, Paul; Wels, Joseph; Lim, Kelvin O

    2015-01-01

    In contrast to improvement in emotion recognition bias by traditional antidepressants, the authors report preliminary findings that changes in facial emotion recognition are not associated with response of depressive symptoms after repeated ketamine infusions or relapse during follow-up in treatment-resistant depression.

  2. Are there differential deficits in facial emotion recognition between paranoid and non-paranoid schizophrenia? A signal detection analysis.

    Science.gov (United States)

    Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long

    2013-10-30

    This study used signal detection theory to assess facial emotion recognition in paranoid and non-paranoid schizophrenia. We explored differential deficits in facial emotion recognition in 44 paranoid patients with schizophrenia (PS) and 30 non-paranoid patients with schizophrenia (NPS), compared with 80 healthy controls. We used morphed faces with different intensities of emotion and computed the sensitivity index (d') for each emotion. The results showed that performance differed between the schizophrenia and healthy control groups in the recognition of both negative and positive affects. The PS group performed worse than the healthy control group but better than the NPS group in overall performance. Performance differed between the NPS and healthy control groups in the recognition of all basic emotions and neutral faces; between the PS and healthy control groups in the recognition of angry faces; and between the PS and NPS groups in the recognition of happiness, anger, sadness, disgust, and neutral affects. The facial emotion recognition impairment in schizophrenia may therefore reflect a generalized deficit rather than one specific to negative emotions, with differential deficits between PS and NPS patients.
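    For reference, the sensitivity index d' used here is the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch follows; the log-linear correction is one common choice for avoiding infinite z-scores, not necessarily the one used in the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = Z(hit rate) - Z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the
    z-transform finite when a rate would otherwise be 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: 40 "angry" trials (32 correctly identified) and
# 40 non-angry trials (6 wrongly labelled angry).
print(d_prime(hits=32, misses=8, false_alarms=6, correct_rejections=34))
```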

  3. Is the emotion recognition deficit associated with frontotemporal dementia caused by selective inattention to diagnostic facial features?

    Science.gov (United States)

    Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V

    2014-07-01

    Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders.

  4. The relationship between facial emotion recognition and executive functions in first-episode patients with schizophrenia and their siblings.

    Science.gov (United States)

    Yang, Chengqing; Zhang, Tianhong; Li, Zezhi; Heeramun-Aubeeluck, Anisha; Liu, Na; Huang, Nan; Zhang, Jie; He, Leiying; Li, Hui; Tang, Yingying; Chen, Fazhan; Liu, Fei; Wang, Jijun; Lu, Zheng

    2015-10-08

    Although many studies have examined executive functions and facial emotion recognition in people with schizophrenia, few have focused on the correlation between them, and their relationship in the siblings of patients also remains unclear. The aim of the present study was to examine the correlation between executive functions and facial emotion recognition in patients with first-episode schizophrenia and their siblings. Thirty patients with first-episode schizophrenia, twenty-six of their siblings, and thirty healthy controls were enrolled. They completed facial emotion recognition tasks using the Ekman Standard Faces Database, and executive functioning was measured by the Wisconsin Card Sorting Test (WCST). Hierarchical regression analysis was applied to assess the correlation between executive functions and facial emotion recognition. In siblings, the accuracy in recognizing low-degree 'disgust' emotion was negatively correlated with the total correct rate in the WCST (r = -0.614, p = 0.023) but positively correlated with the total errors in the WCST (r = 0.623, p = 0.020); the accuracy in recognizing 'neutral' emotion was positively correlated with the total error rate in the WCST (r = 0.683, p = 0.014) and negatively correlated with the total correct rate (r = -0.677, p = 0.017). People with schizophrenia showed an impairment in facial emotion recognition when identifying moderate 'happy' facial emotion, the accuracy of which was significantly correlated with the number of completed categories of the WCST (R(2) = 0.432); no comparable association between executive functions and emotion recognition was found in the healthy control group. Our study demonstrated that facial emotion recognition impairment correlated with executive function impairment in people with schizophrenia and their unaffected siblings, but not in healthy controls.

  5. Facial and prosodic emotion recognition deficits associate with specific clusters of psychotic symptoms in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Huai-Hsuan Tseng

    Full Text Available BACKGROUND: Patients with schizophrenia perform significantly worse on emotion recognition tasks than healthy participants across several sensory modalities. Emotion recognition abilities are correlated with the severity of clinical symptoms, particularly negative symptoms. However, the relationships between specific deficits of emotion recognition across sensory modalities and the presentation of psychotic symptoms remain unclear. The current study aims to explore how emotion recognition ability across modalities and neurocognitive function correlate with clusters of psychotic symptoms in patients with schizophrenia. METHODS: 111 participants who met the DSM-IV diagnostic criteria for schizophrenia and 70 healthy participants performed a dual-modality emotion recognition task, the Diagnostic Analysis of Nonverbal Accuracy 2-Taiwan version (DANVA-2-TW), and selected subscales of the WAIS-III. Of these, 92 patients received neurocognitive evaluations, including the CPT and WCST, as well as the PANSS for clinical evaluation of symptomatology. RESULTS: The emotion recognition ability of patients with schizophrenia was significantly worse than that of healthy participants in both facial and vocal modalities, particularly for fearful emotion. An inverse correlation was noted between the PANSS total score and recognition accuracy for happy emotion. Difficulty in happy emotion recognition and an earlier age of onset, together with perseveration errors in the WCST, predicted total PANSS score. Furthermore, accuracy of happy emotion recognition and age of onset were the only two significant predictors of delusion/hallucination. All the associations with happy emotion recognition primarily concerned happy prosody. DISCUSSION: Deficits in emotional processing in specific categories, i.e., happy emotion, together with deficits in executive function, may reflect dysfunction of brain systems underlying the severity of psychotic symptoms, in particular the positive dimension.

  6. Design and Realization of a Web-Based Facial Recognition System

    Institute of Scientific and Technical Information of China (English)

    闾素红; 任艳娜

    2012-01-01

    Facial recognition draws on many disciplines, including pattern recognition, image processing, and computer vision, and has been a hot research topic in recent years. This paper combines facial recognition technology with digital video surveillance technology and designs a Web-based remote facial recognition monitoring system.

  7. Analysis, Interpretation, and Recognition of Facial Action Units and Expressions Using Neuro-Fuzzy Modeling

    CERN Document Server

    Khademi, Mahmoud; Manzuri-Shalmani, Mohammad T; Kiaei, Ali A

    2010-01-01

    In this paper an accurate real-time sequence-based system for representation, recognition, interpretation, and analysis of the facial action units (AUs) and expressions is presented. Our system has the following characteristics: 1) employing adaptive-network-based fuzzy inference systems (ANFIS) and temporal information, we developed a classification scheme based on neuro-fuzzy modeling of the AU intensity, which is robust to intensity variations, 2) using both geometric and appearance-based features, and applying efficient dimension reduction techniques, our system is robust to illumination changes and it can represent the subtle changes as well as temporal information involved in formation of the facial expressions, and 3) by continuous values of intensity and employing top-down hierarchical rule-based classifiers, we can develop accurate human-interpretable AU-to-expression converters. Extensive experiments on Cohn-Kanade database show the superiority of the proposed method, in comparison with support vect...

  8. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a deformable template resembling the distribution of the facial muscles. After regularization, the time sequences of the feature changes produced over a complete expression are arranged row by row in a matrix. Next, the matrix dimensionality is reduced by neighborhood-preserving embedding, a manifold learning method. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates a hidden conditional random field (HCRF) and a support vector machine (SVM). In an experiment using the Cohn-Kanade database, the proposed method showed a higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional face traits. Moreover, the proposed method was more robust than the typical Kotsia method because it retains more of the spatiotemporal structure of the data being classified.

  9. Facial Cosmetics and Attractiveness: Comparing the Effect Sizes of Professionally-Applied Cosmetics and Identity.

    Science.gov (United States)

    Jones, Alex L; Kramer, Robin S S

    2016-01-01

    Forms of body decoration exist in all human cultures. However, in Western societies, women are more likely to engage in appearance modification, especially through the use of facial cosmetics. How effective are cosmetics at altering attractiveness? Previous research has hinted that the effect is not large, especially when compared to the variation in attractiveness observed between individuals due to differences in identity. In order to build a fuller understanding of how cosmetics and identity affect attractiveness, here we examine how professionally-applied cosmetics alter attractiveness and compare this effect with the variation in attractiveness observed between individuals. In Study 1, 33 YouTube models were rated for attractiveness before and after the application of professionally-applied cosmetics. Cosmetics explained a larger proportion of the variation in attractiveness compared with previous studies, but this effect remained smaller than variation caused by differences in attractiveness between individuals. Study 2 replicated the results of the first study with a sample of 45 supermodels, with the aim of examining the effect of cosmetics in a sample of faces with low variation in attractiveness between individuals. While the effect size of cosmetics was generally large, between-person variability due to identity remained larger. Both studies also found interactions between cosmetics and identity: more attractive models received smaller increases when cosmetics were worn. Overall, we show that professionally-applied cosmetics produce a larger effect than self-applied cosmetics, an important theoretical consideration for the field. However, the effect of individual differences in facial appearance is ultimately more important in perceptions of attractiveness.

  10. Comparing Facial Emotional Recognition in Patients with Borderline Personality Disorder and Patients with Schizotypal Personality Disorder with a Normal Group

    Directory of Open Access Journals (Sweden)

    Aida Farsham

    2017-04-01

    Full Text Available Objective: No research has compared facial emotion recognition in patients with borderline personality disorder (BPD) and schizotypal personality disorder (SPD). The present study aimed at comparing facial emotion recognition in these patients with that of the general population, since the neurocognitive processing of emotions can reveal the pathological style of these two disorders. Method: Twenty BPD patients, 16 SPD patients, and 20 healthy individuals were selected by convenience sampling. The Structured Clinical Interview for Axis II, the Millon Personality Inventory, the Beck Depression Inventory, and a facial emotion recognition test were administered to all participants. Discussion: One-way ANOVA with Scheffe's post hoc tests revealed significant differences in neuropsychological assessment of facial emotion recognition between the BPD and SPD patients and the normal group (p = 0.001). A significant difference was found in recognition of fear between the BPD group and the normal population (p = 0.008), and between SPD patients and the control group in recognition of wonder (p = 0.04). The results indicated a deficit in negative emotion recognition, especially of disgust; thus, it can be concluded that these patients share a similar neurocognitive profile in the emotion domain.

  11. Facial Action Unit Recognition under Incomplete Data Based on Multi-label Learning with Missing Labels

    KAUST Repository

    Li, Yongqiang

    2016-07-07

    Facial action unit (AU) recognition has been applied in a wide range of fields and has attracted great attention in the past two decades. Most existing works on AU recognition assume that the complete label assignment for each training image is available, which is often not the case in practice: labeling AUs is an expensive and time-consuming process, and due to AU ambiguity and subjective differences, some AUs are difficult to label reliably and confidently. Many AU recognition works train a classifier for each AU independently, which is computationally costly and ignores the dependencies among different AUs. In this work, we formulate AU recognition under incomplete data as a multi-label learning with missing labels (MLML) problem. Most existing MLML methods employ the same features for all classes. However, we find this setting unreasonable for AU recognition, as the occurrence of different AUs produces changes in skin surface displacement or face appearance in different face regions. If shared features are used for all AUs, much noise is introduced by the occurrence of other AUs, so the changes associated with a specific AU cannot be clearly highlighted, degrading performance. Instead, we propose to extract the most discriminative features for each AU individually, learned by a supervised learning method. The learned features are further embedded into the instance-level label smoothness term of our model, which also includes label consistency and a class-level label smoothness term. Both a global solution using st-cut and an approximate solution using conjugate gradient (CG) descent are provided. Experiments on both posed and spontaneous facial expression databases demonstrate the superiority of the proposed method in comparison with several state-of-the-art works.

  12. Identity negative priming: a phenomenon of perception, recognition or selection?

    Science.gov (United States)

    Schrobsdorff, Hecke; Ihrke, Matthias; Behrendt, Jörg; Herrmann, J Michael; Hasselhorn, Marcus

    2012-01-01

    The present study addresses the question of whether negative priming (NP) is due to information processing in perception, recognition, or selection. We argue that most NP studies confound priming and perceptual similarity of prime-probe episodes, and we implement a color-switch paradigm in order to resolve the issue. In a series of three identity negative priming experiments with verbal naming response, we determined when NP and positive priming (PP) occur during a trial. The first experiment assessed the impact of target color on priming effects. It consisted of two blocks, each with a different fixed target color. With respect to target color, no differential priming effects were found. In Experiment 2 the target color was indicated by a cue for each trial. Here we resolved the confounding of perceptual similarity and priming condition. In trials with coinciding colors for prime and probe, we found priming effects similar to Experiment 1. However, trials with a target color switch showed such effects only in trials with role-reversal (distractor-to-target or target-to-distractor), whereas the PP effect in the target-repetition trials disappeared. Finally, Experiment 3 split trial processing into two phases by presenting the trial-wise color cue only after the stimulus objects had been recognized. We found recognition in every priming condition to be faster than in control trials. We were hence led to the conclusion that PP is strongly affected by perception, in contrast to NP, which emerges during selection; i.e., the two effects cannot be explained by a single mechanism.

  13. Identity negative priming: a phenomenon of perception, recognition or selection?

    Directory of Open Access Journals (Sweden)

    Hecke Schrobsdorff

    Full Text Available The present study addresses the question of whether negative priming (NP) is due to information processing in perception, recognition, or selection. We argue that most NP studies confound priming and perceptual similarity of prime-probe episodes, and we implement a color-switch paradigm in order to resolve the issue. In a series of three identity negative priming experiments with verbal naming response, we determined when NP and positive priming (PP) occur during a trial. The first experiment assessed the impact of target color on priming effects. It consisted of two blocks, each with a different fixed target color. With respect to target color, no differential priming effects were found. In Experiment 2 the target color was indicated by a cue for each trial. Here we resolved the confounding of perceptual similarity and priming condition. In trials with coinciding colors for prime and probe, we found priming effects similar to Experiment 1. However, trials with a target color switch showed such effects only in trials with role-reversal (distractor-to-target or target-to-distractor), whereas the PP effect in the target-repetition trials disappeared. Finally, Experiment 3 split trial processing into two phases by presenting the trial-wise color cue only after the stimulus objects had been recognized. We found recognition in every priming condition to be faster than in control trials. We were hence led to the conclusion that PP is strongly affected by perception, in contrast to NP, which emerges during selection; i.e., the two effects cannot be explained by a single mechanism.

  14. Development of Facial Emotion Recognition in Childhood: Age-related Differences in a Shortened Version of the Facial Expression of Emotions - Stimuli and Tests. Data from an ongoing study.

    NARCIS (Netherlands)

    Coenen, Maraike; Aarnoudse, Ceciel; Braams, O.; Veenstra, Wencke S.

    2014-01-01

    OBJECTIVE: Facial emotion recognition is a crucial aspect of social cognition, and deficits have been shown to be related to psychiatric disorders in adults and children. However, the development of facial emotion recognition is less clear (Herba & Phillips, 2004), and an appropriate instrument to measure…

  16. Distinct frontal and amygdala correlates of change detection for facial identity and expression

    Science.gov (United States)

    Achaibou, Amal; Loth, Eva

    2016-01-01

    Recruitment of ‘top-down’ frontal attentional mechanisms is held to support detection of changes in task-relevant stimuli. Fluctuations in intrinsic frontal activity have been shown to impact task performance more generally. Meanwhile, the amygdala has been implicated in ‘bottom-up’ attentional capture by threat. Here, 22 adult human participants took part in a functional magnetic resonance change detection study aimed at investigating the correlates of successful (vs failed) detection of changes in facial identity vs expression. For identity changes, we expected prefrontal recruitment to differentiate ‘hit’ from ‘miss’ trials, in line with previous reports. Meanwhile, we postulated that a different mechanism would support detection of emotionally salient changes. Specifically, elevated amygdala activation was predicted to be associated with successful detection of threat-related changes in expression, over-riding the influence of fluctuations in top-down attention. Our findings revealed that fusiform activity tracked change detection across conditions. Ventrolateral prefrontal cortical activity was uniquely linked to detection of changes in identity not expression, and amygdala activity to detection of changes from neutral to fearful expressions. These results are consistent with distinct mechanisms supporting detection of changes in face identity vs expression, the former potentially reflecting top-down attention, the latter bottom-up attentional capture by stimulus emotional salience. PMID:26245835

  17. Distinct frontal and amygdala correlates of change detection for facial identity and expression.

    Science.gov (United States)

    Achaibou, Amal; Loth, Eva; Bishop, Sonia J

    2016-02-01

    Recruitment of 'top-down' frontal attentional mechanisms is held to support detection of changes in task-relevant stimuli. Fluctuations in intrinsic frontal activity have been shown to impact task performance more generally. Meanwhile, the amygdala has been implicated in 'bottom-up' attentional capture by threat. Here, 22 adult human participants took part in a functional magnetic resonance change detection study aimed at investigating the correlates of successful (vs failed) detection of changes in facial identity vs expression. For identity changes, we expected prefrontal recruitment to differentiate 'hit' from 'miss' trials, in line with previous reports. Meanwhile, we postulated that a different mechanism would support detection of emotionally salient changes. Specifically, elevated amygdala activation was predicted to be associated with successful detection of threat-related changes in expression, over-riding the influence of fluctuations in top-down attention. Our findings revealed that fusiform activity tracked change detection across conditions. Ventrolateral prefrontal cortical activity was uniquely linked to detection of changes in identity not expression, and amygdala activity to detection of changes from neutral to fearful expressions. These results are consistent with distinct mechanisms supporting detection of changes in face identity vs expression, the former potentially reflecting top-down attention, the latter bottom-up attentional capture by stimulus emotional salience.

  18. Enhanced retinal modeling for face recognition and facial feature point detection under complex illumination conditions

    Science.gov (United States)

    Cheng, Yong; Li, Zuoyong; Jiao, Liangbao; Lu, Hong; Cao, Xuehong

    2016-07-01

    We improved classic retinal modeling to alleviate the adverse effect of complex illumination on face recognition and extracted robust image features. Our improvements on classic retinal modeling comprise three aspects. First, a combined filtering scheme was applied to simulate the functions of horizontal and amacrine cells for accurate local illumination estimation. Second, we developed an optimal threshold method for illumination classification. Finally, we proposed an adaptive factor acquisition model based on the arctangent function. Experimental results on the combined Yale B database, the Carnegie Mellon University Pose, Illumination, and Expression (PIE) database, and the Labeled Face Parts in the Wild database show that the proposed method can effectively alleviate illumination differences between images captured under complex illumination conditions, which helps improve the accuracy of both face recognition and facial feature point detection.
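    The paper's combined filtering scheme and arctangent-based adaptive factor are not specified in the abstract; the following is only a loose, retina-inspired normalization sketch in that spirit, with all parameter choices assumed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retina_like_normalise(img, sigma=7.0):
    """Retina-inspired illumination normalization (rough sketch).

    A Gaussian-smoothed copy of the image stands in for the
    horizontal/amacrine-cell local illumination estimate, and an
    arctangent nonlinearity provides the adaptive gain; the paper's
    specific model is not reproduced here.
    """
    img = np.asarray(img, dtype=np.float64) / 255.0
    local = gaussian_filter(img, sigma)       # local illumination estimate
    gain = np.arctan(1.0 / (local + 1e-3))    # darker regions receive more gain
    out = img * gain
    return (out - out.min()) / (np.ptp(out) + 1e-12)  # rescale to [0, 1]
```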

  19. Subjective disturbance of perception is related to facial affect recognition in schizophrenia.

    Science.gov (United States)

    Comparelli, Anna; De Carolis, Antonella; Corigliano, Valentina; Romano, Silvia; Kotzalidis, Giorgio D; Campana, Chiara; Ferracuti, Stefano; Tatarelli, Roberto; Girardi, Paolo

    2011-10-01

    To examine the relationship between facial affect recognition (FAR) and subjective perceptual disturbances (SPDs), we assessed SPDs in 82 patients with DSM-IV schizophrenia (44 with first-episode psychosis [FEP] and 38 with multiple episodes [ME]) using two subscales of the Frankfurt Complaint Questionnaire (FCQ), WAS (simple perception) and WAK (complex perception). Emotional judgment ability was assessed using Ekman and Friesen's FAR task. Impaired recognition of emotion correlated with scores on the WAS but not on the WAK. The association was significant in the entire group and in the ME group. FAR was more impaired in the ME than in the FEP group. Our findings suggest that there is a relationship between SPDs and FAR impairment in schizophrenia, particularly in multiple-episode patients.

  20. Facial Expression Recognition by Supervised Independent Component Analysis Using MAP Estimation

    Science.gov (United States)

    Chen, Fan; Kotani, Kazunori

    Permutation ambiguity of classical Independent Component Analysis (ICA) may cause problems in feature extraction for pattern classification. Especially when only a small subset of components is derived from the data, these components may not be the most distinctive for classification, because ICA is an unsupervised method. We include a selective prior on the de-mixing coefficients in classical ICA to alleviate the problem. Since the prior is constructed from the classification information in the training data, we refer to the proposed ICA model with a selective prior as supervised ICA (sICA). We formulated the learning rule for sICA by taking a Maximum a Posteriori (MAP) approach and further derived a fixed-point algorithm for learning the de-mixing matrix. We investigate the performance of sICA in facial expression recognition in terms of both recognition rate and robustness, even when few independent components are used.
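    For orientation, the unsupervised baseline that sICA extends can be sketched with scikit-learn's FastICA. The MAP prior on the de-mixing coefficients, which is the paper's actual contribution, is not reproduced here, and the component count and classifier are assumptions.

```python
from sklearn.decomposition import FastICA
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# X: (n_samples, n_pixels) flattened expression images. Classical ICA is
# unsupervised, so the components it extracts need not be the most
# discriminative ones -- precisely the problem the paper's MAP-based
# supervised ICA (sICA) is designed to fix.
pipeline = make_pipeline(FastICA(n_components=40, max_iter=1000),
                         LinearSVC())
# pipeline.fit(X_train, y_train)
```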

  1. Emotion Recognition following Pediatric Traumatic Brain Injury: Longitudinal Analysis of Emotional Prosody and Facial Emotion Recognition

    Science.gov (United States)

    Schmidt, Adam T.; Hanten, Gerri R.; Li, Xiaoqi; Orsten, Kimberley D.; Levin, Harvey S.

    2010-01-01

    Children with closed head injuries often experience significant and persistent disruptions in their social and behavioral functioning. Studies with adults sustaining a traumatic brain injury (TBI) indicate deficits in emotion recognition and suggest that these difficulties may underlie some of the social deficits. The goal of the current study was…

  2. Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    傅栩雨; 叶健东; 王鹏; 曾颖森

    2015-01-01

    In recent years, interaction and intelligence have attracted growing attention. Facial expression recognition, an important part of artificial intelligence, makes human-machine interaction friendlier and more intelligent by recognizing emotion from the face. This paper describes the complete emotion recognition process, from real-time camera images through to the final recognition result and its display. Rather than focusing on a single component, it outlines the whole pipeline, introduces each of the topics involved from theory to application, points out the specific methods used, and selects and compares functionally similar modules in the context of a practical application.

  3. Facial emotion recognition system for autistic children: a feasible study based on FPGA implementation.

    Science.gov (United States)

    Smitha, K G; Vinod, A P

    2015-11-01

    Children with autism spectrum disorder have difficulty understanding emotional and mental states from the facial expressions of the people they interact with. This inability to understand other people's emotions hinders their interpersonal communication. Though many facial emotion recognition algorithms have been proposed in the literature, they are mainly intended for processing by a personal computer, which limits their usability in on-the-move applications where portability is desired. Portability ensures ease of use and real-time emotion recognition, providing immediate feedback while the child communicates with caretakers. Principal component analysis (PCA) has been identified as the least complex feature extraction algorithm to implement in hardware. In this paper, we present a detailed study of serial and parallel implementations of PCA in order to identify the most feasible method for realizing a portable emotion detector for autistic children. The proposed emotion recognizer architectures are implemented on a Virtex-7 XC7VX330T FFG1761-3 FPGA. We achieved 82.3% detection accuracy for a word length of 8 bits.
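    A minimal software reference for a PCA-based recognizer of this kind is sketched below; the fixed-point FPGA design itself is not reproduced, and the component count and nearest-neighbour classifier are assumptions rather than the paper's choices.

```python
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# X: (n_samples, n_pixels) flattened grayscale face crops, y: labels.
# A small component count mirrors the resource limits of an embedded
# design; the 8-bit fixed-point arithmetic of the FPGA is not modelled.
model = make_pipeline(PCA(n_components=30, whiten=True),
                      KNeighborsClassifier(n_neighbors=3))
# model.fit(X_train, y_train); accuracy = model.score(X_test, y_test)
```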

  4. Structural correlates of facial emotion recognition deficits in Parkinson's disease patients.

    Science.gov (United States)

    Baggio, H C; Segura, B; Ibarretxe-Bilbao, N; Valldeoriola, F; Marti, M J; Compta, Y; Tolosa, E; Junqué, C

    2012-07-01

    The ability to recognize facial emotion expressions, especially negative ones, has been described as impaired in Parkinson's disease (PD) patients. Previous neuroimaging work evaluating the neural substrate of facial emotion recognition (FER) in healthy and pathological subjects has mostly focused on functional changes. This study was designed to evaluate gray matter (GM) and white matter (WM) correlates of FER in a large sample of PD patients. Thirty-nine PD patients and 23 healthy controls (HC) were tested with the Ekman 60 test for FER and with magnetic resonance imaging; effects of associated depressive symptoms were taken into account. In accordance with previous studies, PD patients performed significantly worse in recognizing sadness, anger, and disgust. In PD patients, voxel-based morphometry analysis revealed areas of positive correlation between individual emotion recognition and GM volume: the right orbitofrontal cortex, amygdala, and postcentral gyrus for sadness identification; the right occipital fusiform gyrus, ventral striatum, and subgenual cortex for anger identification; and the anterior cingulate cortex (ACC) for disgust identification. WM analysis through diffusion tensor imaging revealed significant positive correlations between fractional anisotropy levels in the frontal portion of the right inferior fronto-occipital fasciculus and performance in the identification of sadness. These findings shed light on the structural neural bases of the deficits presented by PD patients in this skill.

  5. Elementary neurocognitive function, facial affect recognition and social-skills in schizophrenia.

    Science.gov (United States)

    Meyer, Melissa B; Kurtz, Matthew M

    2009-05-01

    Social-skill deficits are pervasive in schizophrenia and negatively impact many key aspects of functioning. Prior studies have found that measures of elementary neurocognition and social cognition are related to social-skills. In the present study we selected a range of neurocognitive measures and examined their relationship with identification of happy and sad faces and performance-based social-skills. Fifty-three patients with schizophrenia or schizoaffective disorder participated. Results revealed that: 1) visual vigilance, problem-solving and affect recognition were related to social-skill; 2) links between problem-solving and social-skill, but not visual vigilance and social-skill, remained significant when estimates of verbal intelligence were controlled; 3) affect recognition deficits explained unique variance in social-skill after neurocognitive variables were controlled; and 4) affect recognition deficits partially mediated the relationship of visual vigilance and social-skill. These results support the conclusion that facial affect recognition deficits are a crucial domain of impairment in schizophrenia that both contribute unique variance to social-skill deficits and may also mediate the relationship between some aspects of neurocognition and social-skill. These findings may help guide the development and refinement of cognitive and social-cognitive remediation methods for social-skill impairment.

  6. Face inversion decreased information about facial identity and expression in face-responsive neurons in macaque area TE.

    Science.gov (United States)

    Sugase-Miyamoto, Yasuko; Matsumoto, Narihisa; Ohyama, Kaoru; Kawano, Kenji

    2014-09-10

    To investigate the effect of face inversion and thatcherization (eye inversion) on temporal processing stages of facial information, single-neuron activities in the temporal cortex (area TE) of two rhesus monkeys were recorded. Test stimuli were colored pictures of monkey faces (four with four different expressions), human faces (three with four different expressions), and geometric shapes. Modifications were made to each face picture, and its four variations were used as stimuli: upright original, inverted original, upright thatcherized, and inverted thatcherized faces. A total of 119 neurons responded to at least one of the upright original facial stimuli. A majority of the neurons (71%) showed activity modulations depending on upright versus inverted presentation, and a smaller number of neurons (13%) showed activity modulations depending on original versus thatcherized condition. In the case of face inversion, information about the fine category (facial identity and expression) decreased, whereas information about the global category (monkey vs. human vs. shape) was retained for both the original and thatcherized faces. Principal component analysis on the neuronal population responses revealed that global categorization occurred regardless of face inversion and that the inverted faces were represented near the upright faces in principal component space. By contrast, face inversion decreased the ability to represent human facial identity and monkey facial expression. Thus, the neuronal population represented inverted faces as faces but failed to represent the identity and expression of inverted faces, indicating that the neuronal representation in area TE causes the perceptual effect of face inversion.

  7. Neural processing of facial identity and emotion in infants at high risk for autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Sharon Elizabeth Fox

    2013-04-01

    Full Text Available Deficits in face processing and social impairment are core characteristics of autism spectrum disorder. The present work examined 7-month-old infants at high risk for developing autism and typically developing controls at low risk, using a face perception task designed to differentiate between the effects of face identity and facial emotion on neural response measured with functional near-infrared spectroscopy (fNIRS). In addition, we employed independent component analysis (ICA), as well as a novel method of condition-related component selection and classification, to identify group differences in hemodynamic waveforms and response distributions associated with face and emotion processing. The results indicate similarities of waveforms but differences in the magnitude, spatial distribution, and timing of responses between groups. These early differences in local cortical regions and the hemodynamic response may, in turn, contribute to differences in patterns of functional connectivity.

  8. Facial emotion recognition in euthymic patients with bipolar disorder and their unaffected first-degree relatives.

    Science.gov (United States)

    de Brito Ferreira Fernandes, Francy; Gigante, Alexandre Duarte; Berutti, Mariangeles; Amaral, José Antônio; de Almeida, Karla Mathias; de Almeida Rocca, Cristiana Castanho; Lafer, Beny; Nery, Fabiano Gonçalves

    2016-07-01

    Facial emotion recognition (FER) is an important task associated with social cognition because facial expression is a significant source of non-verbal information that guides interpersonal relationships. Increasing evidence suggests that bipolar disorder (BD) patients present deficits in FER and that these deficits may be present in individuals at high genetic risk for BD. The aim of this study was to evaluate the occurrence of FER deficits in euthymic BD patients, their first-degree relatives, and healthy controls (HC), and to consider whether these deficits might be regarded as a candidate endophenotype for BD. We studied 23 patients with DSM-IV BD type I, 22 first-degree relatives of these patients, and 27 HC. We used the Penn Emotion Recognition Tests to evaluate FER, emotion discrimination, and emotional acuity. Patients were recruited from outpatient facilities at the Institute of Psychiatry of the University of Sao Paulo Medical School or from the community through media advertisements; they had to be euthymic, older than 18 years, and diagnosed with DSM-IV BD type I. Euthymic BD patients gave significantly fewer correct responses for fear and took significantly longer to recognize happy faces when compared with HC, but not when compared with first-degree relatives. First-degree relatives did not differ significantly from HC on any of the emotion recognition tasks. Our results suggest that deficits in FER are present in euthymic patients, but not in subjects at high genetic risk for BD. Thus, we have not found evidence to consider FER a candidate endophenotype for BD.

  9. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin

    2015-07-29

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images along with their 3D face scans are localized using a novel algorithm namely incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor in conjunction with the widely used first-order gradient based SIFT descriptor are used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using the first-order and the second-order surface differential geometry quantities, i.e., Histogram of mesh Gradients (meshHOG) and Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature-level and score-level to further improve the accuracy. Comprehensive experimental results demonstrate that there exist impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to the state-of-the-art ones. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, a good generalization ability is shown on the Bosphorus database.
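    A rough sketch of the score-level fusion step described above, assuming precomputed 2D and 3D descriptor matrices and equal fusion weights (both assumptions; per the abstract, the approach also fuses at feature level by concatenating the descriptors):

```python
from sklearn.svm import SVC

def fused_predictions(X2d_tr, X3d_tr, y_tr, X2d_te, X3d_te):
    """Train one SVM per modality and fuse class scores at test time.

    X2d_* / X3d_* stand for precomputed 2D (e.g. HSOG/SIFT) and 3D
    (e.g. meshHOG/meshHOS) descriptor matrices -- hypothetical
    placeholders. The equal 0.5/0.5 weighting is an assumption.
    """
    clf2d = SVC(kernel="rbf", probability=True).fit(X2d_tr, y_tr)
    clf3d = SVC(kernel="rbf", probability=True).fit(X3d_tr, y_tr)
    scores = (0.5 * clf2d.predict_proba(X2d_te)
              + 0.5 * clf3d.predict_proba(X3d_te))
    return clf2d.classes_[scores.argmax(axis=1)]
```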

  10. Forensic facial approximation assessment: can application of different average facial tissue depth data facilitate recognition and establish acceptable level of resemblance?

    Science.gov (United States)

    Herrera, Lara Maria; Strapasson, Raíssa Ananda Paim; da Silva, Jorge Vicente Lopes; Melani, Rodolfo Francisco Haltenhoff

    2016-09-01

    Facial soft tissue thicknesses (FSTT) are important guidelines for modeling faces from the skull. Among the many FSTT datasets available, forensic artists have to make a subjective choice of the dataset that best meets their needs. This study investigated the performance of four FSTT datasets in the recognition of, and resemblance to, living Brazilian individuals, as well as the performance of assessors in recognizing people according to sex and knowledge of human anatomy and forensic dentistry. Sixteen manual facial approximations (FAs) were constructed using three-dimensional (3D) prototypes of skulls (targets). The American method was chosen for the construction of the faces. One hundred and twenty participants evaluated all FAs by means of recognition and resemblance tests. This study showed higher proportions of recognition for FAs constructed with FSTT data from cadavers than for those constructed with medical imaging data. Targets were also considered more similar to FAs constructed with FSTT data from cadavers. Nose and face shape, respectively, were considered the regions most similar to the targets. The sex of assessors and their knowledge of human anatomy and forensic dentistry did not play a determinant role in achieving greater recognition rates. It was possible to conclude that FSTT data obtained from imaging may not facilitate recognition or establish an acceptable level of resemblance. Grouping FSTT data by regions of the face, as proposed in this paper, may contribute to more accurate FAs.

  11. Brain Network Involved in the Recognition of Facial Expressions of Emotion in the Early Blind

    Directory of Open Access Journals (Sweden)

    Ryo Kitada

    2011-10-01

    Full Text Available Previous studies suggest that the brain network responsible for the recognition of facial expressions of emotion (FEEs) begins to emerge early in life. However, it has been unclear whether visual experience of faces is necessary for the development of this network. Here, we conducted both psychophysical and functional magnetic resonance imaging (fMRI) experiments to test the hypothesis that the brain network underlying the recognition of FEEs is not dependent on visual experience of faces. Early-blind, late-blind, and sighted subjects participated in the psychophysical experiment. Regardless of group, subjects haptically identified basic FEEs at above-chance levels, without any feedback training. In the subsequent fMRI experiment, the early-blind and sighted subjects haptically identified facemasks portraying three different FEEs and casts of three different shoe types. The sighted subjects also completed a visual version of the task with the same stimuli. Within the brain regions activated by the visually identified FEEs (relative to shoes), haptic identification of FEEs (relative to shoes) by the early-blind and sighted individuals activated the posterior middle temporal gyrus adjacent to the superior temporal sulcus, the inferior frontal gyrus, and the fusiform gyrus. Collectively, these results suggest that the brain network responsible for FEE recognition can develop without any visual experience of faces.

  12. Are event-related potentials to dynamic facial expressions of emotion related to individual differences in the accuracy of processing facial expressions and identity?

    Science.gov (United States)

    Recio, Guillermo; Wilhelm, Oliver; Sommer, Werner; Hildebrandt, Andrea

    2017-04-01

    Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

  13. Learning weighted sparse representation of encoded facial normal information for expression-robust 3D face recognition

    KAUST Repository

    Li, Huibin

    2011-10-01

    This paper proposes a novel approach to 3D face recognition by learning a weighted sparse representation of encoded facial normal information. To comprehensively describe the 3D facial surface, the three components of the normal vector (in X, Y, and Z, respectively) are encoded locally into their corresponding normal pattern histograms. These are finally fed to a sparse representation classifier enhanced by learning-based spatial weights. Experimental results achieved on the FRGC v2.0 database prove that the proposed encoded normal information is much more discriminative than the original normal information. Moreover, the patch-based weights learned using the FRGC v1.0 and Bosphorus datasets also demonstrate the importance of each facial physical component for 3D face recognition.
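    As a loose illustration (not the authors' encoding, which operates locally per facial patch), the component-wise histogram idea can be sketched as global histograms over per-vertex unit normals:

```python
import numpy as np

def normal_pattern_histograms(normals, n_bins=16):
    """Encode unit normal vectors of a 3D face scan into one histogram
    per component (X, Y, Z) and concatenate them.

    normals: (n_vertices, 3) array of unit normals. The paper encodes
    normals locally, per facial patch, before classification; pooling
    over the whole scan as done here is a deliberate simplification.
    """
    hists = []
    for c in range(3):
        h, _ = np.histogram(normals[:, c], bins=n_bins, range=(-1.0, 1.0))
        hists.append(h / max(h.sum(), 1))  # normalize each histogram
    return np.concatenate(hists)
```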

  14. Deficits in Facial Expression Recognition in Male Adolescents with Early-Onset or Adolescence-Onset Conduct Disorder

    Science.gov (United States)

    Fairchild, Graeme; Van Goozen, Stephanie H. M.; Calder, Andrew J.; Stollery, Sarah J.; Goodyer, Ian M.

    2009-01-01

    Background: We examined whether conduct disorder (CD) is associated with deficits in facial expression recognition and, if so, whether these deficits are specific to the early-onset form of CD, which emerges in childhood. The findings could potentially inform the developmental taxonomic theory of antisocial behaviour, which suggests that…

  16. Processing faces and facial expressions.

    Science.gov (United States)

    Posamentier, Mette T; Abdi, Hervé

    2003-09-01

    This paper reviews the processing of facial identity and facial expressions. The question of whether these two tasks are handled by independent systems has been addressed from different approaches over the past 25 years. More recently, neuroimaging techniques have provided researchers with new tools to investigate how facial information is processed in the brain. First, findings from "traditional" approaches to identity and expression processing are summarized. The review then covers findings from neuroimaging studies on face perception, recognition, and encoding. Processing of the basic facial expressions is detailed in light of behavioral and neuroimaging data. Whereas data from experimental and neuropsychological studies support the existence of two systems, the neuroimaging literature yields a less clear picture because it shows considerable overlap in activation patterns in response to the different face-processing tasks. Further, activation patterns in response to facial expressions support the notion of distinct neural substrates for processing different facial expressions.

  17. A new look at emotion perception: Concepts speed and shape facial emotion recognition.

    Science.gov (United States)

    Nook, Erik C; Lindquist, Kristen A; Zaki, Jamil

    2015-10-01

    Decades ago, the "New Look" movement challenged how scientists thought about vision by suggesting that conceptual processes shape visual perceptions. Currently, affective scientists are likewise debating the role of concepts in emotion perception. Here, we utilized a repetition-priming paradigm in conjunction with signal detection and individual difference analyses to examine how providing emotion labels-which correspond to discrete emotion concepts-affects emotion recognition. In Study 1, pairing emotional faces with emotion labels (e.g., "sad") increased individuals' speed and sensitivity in recognizing emotions. Additionally, individuals with alexithymia-who have difficulty labeling their own emotions-struggled to recognize emotions based on visual cues alone, but not when emotion labels were provided. Study 2 replicated these findings and further demonstrated that emotion concepts can shape perceptions of facial expressions. Together, these results suggest that emotion perception involves conceptual processing. We discuss the implications of these findings for affective, social, and clinical psychology.
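
    For readers unfamiliar with the signal detection analyses mentioned here, the sketch below computes the standard sensitivity index d' from hit and false-alarm counts; the counts are illustrative, not data from the study.

```python
# Minimal d' (sensitivity) sketch with a log-linear correction that
# avoids infinite z-scores when rates hit 0 or 1.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Illustrative counts only:
print(d_prime(hits=42, misses=8, false_alarms=12, correct_rejections=38))
```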

  18. Research Advances in Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    黄建; 李文书; 高玉娟

    2016-01-01

    Facial Expression Recognition (FER) is an important research direction in computer vision, machine learning, and artificial intelligence, and it has become a research focus for scholars in China and abroad. This paper introduces the workflow of an FER system, summarizes the common methods for expression feature extraction and expression classification together with recent improvements to these methods by researchers in China and abroad, and compares the advantages and disadvantages of these methods. Finally, it analyzes the difficult problems in current FER research and offers an outlook on future directions for FER.

  19. Childhood Facial Recognition Predicts Adolescent Symptom Severity in Autism Spectrum Disorder.

    Science.gov (United States)

    Eussen, Mart L J M; Louwerse, Anneke; Herba, Catherine M; Van Gool, Arthur R; Verheij, Fop; Verhulst, Frank C; Greaves-Lord, Kirstin

    2015-06-01

    Limited accuracy and speed in facial recognition (FR) and in the identification of facial emotions (IFE) have been shown in autism spectrum disorders (ASD). This study aimed to evaluate the predictive value of atypicalities in FR and IFE for future symptom severity in children with ASD. We therefore performed a seven-year follow-up study of 87 children with ASD. FR and IFE were assessed in childhood (T1: age 6-12) using the Amsterdam Neuropsychological Tasks (ANT). Symptom severity was assessed using the Autism Diagnostic Observation Schedule (ADOS) in childhood and again seven years later in adolescence (T2: age 12-19). Multiple regression analyses were performed to investigate whether FR and IFE in childhood predicted ASD symptom severity in adolescence, while controlling for ASD symptom severity in childhood. We found that more accurate FR significantly predicted lower adolescent ASD symptom severity scores (ΔR² = .09), even when controlling for childhood ASD symptom severity. IFE was not a significant predictor of ASD symptom severity in adolescence. From these results it can be concluded that, in children with ASD, the accuracy of FR in childhood is a relevant predictor of ASD symptom severity in adolescence. Test results on FR in children with ASD may have prognostic value regarding later symptom severity.
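
    The ΔR² logic of these regressions can be made concrete with a small sketch on simulated data: fit a baseline model with childhood severity only, add FR accuracy, and compare R². Variable names and effect sizes below are illustrative, not the study's data.

```python
# Hierarchical-regression sketch: incremental R^2 of FR accuracy over
# baseline severity (simulated data).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 87
severity_t1 = rng.normal(size=n)                  # childhood severity
fr_accuracy = rng.normal(size=n)                  # childhood FR accuracy
severity_t2 = 0.5 * severity_t1 - 0.3 * fr_accuracy + rng.normal(size=n)

base = LinearRegression().fit(severity_t1[:, None], severity_t2)
r2_base = base.score(severity_t1[:, None], severity_t2)

full_X = np.column_stack([severity_t1, fr_accuracy])
full = LinearRegression().fit(full_X, severity_t2)
r2_full = full.score(full_X, severity_t2)

print(f"delta R^2 for FR = {r2_full - r2_base:.3f}")
```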

  20. Learning Expressionlets via Universal Manifold Model for Dynamic Facial Expression Recognition

    Science.gov (United States)

    Liu, Mengyi; Shan, Shiguang; Wang, Ruiping; Chen, Xilin

    2016-12-01

    Facial expression is a temporally dynamic event which can be decomposed into a set of muscle motions occurring in different facial regions over various time intervals. For dynamic expression recognition, two key issues must be taken into account: temporal alignment and semantics-aware dynamic representation. In this paper, we attempt to solve both problems via manifold modeling of videos based on a novel mid-level representation, the expressionlet. Specifically, our method contains three key stages: 1) each expression video clip is characterized as a spatial-temporal manifold (STM) formed by dense low-level features; 2) a Universal Manifold Model (UMM) is learned over all low-level features and represented as a set of local modes to statistically unify all the STMs; 3) the local modes on each STM are instantiated by fitting to the UMM, and the corresponding expressionlet is constructed by modeling the variations in each local mode. With this strategy, expression videos are naturally aligned both spatially and temporally. To enhance discriminative power, the expressionlet-based STM representation is further processed with discriminant embedding. Our method is evaluated on four public expression databases: CK+, MMI, Oulu-CASIA, and FERA. In all cases, it outperforms the known state of the art by a large margin.
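
    A loose analogue of the UMM stage, under simplifying assumptions, is to fit a Gaussian mixture over the pooled low-level features of all clips and then describe each clip by its posterior-weighted deviation from every local mode. The sketch below shows that analogue; it is not the paper's exact expressionlet construction.

```python
# GMM-based analogue of a "universal model" over pooled clip features,
# with a per-clip encoding of variation within each local mode.
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_umm(all_clip_features, n_modes=16):
    # all_clip_features: (n_total_descriptors, dim), pooled across videos
    return GaussianMixture(n_components=n_modes, covariance_type="diag",
                           random_state=0).fit(all_clip_features)

def encode_clip(umm, clip_features):
    # Soft-assign each descriptor to the local modes, then summarize the
    # clip by posterior-weighted mean offsets per mode, concatenated
    # into a single vector.
    resp = umm.predict_proba(clip_features)        # (n_desc, n_modes)
    blocks = []
    for k in range(umm.n_components):
        w = resp[:, k:k + 1]
        mean_k = (w * clip_features).sum(0) / (w.sum() + 1e-8)
        blocks.append(mean_k - umm.means_[k])
    return np.concatenate(blocks)
```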

  1. Geometric Feature-Based Facial Expression Recognition in Image Sequences Using Multi-Class AdaBoost and Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Joonwhoan Lee

    2013-06-01

    Facial expressions are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper, we present a novel method for fully automatic facial expression recognition in facial image sequences. As the facial expression evolves over time, facial landmarks are automatically tracked in consecutive video frames using displacement estimation based on elastic bunch graph matching. Feature vectors from individual landmarks, as well as from pairs of landmarks, are extracted from the tracking results and normalized with respect to the first frame in the sequence. The prototypical expression sequence for each class of facial expression is formed by taking the median of the landmark tracking results from the training facial expression sequences. Multi-class AdaBoost, using the dynamic time warping similarity distance between the feature vector of an input facial expression and the prototypical facial expression as a weak classifier, is used to select the subset of discriminative feature vectors. Finally, two methods for facial expression recognition are presented: multi-class AdaBoost with dynamic time warping, and a support vector machine on the boosted feature vectors. Results on the Cohn-Kanade (CK+) facial expression database show recognition accuracies of 95.17% and 97.35% using multi-class AdaBoost and support vector machines, respectively.
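
    The weak classifiers above rest on a dynamic time warping (DTW) distance between an input sequence and a class prototype. The sketch below is a minimal textbook DTW over per-frame feature vectors, not the paper's implementation.

```python
# Minimal DTW sketch: align two sequences of per-frame feature vectors
# and return the accumulated alignment cost.
import numpy as np

def dtw_distance(seq_a, seq_b):
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            # Extend the cheapest of the three allowed alignment moves.
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]
```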

  2. Social identity and the recognition of creativity in groups

    NARCIS (Netherlands)

    Adarves-Yorno, Inmaculada; Postmes, Tom; Haslam, S. Alexander

    2006-01-01

    This paper develops an analysis of creativity that is informed by the social identity approach. Two studies are reported that support this analysis. Study I (N = 73) manipulated social identity salience and the content of group norms. The group norm was either conservative (i.e. promoted no change)

  3. Second Career Teachers and (Mis)Recognitions of Professional Identities

    Science.gov (United States)

    Nielsen, Ann

    2016-01-01

    Since the late 1980s there has been an increase of "second career teachers" (SCTs), professionals that switch careers to become teachers. Little is known about SCTs and their sense of professional identity. Building from Pierre Bourdieu's concepts of power and cultural capital, the professional identities of teachers were examined…

  4. Socio-demographic and Clinical Correlates of Facial Expression Recognition Disorder in the Euthymic Phase of Bipolar Patients

    Science.gov (United States)

    Moriano, Christian; Farruggio, Lisa; Jover, Frédéric

    2016-01-01

    Objective: Bipolar patients show social cognitive disorders. The objective of this study is to review facial expression recognition (FER) disorders in bipolar patients (BP) and explore clinical heterogeneity factors that could affect them in the euthymic phase: socio-demographic level, clinical and changing characteristics of the disorder, history of suicide attempt, and abuse. Method: Thirty-four euthymic bipolar patients and 29 control subjects completed a computer task of explicit facial expression recognition and were clinically evaluated. Results: Compared with control subjects, BP patients show: a decrease in fear, anger, and disgust recognition; an extended reaction time for disgust, surprise and neutrality recognition; confusion between fear and surprise, anger and disgust, disgust and sadness, sadness and neutrality. In BP patients, age negatively affects anger and neutrality recognition, as opposed to education level which positively affects recognizing these emotions. The history of patient abuse negatively affects surprise and disgust recognition, and the number of suicide attempts negatively affects disgust and anger recognition. Conclusions: Cognitive heterogeneity in euthymic phase BP patients is affected by several factors inherent to bipolar disorder complexity that should be considered in social cognition study. PMID:27310226

  5. Socio-demographic and Clinical Correlates of Facial Expression Recognition Disorder in the Euthymic Phase of Bipolar Patients.

    Science.gov (United States)

    Iakimova, Galina; Moriano, Christian; Farruggio, Lisa; Jover, Frédéric

    2016-10-01

    Bipolar patients show social cognitive disorders. The objective of this study is to review facial expression recognition (FER) disorders in bipolar patients (BP) and explore clinical heterogeneity factors that could affect them in the euthymic phase: socio-demographic level, clinical and changing characteristics of the disorder, history of suicide attempt, and abuse. Thirty-four euthymic bipolar patients and 29 control subjects completed a computer task of explicit facial expression recognition and were clinically evaluated. Compared with control subjects, BP patients show: a decrease in fear, anger, and disgust recognition; an extended reaction time for disgust, surprise and neutrality recognition; confusion between fear and surprise, anger and disgust, disgust and sadness, sadness and neutrality. In BP patients, age negatively affects anger and neutrality recognition, as opposed to education level which positively affects recognizing these emotions. The history of patient abuse negatively affects surprise and disgust recognition, and the number of suicide attempts negatively affects disgust and anger recognition. Cognitive heterogeneity in euthymic phase BP patients is affected by several factors inherent to bipolar disorder complexity that should be considered in social cognition study. © The Author(s) 2016.

  6. Identity recognition in response to different levels of genetic relatedness in commercial soya bean

    Science.gov (United States)

    Van Acker, Rene; Rajcan, Istvan; Swanton, Clarence J.

    2017-01-01

    Identity recognition systems allow plants to tailor competitive phenotypes in response to the genetic relatedness of neighbours. There is limited evidence for the existence of recognition systems in crop species and whether they operate at a level that would allow for identification of different degrees of relatedness. Here, we test the responses of commercial soya bean cultivars to neighbours of varying genetic relatedness consisting of other commercial cultivars (intraspecific), its wild progenitor Glycine soja, and another leguminous species Phaseolus vulgaris (interspecific). We found, for the first time to our knowledge, that a commercial soya bean cultivar, OAC Wallace, showed identity recognition responses to neighbours at different levels of genetic relatedness. OAC Wallace showed no response when grown with other commercial soya bean cultivars (intra-specific neighbours), showed increased allocation to leaves compared with stems with wild soya beans (highly related wild progenitor species), and increased allocation to leaves compared with stems and roots with white beans (interspecific neighbours). Wild soya bean also responded to identity recognition but these responses involved changes in biomass allocation towards stems instead of leaves suggesting that identity recognition responses are species-specific and consistent with the ecology of the species. In conclusion, elucidating identity recognition in crops may provide further knowledge into mechanisms of crop competition and the relationship between crop density and yield. PMID:28280587

  7. Analysis of differences between Western and East-Asian faces based on facial region segmentation and PCA for facial expression recognition

    Science.gov (United States)

    Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide

    2017-01-01

    Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. This paper therefore presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness, and surprise), focused on three individual facial regions: eyes-eyebrows, nose, and mouth. The analysis applies PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, using 125 feature points from the face. Both methods are evaluated on four standard databases for both racial groups, and the results are compared with a cross-cultural human study of 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the eyes-eyebrows region for expressions of fear and in the mouth region for expressions of disgust. This work presents important findings for the better design of automatic facial expression recognition systems that account for the differences between the two racial groups.
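
    As a sketch of the appearance-based variant, the snippet below runs PCA on the pixel intensities of cropped images of a single facial region; image loading and cropping are assumed to happen upstream, and the component count is an arbitrary choice, not the paper's setting.

```python
# PCA over pixel intensities of one cropped facial region.
import numpy as np
from sklearn.decomposition import PCA

def region_pca(region_images, n_components=20):
    # region_images: (n_images, height, width) grayscale crops of one
    # region (e.g., the mouth), assumed aligned and equally sized.
    X = region_images.reshape(len(region_images), -1).astype(float)
    pca = PCA(n_components=n_components).fit(X)
    return pca.transform(X), pca   # per-image features + fitted model
```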

  8. Facial affect recognition in body dysmorphic disorder versus obsessive-compulsive disorder: An eye-tracking study.

    Science.gov (United States)

    Toh, Wei Lin; Castle, David J; Rossell, Susan L

    2015-10-01

    Body dysmorphic disorder (BDD) is characterised by repetitive behaviours and/or mental acts occurring in response to preoccupations with perceived defects or flaws in physical appearance (American Psychiatric Association, 2013). This study aimed to investigate facial affect recognition in BDD using an integrated eye-tracking paradigm. Participants were 21 BDD patients, 19 obsessive-compulsive disorder (OCD) patients and 21 healthy controls (HC), who were age-, sex-, and IQ-matched. Stimuli were from the Pictures of Facial Affect (Ekman & Friesen, 1975), and outcome measures were affect recognition accuracy as well as spatial and temporal scanpath parameters. Relative to the OCD and HC groups, BDD patients demonstrated significantly poorer facial affect perception and a bias toward perceiving anger. They also showed an atypical scanning strategy, with significantly more blinks, fewer fixations of longer mean duration, higher mean saccade amplitudes, and less visual attention devoted to salient facial features. Patients with BDD were substantially impaired in the scanning of faces and unable to extract affect-related information, likely indicating deficits in basic perceptual operations. Copyright © 2015 Elsevier Ltd. All rights reserved.
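
    The spatial and temporal scanpath parameters reported here can be illustrated with a minimal sketch: given a hypothetical list of fixations as (x, y, duration) tuples exported by an eye tracker, compute fixation count, mean fixation duration, and mean saccade amplitude (approximated as the distance between successive fixations).

```python
# Minimal scanpath-parameter sketch over a hypothetical fixation list.
import numpy as np

def scanpath_params(fixations):
    fix = np.asarray(fixations, dtype=float)   # (n, 3): x, y, duration
    n_fixations = len(fix)
    mean_duration = fix[:, 2].mean()
    # Saccade amplitude approximated as the Euclidean distance between
    # successive fixation positions.
    amplitudes = np.linalg.norm(np.diff(fix[:, :2], axis=0), axis=1)
    return n_fixations, mean_duration, amplitudes.mean()
```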

  9. Psychopathy and facial emotion recognition ability in patients with bipolar affective disorder with or without delinquent behaviors.

    Science.gov (United States)

    Demirel, Husrev; Yesilbas, Dilek; Ozver, Ismail; Yuksek, Erhan; Sahin, Feyzi; Aliustaoglu, Suheyla; Emul, Murat

    2014-04-01

    It is well known that patients with bipolar disorder are more prone to violence and commit more criminal acts than the general population. A strong relationship between criminal behavior and the inability to empathize with and perceive other people's feelings and facial expressions increases the risk of delinquent behaviors. In this study, we aimed to investigate deficits in facial emotion recognition ability in euthymic bipolar patients who had committed an offense and to compare them with non-delinquent euthymic patients with bipolar disorder. Fifty-five euthymic patients with delinquent behaviors and 54 non-delinquent euthymic bipolar patients as a control group were included in the study. Ekman's Facial Emotion Recognition Test, sociodemographic data, the Hare Psychopathy Checklist, the Hamilton Depression Rating Scale, and the Young Mania Rating Scale were applied to both groups. There were no significant differences between the case and control groups in terms of average age, gender, level of education, mean age at disease onset, and suicide attempts (p>0.05). The three most commonly committed delinquent behaviors in patients with euthymic bipolar disorder were injury (30.8%), threat or insult (20%), and homicide (12.7%). The most accurately identified facial emotion was "happy" (>99% for both groups), while the most frequently misidentified facial emotion was "fear" in both groups; recognition of fear expressions was significantly worse in the case group than in the control group, and response times for fearful, disgusted, and angry expressions were significantly longer in the case group than in the control group. Thus, delinquent euthymic bipolar patients misidentified fearful and, to a lesser degree, angry facial emotions and needed more time to respond to facial emotions even in remission. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Emotion recognition in pictures of facial affect: Is there a difference between forensic and non-forensic patients with schizophrenia?

    Directory of Open Access Journals (Sweden)

    Wiebke Wolfkühler

    Background and Objectives: Abundant research has demonstrated that patients with schizophrenia have difficulties in recognizing the emotional content of facial expressions. However, there is a paucity of studies on emotion recognition in schizophrenia patients with a history of violent behavior compared to patients without a criminal record. Methods: Emotion recognition skills were examined in thirty-three forensic patients with schizophrenia. In addition, executive function and psychopathology were assessed. Results were compared to a group of 38 schizophrenia patients in regular psychiatric care and to a healthy control group. Results: Both patient groups performed more poorly on almost all tasks compared to controls. However, in the forensic group, recognition of the expression of disgust was preserved. When the excitement factor of the Positive and Negative Syndrome Scale was covaried out, forensic patients outperformed the non-forensic patient group on emotion recognition across modalities. Conclusions: The superior recognition of disgust could be uniquely associated with delinquent behavior.

  11. [Emotional facial recognition difficulties as primary deficit in children with attention deficit hyperactivity disorder: a systematic review].

    Science.gov (United States)

    Rodrigo-Ruiz, D; Perez-Gonzalez, J C; Cejudo, J

    2017-08-16

    It has recently been suggested that children with attention deficit hyperactivity disorder (ADHD) show a deficit in emotional competence and emotional intelligence, specifically in their capacity for emotion recognition. We present a systematic review of the scientific literature on the recognition of emotional facial expressions in children with ADHD, in order to establish or rule out the existence of emotional deficits as a primary dysfunction in this disorder and, where appropriate, the effect size of the differences relative to typically developing children. The results reveal recent interest in the issue and a shortage of information. Although there is no complete agreement, most studies show that the emotional recognition of facial expressions is affected in children with ADHD, who are significantly less accurate than children in control groups at recognizing emotions communicated through facial expressions. Some of these studies compare recognition of different discrete emotions and observe that children with ADHD tend to have greater difficulty recognizing negative emotions, especially anger, fear, and disgust. These results have direct implications for the educational and clinical diagnosis of ADHD and for educational intervention; for children with ADHD, emotional education might provide a valuable aid.

  12. Voice identity recognition: functional division of the right STS and its behavioral relevance.

    Science.gov (United States)

    Schall, Sonja; Kiebel, Stefan J; Maess, Burkhard; von Kriegstein, Katharina

    2015-02-01

    The human voice is the primary carrier of speech but also a fingerprint for person identity. Previous neuroimaging studies have revealed that speech and identity recognition is accomplished by partially different neural pathways, despite the perceptual unity of the vocal sound. Importantly, the right STS has been implicated in voice processing, with different contributions of its posterior and anterior parts. However, the time point at which vocal and speech processing diverge is currently unknown. Also, the exact role of the right STS during voice processing is so far unclear because its behavioral relevance has not yet been established. Here, we used the high temporal resolution of magnetoencephalography and a speech task control to pinpoint transient behavioral correlates: we found, at 200 msec after stimulus onset, that activity in right anterior STS predicted behavioral voice recognition performance. At the same time point, the posterior right STS showed increased activity during voice identity recognition in contrast to speech recognition whereas the left mid STS showed the reverse pattern. In contrast to the highly speech-sensitive left STS, the current results highlight the right STS as a key area for voice identity recognition and show that its anatomical-functional division emerges around 200 msec after stimulus onset. We suggest that this time point marks the speech-independent processing of vocal sounds in the posterior STS and their successful mapping to vocal identities in the anterior STS.

  13. Fighting identity theft with advances in fingerprint recognition

    CSIR Research Space (South Africa)

    Mathekga, D

    2015-10-01

    The ease with which the green South African ID book could be forged has led to many instances of identity fraud, costing retail businesses millions in lost revenue on fraudulently created credit accounts. This has led the government, through...

  14. Social and attention-to-detail subclusters of autistic traits differentially predict looking at eyes and face identity recognition ability.

    Science.gov (United States)

    Davis, Joshua; McKone, Elinor; Zirnsak, Marc; Moore, Tirin; O'Kearney, Richard; Apthorp, Deborah; Palermo, Romina

    2017-02-01

    This study distinguished between different subclusters of autistic traits in the general population and examined the relationships between these subclusters, looking at the eyes of faces, and the ability to recognize facial identity. Using the Autism Spectrum Quotient (AQ) measure in a university-recruited sample, we separate the social aspects of autistic traits (i.e., those related to communication and social interaction; AQ-Social) from the non-social aspects, particularly attention-to-detail (AQ-Attention). We provide the first evidence that these social and non-social aspects are associated differentially with looking at eyes: While AQ-Social showed the commonly assumed tendency towards reduced looking at eyes, AQ-Attention was associated with increased looking at eyes. We also report that higher attention-to-detail (AQ-Attention) was then indirectly related to improved face recognition, mediated by increased number of fixations to the eyes during face learning. Higher levels of socially relevant autistic traits (AQ-Social) trended in the opposite direction towards being related to poorer face recognition (significantly so in females on the Cambridge Face Memory Test). There was no evidence of any mediated relationship between AQ-Social and face recognition via reduced looking at the eyes. These different effects of AQ-Attention and AQ-Social suggest face-processing studies in Autism Spectrum Disorder might similarly benefit from considering symptom subclusters. Additionally, concerning mechanisms of face recognition, our results support the view that more looking at eyes predicts better face memory.
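
    The mediated relationship described here follows the standard product-of-coefficients logic. The sketch below illustrates it on simulated data; the path coefficients and variable names are invented, and a real analysis would bootstrap the indirect effect rather than report a point estimate alone.

```python
# Product-of-coefficients mediation sketch (simulated):
# AQ-Attention -> fixations to eyes -> face recognition.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 200
aq_attention = rng.normal(size=n)
eye_fixations = 0.4 * aq_attention + rng.normal(size=n)   # path a
face_memory = 0.5 * eye_fixations + rng.normal(size=n)    # path b

a = LinearRegression().fit(aq_attention[:, None], eye_fixations).coef_[0]
# Path b: effect of fixations on memory, controlling for AQ-Attention.
b = LinearRegression().fit(
    np.column_stack([eye_fixations, aq_attention]), face_memory).coef_[0]
print(f"indirect effect a*b = {a * b:.3f}")
```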

  15. Callous-unemotional traits and empathy deficits: Mediating effects of affective perspective-taking and facial emotion recognition.

    Science.gov (United States)

    Lui, Joyce H L; Barry, Christopher T; Sacco, Donald F

    2016-09-01

    Although empathy deficits are thought to be associated with callous-unemotional (CU) traits, findings remain equivocal and little is known about what specific abilities may underlie these purported deficits. Affective perspective-taking (APT) and facial emotion recognition may be implicated, given their independent associations with both empathy and CU traits. The current study examined how CU traits relate to cognitive and affective empathy and whether APT and facial emotion recognition mediate these relations. Participants were 103 adolescents (70 males) aged 16-18 attending a residential programme. CU traits were negatively associated with cognitive and affective empathy to a similar degree. The association between CU traits and affective empathy was partially mediated by APT. Results suggest that assessing mechanisms that may underlie empathic deficits, such as perspective-taking, may be important for youth with CU traits and may inform targets of intervention.

  16. Wanting it Too Much: An Inverse Relation Between Social Motivation and Facial Emotion Recognition in Autism Spectrum Disorder

    OpenAIRE

    Garman, Heather D.; Spaulding, Christine J.; Webb, Sara Jane; Mikami, Amori Yee; Morris, James P.; Lerner, Matthew D

    2016-01-01

    This study examined social motivation and early-stage face perception as frameworks for understanding impairments in facial emotion recognition (FER) in a well-characterized sample of youth with autism spectrum disorders (ASD). Early-stage face perception (N170 event-related potential latency) was recorded while participants completed a standardized FER task, while social motivation was obtained via parent report. Participants with greater social motivation exhibited poorer FER, while those w...

  17. Precentral and inferior prefrontal hypoactivation during facial emotion recognition in patients with schizophrenia: A functional near-infrared spectroscopy study.

    Science.gov (United States)

    Watanuki, Toshio; Matsuo, Koji; Egashira, Kazuteru; Nakashima, Mami; Harada, Kenichiro; Nakano, Masayuki; Matsubara, Toshio; Takahashi, Kanji; Watanabe, Yoshifumi

    2016-01-01

    Although patients with schizophrenia demonstrate abnormal processing of emotional face recognition, the neural substrates underlying this process remain unclear. We previously showed abnormal fronto-temporal function during facial expression of emotions, and cognitive inhibition in patients with schizophrenia using functional near-infrared spectroscopy (fNIRS). The aim of the current study was to use fNIRS to identify which brain regions involved in recognizing emotional faces are impaired in patients with schizophrenia, and to determine the neural substrates underlying the response to emotional facial expressions per se, and to facial expressions with cognitive inhibition. We recruited 19 patients with schizophrenia and 19 healthy controls, statistically matched on age, sex, and premorbid IQ. Brain function was measured by fNIRS during emotional face assessment and face identification tasks. Patients with schizophrenia showed lower activation of the right precentral and inferior frontal areas during the emotional face task compared to controls. Further, patients with schizophrenia were slower and less accurate in completing tasks compared to healthy participants. Decreasing performance was associated with increasing severity of the disease. Our present and prior studies suggest that the impaired behavioral performance in schizophrenia is associated with different mechanisms for processing emotional facial expressions versus facial expressions combined with cognitive inhibition.

  18. Survey of spontaneous facial expression recognition

    Institute of Scientific and Technical Information of China (English)

    何俊; 何忠文; 蔡建峰; 房灵芝

    2016-01-01

    This paper reviews the current state and level of development of spontaneous facial expression recognition, describes the content and methods of research in this area in detail, and surveys its key technologies. The aim is to draw researchers' attention and interest to this emerging direction, to encourage active participation in the study of spontaneous facial expression recognition, and to advance progress on related problems.

  19. Facial emotion recognition in alcohol and substance use disorders: A meta-analysis.

    Science.gov (United States)

    Castellano, Filippo; Bartoli, Francesco; Crocamo, Cristina; Gamba, Giulia; Tremolada, Martina; Santambrogio, Jacopo; Clerici, Massimo; Carrà, Giuseppe

    2015-12-01

    People with alcohol and substance use disorders (AUDs/SUDs) show worse facial emotion recognition (FER) than controls, though magnitude and potential moderators remain unknown. The aim of this meta-analysis was to estimate the association between AUDs, SUDs and FER impairment. Electronic databases were searched through April 2015. Pooled analyses were based on standardized mean differences between index and control groups with 95% confidence intervals, weighting each study with random effects inverse variance models. Risk of publication bias and role of potential moderators, including task type, were explored. Nineteen of 70 studies assessed for eligibility met the inclusion criteria, comprising 1352 individuals, of whom 714 (53%) had AUDs or SUDs. The association between substance related disorders and FER performance showed an effect size of -0.67 (-0.95, -0.39), and -0.65 (-0.93, -0.37) for AUDs and SUDs, respectively. There was no publication bias and subgroup and sensitivity analyses based on potential moderators confirmed core results. Future longitudinal research should confirm these findings, clarifying the role of specific clinical issues of AUDs and SUDs.
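
    The pooling procedure described, inverse-variance weighting with a random-effects model, can be sketched with the standard DerSimonian-Laird estimator; the effect sizes and variances below are illustrative, not the included studies' data.

```python
# DerSimonian-Laird random-effects pooling of standardized mean
# differences with inverse-variance weights.
import numpy as np

def random_effects_pool(effects, variances):
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                          # fixed-effect weights
    fixed = (w * effects).sum() / w.sum()
    q = (w * (effects - fixed) ** 2).sum()       # heterogeneity Q
    df = len(effects) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_star = 1.0 / (variances + tau2)
    pooled = (w_star * effects).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Illustrative inputs only:
print(random_effects_pool([-0.7, -0.5, -0.8], [0.04, 0.06, 0.05]))
```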

  20. Facial recognition trial: biometric identification of non-compliant subjects using CCTV

    Science.gov (United States)

    Best, Tim

    2007-10-01

    LogicaCMG were provided with an opportunity to deploy a facial recognition system in a realistic scenario. Twelve cameras were installed at an international airport, covering all entrances to the immigration hall. The evaluation took place over several months, with numerous adjustments to both the hardware (cameras, servers, and capture cards) and the software. The learning curve was very steep, but a stage has now been reached where both LogicaCMG and the client are confident that, subject to the right environmental conditions (lighting and camera location), an effective system can be defined with a high probability of successful detection of the target individual and minimal false alarms. To the best of our knowledge, a >90% detection rate of non-compliant subjects 'at range' has not been achieved anywhere else. This puts this location at the forefront of capability in this area. The results achieved demonstrate that, given optimized conditions, it is possible to achieve long-range biometric identification of a non-compliant subject with a high rate of success.

  1. Using sensors and facial expression recognition to personalize emotion learning for autistic children.

    Science.gov (United States)

    Gay, Valerie; Leijdekkers, Peter; Wong, Frederick

    2013-01-01

    This paper describes CaptureMyEmotion, an app for smartphones and tablets which uses wireless sensors to capture physiological data together with facial expression recognition to provide a very personalized way to help autistic children identify and understand their emotions. Many apps are targeting autistic children and their carer, but none of the existing apps uses the full potential offered by mobile technology and sensors to overcome one of autistic children's main difficulty: the identification and expression of emotions. CaptureMyEmotion enables autistic children to capture photos, videos or sounds, and identify the emotion they felt while taking the picture. Simultaneously, a self-portrait of the child is taken, and the app measures the arousal and stress levels using wireless sensors. The app uses the self-portrait to provide a better estimate of the emotion felt by the child. The app has the potential to help autistic children understand their emotions and it gives the carer insight into the child's emotions and offers a means to discuss the child's feelings.

  2. The Recognition of Identical Ligands by Unrelated Proteins.

    Science.gov (United States)

    Barelier, Sarah; Sterling, Teague; O'Meara, Matthew J; Shoichet, Brian K

    2015-12-18

    The binding of drugs and reagents to off-targets is well-known. Whereas many off-targets are related to the primary target by sequence and fold, many ligands bind to unrelated pairs of proteins, and these are harder to anticipate. If the binding site in the off-target can be related to that of the primary target, this challenge resolves into aligning the two pockets. However, other cases are possible: the ligand might interact with entirely different residues and environments in the off-target, or wholly different ligand atoms may be implicated in the two complexes. To investigate these scenarios at atomic resolution, the structures of 59 ligands in 116 complexes (62 pairs in total), where the protein pairs were unrelated by fold but bound an identical ligand, were examined. In almost half of the pairs, the ligand interacted with unrelated residues in the two proteins (29 pairs), and in 14 of the pairs wholly different ligand moieties were implicated in each complex. Even in those 19 pairs of complexes that presented similar environments to the ligand, ligand superposition rarely resulted in the overlap of related residues. There appears to be no single pattern-matching "code" for identifying binding sites in unrelated proteins that bind identical ligands, though modeling suggests that there might be a limited number of different patterns that suffice to recognize different ligand functional groups.

  3. Associations between facial emotion recognition, cognition and alexithymia in patients with schizophrenia: comparison of photographic and virtual reality presentations.

    Science.gov (United States)

    Gutiérrez-Maldonado, J; Rus-Calafell, M; Márquez-Rejón, S; Ribas-Sabaté, J

    2012-01-01

    Emotion recognition is known to be impaired in schizophrenia patients. Although cognitive deficits and symptomatology have been associated with this impairment there are other patient characteristics, such as alexithymia, which have not been widely explored. Emotion recognition is normally assessed by means of photographs, although they do not reproduce the dynamism of human expressions. Our group has designed and validated a virtual reality (VR) task to assess and subsequently train schizophrenia patients. The present study uses this VR task to evaluate the impaired recognition of facial affect in patients with schizophrenia and to examine its association with cognitive deficit and the patients' inability to express feelings. Thirty clinically stabilized outpatients with a well-established diagnosis of schizophrenia or schizoaffective disorder were assessed in neuropsychological, symptomatic and affective domains. They then performed the facial emotion recognition task. Statistical analyses revealed no significant differences between the two presentation conditions (photographs and VR) in terms of overall errors made. However, anger and fear were easier to recognize in VR than in photographs. Moreover, strong correlations were found between psychopathology and the errors made.

  4. Emotion recognition from facial expressions: a normative study of the Ekman 60-Faces Test in the Italian population.

    Science.gov (United States)

    Dodich, Alessandra; Cerami, Chiara; Canessa, Nicola; Crespi, Chiara; Marcone, Alessandra; Arpone, Marta; Realmuto, Sabrina; Cappa, Stefano F

    2014-07-01

    The Ekman 60-Faces (EK-60F) Test is a well-known neuropsychological tool assessing emotion recognition from facial expressions. It is the most employed task for research purposes in psychiatric and neurological disorders, including neurodegenerative diseases, such as the behavioral variant of Frontotemporal Dementia (bvFTD). Despite its remarkable usefulness in the social cognition research field, to date, there are still no normative data for the Italian population, thus limiting its application in a clinical context. In this study, we report procedures and normative data for the Italian version of the test. A hundred and thirty-two healthy Italian participants aged between 20 and 79 years with at least 5 years of education were recruited on a voluntary basis. They were administered the EK-60F Test from the Ekman and Friesen series of Pictures of Facial Affect after a preliminary semantic recognition test of the six basic emotions (i.e., anger, fear, sadness, happiness, disgust, surprise). Data were analyzed according to the Capitani procedure [1]. The regression analysis revealed significant effects of demographic variables, with younger, more educated, female subjects showing higher scores. Normative data were then applied to a sample of 15 bvFTD patients which showed global impaired performance in the task, consistently with the clinical condition. We provided EK-60F Test normative data for the Italian population allowing the investigation of global emotion recognition ability as well as selective impairment of basic emotions recognition, both for clinical and research purposes.

  5. Temporal voice areas exist in autism spectrum disorder but are dysfunctional for voice identity recognition

    Science.gov (United States)

    Borowiak, Kamila; von Kriegstein, Katharina

    2016-01-01

    The ability to recognise the identity of others is a key requirement for successful communication. Brain regions that respond selectively to voices exist in humans from early infancy on. Currently, it is unclear whether dysfunction of these voice-sensitive regions can explain voice identity recognition impairments. Here, we used two independent functional magnetic resonance imaging studies to investigate voice processing in a population that has been reported to have no voice-sensitive regions: autism spectrum disorder (ASD). Our results refute the earlier report that individuals with ASD have no responses in voice-sensitive regions: Passive listening to vocal, compared to non-vocal, sounds elicited typical responses in voice-sensitive regions in the high-functioning ASD group and controls. In contrast, the ASD group had a dysfunction in voice-sensitive regions during voice identity but not speech recognition in the right posterior superior temporal sulcus/gyrus (STS/STG)—a region implicated in processing complex spectrotemporal voice features and unfamiliar voices. The right anterior STS/STG correlated with voice identity recognition performance in controls but not in the ASD group. The findings suggest that right STS/STG dysfunction is critical for explaining voice recognition impairments in high-functioning ASD and show that ASD is not characterised by a general lack of voice-sensitive responses. PMID:27369067

  6. Individual Differences in the Speed of Facial Emotion Recognition Show Little Specificity but Are Strongly Related with General Mental Speed: Psychometric, Neural and Genetic Evidence.

    Science.gov (United States)

    Liu, Xinyang; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Cai, Xinxia; Wilhelm, Oliver

    2017-01-01

    Facial identity and facial expression processing are crucial socio-emotional abilities but seem to show only limited psychometric uniqueness when the processing speed is considered in easy tasks. We applied a comprehensive measurement of processing speed and contrasted performance specificity in socio-emotional, social and non-social stimuli from an individual differences perspective. Performance in a multivariate task battery could be best modeled by a general speed factor and a first-order factor capturing some specific variance due to processing emotional facial expressions. We further tested equivalence of the relationships between speed factors and polymorphisms of dopamine and serotonin transporter genes. Results show that the speed factors are not only psychometrically equivalent but invariant in their relation with the Catechol-O-Methyl-Transferase (COMT) Val158Met polymorphism. However, the 5-HTTLPR/rs25531 serotonin polymorphism was related with the first-order factor of emotion perception speed, suggesting a specific genetic correlate of processing emotions. We further investigated the relationship between several components of event-related brain potentials with psychometric abilities, and tested emotion specific individual differences at the neurophysiological level. Results revealed swifter emotion perception abilities to go along with larger amplitudes of the P100 and the Early Posterior Negativity (EPN), when emotion processing was modeled on its own. However, after partialling out the shared variance of emotion perception speed with general processing speed-related abilities, brain-behavior relationships did not remain specific for emotion. Together, the present results suggest that speed abilities are strongly interrelated but show some specificity for emotion processing speed at the psychometric level. At both genetic and neurophysiological levels, emotion specificity depended on whether general cognition is taken into account or not. These
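
    As a loose analogue of modeling a general speed factor across tasks (the study fitted latent-variable models, which this sketch does not reproduce), one can fit an exploratory factor analysis to a person-by-task matrix of reaction times; the data below are simulated.

```python
# Exploratory-factor-analysis analogue of a general speed factor
# extracted from several reaction-time tasks (simulated data).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)
n_subjects, n_tasks = 150, 6
general_speed = rng.normal(size=(n_subjects, 1))          # latent factor
loadings = rng.uniform(0.5, 0.9, size=(1, n_tasks))
rt_tasks = general_speed @ loadings + 0.5 * rng.normal(size=(n_subjects,
                                                             n_tasks))

fa = FactorAnalysis(n_components=1, random_state=0).fit(rt_tasks)
factor_scores = fa.transform(rt_tasks)       # per-person general speed
# Recovery check: correlation of estimated scores with the true factor.
print(np.corrcoef(factor_scores[:, 0], general_speed[:, 0])[0, 1])
```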

  9. Camouflaging Facial Emphysema: a new syndrome.

    Science.gov (United States)

    Martínez-Carpio, Pedro A; del Campillo, Ángel F Bedoya; Leal, María Jesús; Lleopart, Núria; Marrón, María T; Trelles, Mario A

    2012-10-10

    Camouflaging Facial Emphysema, as defined in this paper, is the result of a simple technique used by the patient to deform his face in order to prevent recognition at a police identity parade. The patient performs two punctures in the mucosa at the rear of the upper lip and, after several Valsalva manoeuvres, manages to deform his face in less than 15 min by inducing subcutaneous facial emphysema. Examination shows an accumulation of air in the face, with no laterocervical, mediastinal, or thoracic involvement. The swelling is primarily observed in the eyelids and the orbital and zygomatic regions, and is less prominent in other areas of the face. Patients thereby manage to avoid recognition in properly conducted police identity parades. Only isolated cases of self-induced facial emphysema have been reported to date, among psychiatric patients and prison inmates. However, the facial emphysema described here exhibits specific characteristics with significant medical, deontological, social, police-related, and legal implications.

  10. Sibling recognition and the development of identity: intersubjective consequences of sibling differentiation in the sister relationship.

    Science.gov (United States)

    Vivona, Jeanine M

    2013-01-01

    Identity is, among other things, a means to adapt to the others around whom one must fit. Psychoanalytic theory has highlighted ways in which the child fits in by emulating important others, especially through identification. Alternately, the child may fit into the family and around important others through differentiation, an unconscious process that involves developing or accentuating qualities and desires in oneself that are expressly different from the perceived qualities of another person and simultaneously suppressing qualities and desires that are perceived as similar. With two clinical vignettes centered on the sister relationship, the author demonstrates that recognition of identity differences that result from sibling differentiation carries special significance in the sibling relationship and simultaneously poses particular intersubjective challenges. To the extent that the spotlight of sibling recognition delimits the lateral space one may occupy, repeatedly frustrated desires for sibling recognition may have enduring consequences for one's sense of self-worth and expectations of relationships with peers and partners.

  11. Facial emotion recognition in childhood-onset bipolar I disorder: an evaluation of developmental differences between youths and adults

    Science.gov (United States)

    Wegbreit, Ezra; Weissman, Alexandra B; Cushman, Grace K; Puzia, Megan E; Kim, Kerri L; Leibenluft, Ellen; Dickstein, Daniel P

    2015-01-01

    Objectives Bipolar disorder (BD) is a severe mental illness with high healthcare costs and poor outcomes. Increasing numbers of youths are diagnosed with BD, and many adults with BD report their symptoms started in childhood, suggesting BD can be a developmental disorder. Studies advancing our understanding of BD have shown alterations in facial emotion recognition in both children and adults with BD compared to healthy comparison (HC) participants, but none have evaluated the development of these deficits. To address this, we examined the effect of age on facial emotion recognition in a sample that included children and adults with confirmed childhood-onset type-I BD, with the adults having been diagnosed and followed since childhood by the Course and Outcome in Bipolar Youth study. Methods Using the Diagnostic Analysis of Non-Verbal Accuracy, we compared facial emotion recognition errors among participants with BD (n = 66; ages 7–26 years) and HC participants (n = 87; ages 7–25 years). Complementary analyses investigated errors for child and adult faces. Results A significant diagnosis-by-age interaction indicated that younger BD participants performed worse than expected relative to HC participants their own age. The deficits occurred for both child and adult faces and were particularly strong for angry child faces, which were most often mistaken as sad. Our results were not influenced by medications, comorbidities/substance use, or mood state/global functioning. Conclusions Younger individuals with BD are worse than their peers at this important social skill. This deficit may be an important developmentally salient treatment target, i.e., for cognitive remediation to improve BD youths’ emotion recognition abilities. PMID:25951752

  12. Facial emotion recognition in childhood-onset bipolar I disorder: an evaluation of developmental differences between youths and adults.

    Science.gov (United States)

    Wegbreit, Ezra; Weissman, Alexandra B; Cushman, Grace K; Puzia, Megan E; Kim, Kerri L; Leibenluft, Ellen; Dickstein, Daniel P

    2015-08-01

    Bipolar disorder (BD) is a severe mental illness with high healthcare costs and poor outcomes. Increasing numbers of youths are diagnosed with BD, and many adults with BD report that their symptoms started in childhood, suggesting that BD can be a developmental disorder. Studies advancing our understanding of BD have shown alterations in facial emotion recognition both in children and adults with BD compared to healthy comparison (HC) participants, but none have evaluated the development of these deficits. To address this, we examined the effect of age on facial emotion recognition in a sample that included children and adults with confirmed childhood-onset type-I BD, with the adults having been diagnosed and followed since childhood by the Course and Outcome in Bipolar Youth study. Using the Diagnostic Analysis of Non-Verbal Accuracy, we compared facial emotion recognition errors among participants with BD (n = 66; ages 7-26 years) and HC participants (n = 87; ages 7-25 years). Complementary analyses investigated errors for child and adult faces. A significant diagnosis-by-age interaction indicated that younger BD participants performed worse than expected relative to HC participants their own age. The deficits occurred both for child and adult faces and were particularly strong for angry child faces, which were most often mistaken as sad. Our results were not influenced by medications, comorbidities/substance use, or mood state/global functioning. Younger individuals with BD are worse than their peers at this important social skill. This deficit may be an important developmentally salient treatment target - that is, for cognitive remediation to improve BD youths' emotion recognition abilities. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    Science.gov (United States)

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  14. From Facial Emotional Recognition Abilities to Emotional Attribution: A Study in Down Syndrome

    Science.gov (United States)

    Hippolyte, Loyse; Barisnikov, Koviljka; Van der Linden, Martial; Detraux, Jean-Jacques

    2009-01-01

    Facial expression processing and the attribution of facial emotions to a context were investigated in adults with Down syndrome (DS) in two experiments. Their performances were compared with those of a child control group matched for receptive vocabulary. The ability to process faces without emotional content was controlled for, and no differences…

  15. Recognition, Expression, and Understanding Facial Expressions of Emotion in Adolescents with Nonverbal and General Learning Disabilities

    Science.gov (United States)

    Bloom, Elana; Heath, Nancy

    2010-01-01

    Children with nonverbal learning disabilities (NVLD) have been found to be worse at recognizing facial expressions than children with verbal learning disabilities (LD) and without LD. However, little research has been done with adolescents. In addition, expressing and understanding facial expressions is yet to be studied among adolescents with LD…

  16. A New Technology: 3D Facial Recognition

    Institute of Scientific and Technical Information of China (English)

    王玥; 李丽娜

    2014-01-01

    3D face recognition is a reliable technology in the field of facial recognition and has been widely deployed in security-sensitive settings in China and abroad. This paper describes the development of 3D facial recognition, its technical characteristics, difficulties, and application hotspots, and concludes with an outlook on the future development of 3D facial recognition.

  17. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind.

    Science.gov (United States)

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J; Sadato, Norihiro

    2013-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  18. Early visual experience and the recognition of basic facial expressions: Involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind

    Directory of Open Access Journals (Sweden)

    Ryo eKitada

    2013-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus and posterior superior temporal sulcus in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early-blind individuals. In a psychophysical experiment, both early-blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  1. Italian normative data and validation of two neuropsychological tests of face recognition: Benton Facial Recognition Test and Cambridge Face Memory Test.

    Science.gov (United States)

    Albonico, Andrea; Malaspina, Manuela; Daini, Roberta

    2017-06-21

    The Benton Facial Recognition Test (BFRT) and Cambridge Face Memory Test (CFMT) are two of the most common tests used to assess face discrimination and recognition abilities and to identify individuals with prosopagnosia. However, recent studies highlighted that the participant-stimulus ethnicity match, as well as gender, has to be taken into account in interpreting results from these tests. Here, in order to obtain more appropriate normative data for an Italian sample, the CFMT and BFRT were administered to a large cohort of young adults. We found that scores from the BFRT are not affected by participants' gender and are only slightly affected by the participant-stimulus ethnicity match, whereas both these factors seem to influence the scores of the CFMT. Moreover, the inclusion of a sample of individuals with suspected face recognition impairment allowed us to show that the use of more appropriate normative data can increase the efficacy of the BFRT in identifying individuals with face discrimination impairments; by contrast, the efficacy of the CFMT in classifying individuals with a face recognition deficit was confirmed. Finally, our data show that the lack of an inversion effect (the difference between the total scores of the upright and inverted versions of the CFMT) could be used as a further index to assess congenital prosopagnosia. Overall, our results confirm the importance of having norms derived from controls with a similar experience of faces as the "potential" prosopagnosic individuals when assessing face recognition abilities.
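
    A small worked example of that inversion-effect index may help: it is the upright total minus the inverted total, which can then be z-scored against control norms. The snippet below is a minimal sketch; the norm mean and SD are placeholder values, not the published Italian norms.

    ```python
    # Inversion-effect index for the CFMT: upright minus inverted total score,
    # z-scored against control norms. norm_mean/norm_sd are placeholders.
    def inversion_effect_z(upright, inverted, norm_mean=15.0, norm_sd=6.0):
        effect = upright - inverted            # raw inversion effect in points
        return (effect - norm_mean) / norm_sd  # strongly negative z = no effect

    # Example: near-identical upright and inverted scores yield a low z,
    # the pattern suggested above as a marker of congenital prosopagnosia.
    print(inversion_effect_z(upright=45, inverted=44))  # -> about -2.33
    ```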

  2. Empathy and recognition of facial expressions of emotion in sex offenders, non-sex offenders and normal controls.

    Science.gov (United States)

    Gery, Isabelle; Miljkovitch, Raphaële; Berthoz, Sylvie; Soussignan, Robert

    2009-02-28

    Research conducted on empathy and emotional recognition in sex offenders is contradictory. The present study aimed to clarify this issue by controlling for affective and social variables (depression, anxiety, and social desirability) presumed to influence emotional and empathic measures, using a staged multicomponent model of empathy. Incarcerated sex offenders (child molesters), incarcerated non-sex offenders, and non-offender controls (matched for age, gender, and education level) performed a recognition task of facial expressions of basic emotions that varied in intensity, and completed various self-rating scales designed to assess distinct components of empathy (perspective taking, affective empathy, empathic concern, and personal distress), as well as depression, anxiety, and social desirability. Sex offenders were less accurate than the other participants in recognizing facial expressions of anger, disgust, surprise and fear, tending to confuse fear with surprise and disgust with anger. Affective empathy was the only component that discriminated sex offenders from non-sex offenders and was correlated with recognition accuracy for emotional expressions. Although our findings must be replicated with a larger number of participants, they support the view that sex offenders might have impairments in decoding some emotional cues conveyed by conspecifics' faces, which could have an impact on affective empathy.

  3. Effect of facial expressions on student's comprehension recognition in virtual educational environments.

    Science.gov (United States)

    Sathik, Mohamed; Jonathan, Sofia G

    2013-01-01

    The scope of this research is to examine whether the facial expressions of students are a tool the lecturer can use to interpret students' comprehension level in a virtual classroom, and to identify the impact of facial expressions during a lecture and the level of comprehension shown by these expressions. Our goal is to identify physical behaviours of the face that are linked to emotional states, and then to identify how these emotional states are linked to students' comprehension. In this work, the effectiveness of students' facial expressions in non-verbal communication in a virtual pedagogical environment was investigated first. Next, the specific elements of learner behaviour for the different emotional states, and the relevant facial expressions signaled by the action units, were interpreted. Finally, the work focused on finding the impact of the relevant facial expressions on students' comprehension. Experimentation was done through a survey involving quantitative observations of lecturers in the classroom, in which the behaviours of students were recorded and statistically analyzed. The results show that facial expression is the nonverbal communication mode most frequently used by students in the virtual classroom, and that students' facial expressions are significantly correlated with their emotions, which helps in recognizing their comprehension of the lecture.

  4. Recognition of Facial Expression Using Eigenvector Based Distributed Features and Euclidean Distance Based Decision Making Technique

    Directory of Open Access Journals (Sweden)

    Jeemoni Kalita

    2013-03-01

    In this paper, an eigenvector-based system is presented to recognize facial expressions from digital facial images. In the approach, the images were first acquired, and five significant portions were cropped from each image to extract and store the eigenvectors specific to the expressions. The eigenvectors for the test images were also computed, and the input facial image was finally recognized by calculating the minimum Euclidean distance between the test image and the different expressions.
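
    To make the decision rule concrete, here is a minimal sketch of an eigenvector-based expression classifier with a minimum-Euclidean-distance rule, assuming equal-sized grayscale face crops as numpy arrays. The cropping of five facial portions is omitted and the number of eigenvectors k is arbitrary; this illustrates the general technique rather than the authors' exact implementation.

    ```python
    import numpy as np

    def fit_eigenbasis(train_imgs, k=20):
        """Mean face plus top-k eigenvectors of the flattened training crops."""
        X = train_imgs.reshape(len(train_imgs), -1).astype(float)
        mean = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, Vt[:k]

    def features(img, mean, basis):
        # Project a (centered) image onto the eigenvector basis
        return basis @ (img.ravel().astype(float) - mean)

    def fit_class_means(train_imgs, labels, mean, basis):
        feats = np.stack([features(im, mean, basis) for im in train_imgs])
        labels = np.asarray(labels)
        return {lab: feats[labels == lab].mean(axis=0) for lab in set(labels)}

    def predict(img, mean, basis, class_means):
        # Assign the expression whose mean feature vector is nearest (Euclidean)
        f = features(img, mean, basis)
        return min(class_means, key=lambda lab: np.linalg.norm(f - class_means[lab]))
    ```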

  5. Methods and outlook for facial expression recognition

    Institute of Scientific and Technical Information of China (English)

    李菊霞

    2009-01-01

    Facial expression recognition (FER) is an important component of intelligent human-computer interaction technology, with broad application prospects and potential market value; it has attracted wide attention in recent years, and many new methods have emerged. This paper reviews recent progress in FER research in China and abroad, and offers an outlook on future directions for facial expression recognition.

  6. Recognition of facial expressions of different emotional intensities in patients with frontotemporal lobar degeneration

    NARCIS (Netherlands)

    Kessels, Roy P. C.; Gerritsen, Lotte; Montagne, Barbara; Ackl, Nibal; Diehl, Janine; Danek, Adrian

    2007-01-01

    Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD). Also, FTLD patients show impairments in emotion processing. Specifically, the perception of negative emotional facial expressions is affected. Generally, however, negative emotional expressions are regarded as more d…

  7. Verbal bias in recognition of facial emotions in children with Asperger syndrome.

    Science.gov (United States)

    Grossman, J B; Klin, A; Carter, A S; Volkmar, F R

    2000-03-01

    Thirteen children and adolescents with diagnoses of Asperger syndrome (AS) were matched with 13 nonautistic control children on chronological age and verbal IQ. They were tested on their ability to recognize simple facial emotions, as well as facial emotions paired with matching, mismatching, or irrelevant verbal labels. There were no differences between the groups at recognizing simple emotions but the Asperger group performed significantly worse than the control group at recognizing emotions when faces were paired with mismatching words (but not with matching or irrelevant words). The results suggest that there are qualitative differences from nonclinical populations in how children with AS process facial expressions. When presented with a more demanding affective processing task, individuals with AS showed a bias towards visual-verbal over visual-affective information (i.e., words over faces). Thus, children with AS may be utilizing compensatory strategies, such as verbal mediation, to process facial expressions of emotion.

  8. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions

    OpenAIRE

    2014-01-01

    Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflec...

  9. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions

    Directory of Open Access Journals (Sweden)

    Sabrina eBoll

    2014-07-01

    Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflects individual differences in attending to diagnostic features of facial expressions. For this purpose, visual exploration of emotional facial expressions was compared between a low (n=39) and a high (n=40) 5-HTT expressing group of healthy human volunteers in an eye tracking paradigm. Emotional faces were presented while manipulating the initial fixation such that saccadic changes towards the eyes and towards the mouth could be identified. We found that the low versus the high 5-HTT group demonstrated greater accuracy with regard to emotion classifications, particularly when faces were presented for a longer duration. No group differences in gaze orientation towards diagnostic facial features could be observed. However, participants in the low 5-HTT group exhibited more and faster fixation changes for certain emotions when faces were presented for a longer duration, and overall face fixation times were reduced for this genotype group. These results suggest that the 5-HTT gene influences social perception by modulating the general vigilance to social cues rather than selectively affecting the pre-attentive detection of diagnostic facial features.

  10. Saturation of recognition elements blocks evolution of new tRNA identities.

    Science.gov (United States)

    Saint-Léger, Adélaïde; Bello, Carla; Dans, Pablo D; Torres, Adrian Gabriel; Novoa, Eva Maria; Camacho, Noelia; Orozco, Modesto; Kondrashov, Fyodor A; Ribas de Pouplana, Lluís

    2016-04-01

    Understanding the principles that led to the current complexity of the genetic code is a central question in evolution. Expansion of the genetic code required the selection of new transfer RNAs (tRNAs) with specific recognition signals that allowed them to be matured, modified, aminoacylated, and processed by the ribosome without compromising the fidelity or efficiency of protein synthesis. We show that saturation of recognition signals blocks the emergence of new tRNA identities and that the rate of nucleotide substitutions in tRNAs is higher in species with fewer tRNA genes. We propose that the growth of the genetic code stalled because a limit was reached in the number of identity elements that can be effectively used in the tRNA structure.

  11. Facial Expression Recognition Based on RGB-D

    Institute of Scientific and Technical Information of China (English)

    吴会霞; 陶青川; 龚雪友

    2016-01-01

    To address the low recognition accuracy of two-dimensional facial expression recognition under complex or poor lighting conditions, a facial expression recognition algorithm based on RGB-D data and a fusion of multiple classifiers is proposed. The algorithm first extracts LPQ, Gabor, LBP, and HOG features separately from the colour information (Y, Cr, Q) and the depth information (D) of the image, and applies linear dimensionality reduction (PCA) and a feature-space transformation (LDA) to the resulting high-dimensional features. Nearest-neighbour classification then yields a weak classifier for each expression, the weak classifiers are weighted with the AdaBoost algorithm to generate a strong classifier, and finally the multiple classifiers are fused with a Bayes rule, with the average recognition rate reported as output. On the CurtinFaces and KinectFaceDB facial expression databases, which contain complex lighting variation, the algorithm achieves an average recognition rate of up to 98.80%. The experimental results show that, compared with expression recognition algorithms that use colour images alone, fusing depth information improves the facial expression recognition rate markedly and has practical application value.
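
    As a rough illustration of that fusion pipeline, the sketch below trains one PCA + LDA + nearest-neighbour classifier per feature channel and fuses the per-channel posteriors with a product (naive-Bayes) rule. It is a simplification under stated assumptions: the LPQ/Gabor/LBP/HOG extraction and the AdaBoost weighting stage are elided, the channel features are assumed to arrive as ready-made arrays, and make_channel_clf, fit_fusion, and predict_fusion are hypothetical names rather than the authors' code.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    def make_channel_clf(n_pca=50):
        # One classifier per feature channel: PCA -> LDA -> k-nearest-neighbour
        return make_pipeline(PCA(n_components=n_pca),
                             LinearDiscriminantAnalysis(),
                             KNeighborsClassifier(n_neighbors=5))

    def fit_fusion(channel_feats, y):
        """channel_feats: dict mapping a channel name (e.g. 'lbp_depth') to an
        (n_samples, n_dims) feature matrix over the same training samples."""
        return {name: make_channel_clf().fit(X, y)
                for name, X in channel_feats.items()}

    def predict_fusion(clfs, channel_feats):
        # Product rule: sum the per-channel log-posteriors, then take the argmax
        log_post = None
        for name, clf in clfs.items():
            p = np.clip(clf.predict_proba(channel_feats[name]), 1e-9, 1.0)
            log_post = np.log(p) if log_post is None else log_post + np.log(p)
        classes = next(iter(clfs.values())).classes_
        return classes[np.argmax(log_post, axis=1)]
    ```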

  12. Extremely Preterm-Born Infants Demonstrate Different Facial Recognition Processes at 6-10 Months of Corrected Age.

    Science.gov (United States)

    Frie, Jakob; Padilla, Nelly; Ådén, Ulrika; Lagercrantz, Hugo; Bartocci, Marco

    2016-05-01

    To compare cortical hemodynamic responses to known and unknown facial stimuli between infants born extremely preterm and term-born infants, and to correlate the responses of the extremely preterm-born infants to regional cortical volumes at term-equivalent age. We compared 27 infants born extremely preterm (…) with term-born infants using near-infrared spectroscopy. In the preterm group, we also performed structural brain magnetic resonance imaging and correlated regional cortical volumes to hemodynamic responses. The preterm-born infants demonstrated different cortical face recognition processes than the term-born infants. They had a significantly smaller hemodynamic response in the right frontotemporal areas while watching their mother's face (0.13 μmol/L vs 0.63 μmol/L; P …), … a different face recognition process compared with term-born infants. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Quality of life differences in patients with right- versus left-sided facial paralysis: Universal preference of right-sided human face recognition.

    Science.gov (United States)

    Ryu, Nam Gyu; Lim, Byung Woo; Cho, Jae Keun; Kim, Jin

    2016-09-01

    We investigated whether experiencing right- or left-sided facial paralysis affects an individual's ability to recognize one side of the human face, using hybrid hemi-facial photos in a preliminary study. Further investigation looked at the relationship between facial recognition ability, stress, and quality of life. To investigate the predominance of one side of the human face in face recognition, 100 normal participants (right-handed: n = 97, left-handed: n = 3, right brain dominance: n = 56, left brain dominance: n = 44) answered a questionnaire that included hybrid hemi-facial photos developed to determine the superiority of one side for human face recognition. To determine differences in stress level and quality of life between individuals experiencing right- and left-sided facial paralysis, 100 patients (right side: 50, left side: 50, not including traumatic facial nerve paralysis) completed the facial disability index and a quality-of-life questionnaire (SF-36, Korean version). Regardless of handedness or hemispheric dominance, the proportion preferring the right side in human face recognition was larger than for the left side (71% versus 12%; neutral: 17%). The facial disability index of patients with right-sided facial paralysis was lower than that of left-sided patients (68.8 ± 9.42 versus 76.4 ± 8.28), and the SF-36 scores of right-sided patients were lower than those of left-sided patients (119.07 ± 15.24 versus 123.25 ± 16.48; total score: 166). Given the universal preference for the right side in human face recognition, patients with right-sided facial paralysis showed worse psychological mood and social interaction than those with left-sided paralysis. This information is helpful to clinicians in that psychological and social factors should be considered when treating patients with facial paralysis. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  14. Perceived Parenting Mediates Serotonin Transporter Gene (5-HTTLPR) and Neural System Function during Facial Recognition: A Pilot Study.

    Science.gov (United States)

    Nishikawa, Saori; Toshima, Tamotsu; Kobayashi, Masao

    2015-01-01

    This study examined changes in prefrontal oxy-Hb levels measured by NIRS (near-infrared spectroscopy) during a facial-emotion recognition task in healthy adults, testing a mediational/moderational model of these variables. Fifty-three healthy adults (male = 35, female = 18) aged 22 to 37 years (mean age = 24.05 years) provided saliva samples, completed an EMBU questionnaire (Swedish acronym for Egna Minnen Beträffande Uppfostran [My memories of upbringing]), and participated in a facial-emotion recognition task during NIRS recording. There was a main effect of maternal rejection on RoxH (right frontal activation during an ambiguous task), and a gene × environment (G × E) interaction on RoxH, suggesting that individuals who carry the SL or LL genotype and who endorse greater perceived maternal rejection show less right frontal activation than SL/LL carriers with lower perceived maternal rejection. Finally, perceived parenting style played a mediating role in right frontal activation via the 5-HTTLPR genotype. Early-perceived parenting might influence neural activity in an uncertain situation, i.e., rating ambiguous faces, among individuals with certain genotypes. This preliminary study makes a small contribution to mapping the influence of genes and behaviour on the neural system. More such attempts should be made in order to clarify the links.

  15. Perceived Parenting Mediates Serotonin Transporter Gene (5-HTTLPR and Neural System Function during Facial Recognition: A Pilot Study.

    Directory of Open Access Journals (Sweden)

    Saori Nishikawa

    This study examined changes in prefrontal oxy-Hb levels measured by NIRS (near-infrared spectroscopy) during a facial-emotion recognition task in healthy adults, testing a mediational/moderational model of these variables. Fifty-three healthy adults (male = 35, female = 18) aged 22 to 37 years (mean age = 24.05 years) provided saliva samples, completed an EMBU questionnaire (Swedish acronym for Egna Minnen Beträffande Uppfostran [My memories of upbringing]), and participated in a facial-emotion recognition task during NIRS recording. There was a main effect of maternal rejection on RoxH (right frontal activation during an ambiguous task), and a gene × environment (G × E) interaction on RoxH, suggesting that individuals who carry the SL or LL genotype and who endorse greater perceived maternal rejection show less right frontal activation than SL/LL carriers with lower perceived maternal rejection. Finally, perceived parenting style played a mediating role in right frontal activation via the 5-HTTLPR genotype. Early-perceived parenting might influence neural activity in an uncertain situation, i.e., rating ambiguous faces, among individuals with certain genotypes. This preliminary study makes a small contribution to mapping the influence of genes and behaviour on the neural system. More such attempts should be made in order to clarify the links.

  16. Facial Expression Recognition Based on LBP and SVM Decision Tree

    Institute of Scientific and Technical Information of China (English)

    李扬; 郭海礁

    2014-01-01

    To improve the recognition rate of facial expression recognition, a facial expression recognition algorithm combining LBP and an SVM decision tree is proposed. A facial expression image is first converted into an LBP feature map using the LBP operator; the LBP feature map is then converted into an LBP histogram feature sequence; finally, classification and recognition of facial expressions are completed by the SVM decision tree algorithm. Experiments on the JAFFE facial expression database demonstrate the effectiveness of the algorithm.
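
    A minimal sketch of this kind of pipeline follows: uniform LBP histograms as features, fed to a cascade of binary SVMs in which each node peels one expression class off from the rest. "SVM decision tree" admits several constructions, so this one-vs-remaining cascade is only one plausible reading; the class ordering, RBF kernel, and LBP parameters are arbitrary choices, and lbp_histogram and SVMDecisionTree are illustrative names, not the authors' implementation.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.svm import SVC

    def lbp_histogram(img, P=8, R=1):
        # Uniform LBP codes take values 0..P+1, hence P+2 histogram bins
        codes = local_binary_pattern(img, P, R, method="uniform")
        hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
        return hist

    class SVMDecisionTree:
        """Cascade of binary SVMs: each node separates one class from the rest."""
        def fit(self, X, y):
            X, y = np.asarray(X), np.asarray(y)
            remaining = list(dict.fromkeys(y))   # class order = first appearance
            self.nodes, mask = [], np.ones(len(y), bool)
            while len(remaining) > 1:
                c = remaining.pop(0)
                svm = SVC(kernel="rbf").fit(X[mask], (y[mask] == c).astype(int))
                self.nodes.append((c, svm))
                mask &= (y != c)                 # descend without this class
            self.fallback = remaining[0]
            return self

        def predict(self, X):
            out = []
            for x in np.asarray(X):
                label = self.fallback
                for c, svm in self.nodes:
                    if svm.predict(x[None])[0] == 1:
                        label = c
                        break
                out.append(label)
            return np.array(out)
    ```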

  17. The Influence of Music on Facial Emotion Recognition in Children with Autism Spectrum Disorder and Neurotypical Children.

    Science.gov (United States)

    Brown, Laura S

    2017-03-01

    Children with autism spectrum disorder (ASD) often struggle with social skills, including the ability to perceive emotions based on facial expressions. Research evidence suggests that many individuals with ASD can perceive emotion in music. Examining whether music can be used to enhance recognition of facial emotion by children with ASD would inform the development of music therapy interventions. The purpose of this study was to investigate the influence of music with a strong emotional valence (happy; sad) on the ability of children with ASD to label emotions depicted in facial photographs, and on their response time. Thirty neurotypical children and 20 children with high-functioning ASD rated expressions of happy, neutral, and sad in 30 photographs under two music listening conditions (sad music; happy music). During each music listening condition, participants rated the 30 images using a 7-point scale that ranged from very sad to very happy. Response time data were also collected across both conditions. A significant two-way interaction revealed that participants' ratings of happy and neutral faces were unaffected by music conditions, but sad faces were perceived to be sadder with sad music than with happy music. Across both conditions, neurotypical children rated the happy faces as happier and the sad faces as sadder than did participants with ASD. Response times of the neurotypical children were consistently shorter than response times of the children with ASD; both groups took longer to rate sad faces than happy faces. Response times of neurotypical children were generally unaffected by the valence of the music condition; however, children with ASD took longer to respond when listening to sad music. Music appears to affect perceptions of emotion in children with ASD, and perceptions of sad facial expressions seem to be more affected by emotionally congruent background music than are perceptions of happy or neutral faces.

  18. Deficits in facial emotion recognition indicate behavioral changes and impaired self-awareness after moderate to severe traumatic brain injury.

    Directory of Open Access Journals (Sweden)

    Jacoba M Spikman

    Traumatic brain injury (TBI) is a leading cause of disability, specifically among younger adults. Behavioral changes are common after moderate to severe TBI and have adverse consequences for social and vocational functioning. It is hypothesized that deficits in social cognition, including facial affect recognition, might underlie these behavioral changes. Measurement of behavioral deficits is complicated, because the rating scales used rely on subjective judgement, often lack specificity and many patients provide unrealistically positive reports of their functioning due to impaired self-awareness. Accordingly, it is important to find performance based tests that allow objective and early identification of these problems. In the present study 51 moderate to severe TBI patients in the sub-acute and chronic stage were assessed with a test for emotion recognition (FEEST) and a questionnaire for behavioral problems (DEX) with a self and proxy rated version. Patients performed worse on the total score and on the negative emotion subscores of the FEEST than a matched group of 31 healthy controls. Patients also exhibited significantly more behavioral problems on both the DEX self and proxy rated version, but proxy ratings revealed more severe problems. No significant correlation was found between FEEST scores and DEX self ratings. However, impaired emotion recognition in the patients, and in particular of Sadness and Anger, was significantly correlated with behavioral problems as rated by proxies and with impaired self-awareness. This is the first study to find these associations, strengthening the proposed recognition of social signals as a condition for adequate social functioning. Hence, deficits in emotion recognition can be conceived as markers for behavioral problems and lack of insight in TBI patients. This finding is also of clinical importance since, unlike behavioral problems, emotion recognition can be objectively measured early after injury…

  19. Deficits in facial emotion recognition indicate behavioral changes and impaired self-awareness after moderate to severe traumatic brain injury.

    Science.gov (United States)

    Spikman, Jacoba M; Milders, Maarten V; Visser-Keizer, Annemarie C; Westerhof-Evers, Herma J; Herben-Dekker, Meike; van der Naalt, Joukje

    2013-01-01

    Traumatic brain injury (TBI) is a leading cause of disability, specifically among younger adults. Behavioral changes are common after moderate to severe TBI and have adverse consequences for social and vocational functioning. It is hypothesized that deficits in social cognition, including facial affect recognition, might underlie these behavioral changes. Measurement of behavioral deficits is complicated, because the rating scales used rely on subjective judgement, often lack specificity and many patients provide unrealistically positive reports of their functioning due to impaired self-awareness. Accordingly, it is important to find performance based tests that allow objective and early identification of these problems. In the present study 51 moderate to severe TBI patients in the sub-acute and chronic stage were assessed with a test for emotion recognition (FEEST) and a questionnaire for behavioral problems (DEX) with a self and proxy rated version. Patients performed worse on the total score and on the negative emotion subscores of the FEEST than a matched group of 31 healthy controls. Patients also exhibited significantly more behavioral problems on both the DEX self and proxy rated version, but proxy ratings revealed more severe problems. No significant correlation was found between FEEST scores and DEX self ratings. However, impaired emotion recognition in the patients, and in particular of Sadness and Anger, was significantly correlated with behavioral problems as rated by proxies and with impaired self-awareness. This is the first study to find these associations, strengthening the proposed recognition of social signals as a condition for adequate social functioning. Hence, deficits in emotion recognition can be conceived as markers for behavioral problems and lack of insight in TBI patients. This finding is also of clinical importance since, unlike behavioral problems, emotion recognition can be objectively measured early after injury, allowing for early…

  20. Acute effects of delta-9-tetrahydrocannabinol, cannabidiol and their combination on facial emotion recognition: a randomised, double-blind, placebo-controlled study in cannabis users.

    Science.gov (United States)

    Hindocha, Chandni; Freeman, Tom P; Schafer, Grainne; Gardener, Chelsea; Das, Ravi K; Morgan, Celia J A; Curran, H Valerie

    2015-03-01

    Acute administration of the primary psychoactive constituent of cannabis, Δ-9-tetrahydrocannabinol (THC), impairs human facial affect recognition, implicating the endocannabinoid system in emotional processing. Another main constituent of cannabis, cannabidiol (CBD), has seemingly opposite functional effects on the brain. This study aimed to determine the effects of THC and CBD, both alone and in combination, on emotional facial affect recognition. 48 volunteers, selected for high and low frequency of cannabis use and schizotypy, were administered THC (8 mg), CBD (16 mg), THC+CBD (8 mg + 16 mg) and placebo, by inhalation, in a 4-way, double-blind, placebo-controlled crossover design. They completed an emotional facial affect recognition task including fearful, angry, happy, sad, surprise and disgust faces varying in intensity from 20% to 100%. A visual analogue scale (VAS) of feeling 'stoned' was also completed. In comparison to placebo, CBD improved emotional facial affect recognition at 60% emotional intensity; THC was detrimental to the recognition of ambiguous faces of 40% intensity. The combination of THC+CBD produced no impairment. Relative to placebo, both THC alone and combined THC+CBD equally increased feelings of being 'stoned'. CBD did not influence feelings of being 'stoned'. No effects of frequency of use or schizotypy were found. In conclusion, CBD improves recognition of emotional facial affect and attenuates the impairment induced by THC. This is the first human study examining the effects of different cannabinoids on emotional processing. It provides preliminary evidence that different pharmacological agents acting upon the endocannabinoid system can both improve and impair recognition of emotional faces.

  1. Developmental Changes in the Primacy of Facial Cues for Emotion Recognition

    Science.gov (United States)

    Leitzke, Brian T.; Pollak, Seth D.

    2016-01-01

    There have been long-standing differences of opinion regarding the influence of the face relative to that of contextual information on how individuals process and judge facial expressions of emotion. However, developmental changes in how individuals use such information have remained largely unexplored and could be informative in attempting to…

  2. Recognition of Emotional and Nonemotional Facial Expressions: A Comparison between Williams Syndrome and Autism

    Science.gov (United States)

    Lacroix, Agnes; Guidetti, Michele; Roge, Bernadette; Reilly, Judy

    2009-01-01

    The aim of our study was to compare two neurodevelopmental disorders (Williams syndrome and autism) in terms of the ability to recognize emotional and nonemotional facial expressions. The comparison of these two disorders is particularly relevant to the investigation of face processing and should contribute to a better understanding of social…

  3. Recognition of Facial Expressions of Emotion in Adults with Down Syndrome

    Science.gov (United States)

    Virji-Babul, Naznin; Watt, Kimberley; Nathoo, Farouk; Johnson, Peter

    2012-01-01

    Research on facial expressions in individuals with Down syndrome (DS) has been conducted using photographs. Our goal was to examine the effect of motion on perception of emotional expressions. Adults with DS, adults with typical development matched for chronological age (CA), and children with typical development matched for developmental age (DA)…

  4. Recognition of Facial Emotions among Maltreated Children with High Rates of Post-Traumatic Stress Disorder

    Science.gov (United States)

    Masten, Carrie L.; Guyer, Amanda E.; Hodgdon, Hilary B.; McClure, Erin B.; Charney, Dennis S.; Ernst, Monique; Kaufman, Joan; Pine, Daniel S.; Monk, Christopher S.

    2008-01-01

    Objective: The purpose of this study is to examine processing of facial emotions in a sample of maltreated children showing high rates of post-traumatic stress disorder (PTSD). Maltreatment during childhood has been associated independently with both atypical processing of emotion and the development of PTSD. However, research has provided little…

  5. Similar exemplar pooling processes underlie the learning of facial identity and handwriting style: Evidence from typical observers and individuals with Autism.

    Science.gov (United States)

    Ipser, Alberta; Ring, Melanie; Murphy, Jennifer; Gaigg, Sebastian B; Cook, Richard

    2016-05-01

    Considerable research has addressed whether the cognitive and neural representations recruited by faces are similar to those engaged by other types of visual stimuli. For example, research has examined the extent to which objects of expertise recruit holistic representation and engage the fusiform face area. Little is known, however, about the domain-specificity of the exemplar pooling processes thought to underlie the acquisition of familiarity with particular facial identities. In the present study we sought to compare observers' ability to learn facial identities and handwriting styles from exposure to multiple exemplars. Crucially, while handwritten words and faces differ considerably in their topographic form, both learning tasks share a common exemplar pooling component. In our first experiment, we find that typical observers' ability to learn facial identities and handwriting styles from exposure to multiple exemplars correlates closely. In our second experiment, we show that observers with Autism Spectrum Disorder (ASD) are impaired at both learning tasks. Our findings suggest that similar exemplar pooling processes are recruited when learning facial identities and handwriting styles. Models of exemplar pooling originally developed to explain face learning, may therefore offer valuable insights into exemplar pooling across a range of domains, extending beyond faces. Aberrant exemplar pooling, possibly resulting from structural differences in the inferior longitudinal fasciculus, may underlie difficulties recognising familiar faces often experienced by individuals with ASD, and leave observers overly reliant on local details present in particular exemplars.

  6. Research progress on facial recognition deficits in schizophrenia

    Institute of Scientific and Technical Information of China (English)

    徐骁; 谭淑平; 薛明明

    2015-01-01

    Social cognition is a key factor that influences and predicts the functional outcome of schizophrenia. Facial processing and facial expression perception are two core components of social cognitive function. In this review, we discuss facial recognition deficits in schizophrenia from both the emotional-face and non-emotional-face perspectives. Moreover, we review the underlying cognitive-neural mechanisms of facial recognition deficits in schizophrenia and summarize the latest research progress on facial recognition dysfunction in schizophrenic patients using eye movement technology.

  7. Role of fusiform and anterior temporal cortical areas in facial recognition.

    Science.gov (United States)

    Nasr, Shahin; Tootell, Roger B H

    2012-11-15

    Recent fMRI studies suggest that cortical face processing extends well beyond the fusiform face area (FFA), including unspecified portions of the anterior temporal lobe. However, the exact location of such anterior temporal region(s), and their role during active face recognition, remain unclear. Here we demonstrate that (in addition to FFA) a small bilateral site in the anterior tip of the collateral sulcus ('AT'; the anterior temporal face patch) is selectively activated during recognition of faces but not houses (a non-face object). In contrast to the psychophysical prediction that inverted and contrast reversed faces are processed like other non-face objects, both FFA and AT (but not other visual areas) were also activated during recognition of inverted and contrast reversed faces. However, response accuracy was better correlated to recognition-driven activity in AT, compared to FFA. These data support a segregated, hierarchical model of face recognition processing, extending to the anterior temporal cortex.

  8. Research progress of facial expression recognition in children

    Institute of Scientific and Technical Information of China (English)

    王道阳; 殷欣

    2015-01-01

    Recognition of facial expressions is an important psychological ability and social skill, and facial expression recognition disorders have a significant impact on children's interpersonal relationships and social interaction, especially for children with autism spectrum disorder. This paper discusses the research history, development process, influencing factors, future research directions, and limitations of research on facial expression recognition, and describes its implications for education. In addition, the facial expression recognition of children with autism spectrum disorder is specifically reviewed.

  9. Arginine vasopressin 1a receptor RS3 promoter microsatellites in schizophrenia: a study of the effect of the "risk" allele on clinical symptoms and facial affect recognition.

    Science.gov (United States)

    Golimbet, Vera; Alfimova, Margarita; Abramova, Lilia; Kaleda, Vasily; Gritsenko, Inga

    2015-02-28

    We studied the AVPR1A RS3 polymorphism in schizophrenic patients and controls. AVPR1A RS3 was not associated with schizophrenia. The 327-bp allele, implicated in autism and social behavior, was associated with negative symptoms and tended to be linked to patients' facial affect recognition, suggesting its impact on the social phenotypes of schizophrenia.

  10. Spatio-Temporal Pain Recognition in CNN-based Super-Resolved Facial Images

    DEFF Research Database (Denmark)

    Bellantonio, Marco; Haque, Mohammad Ahsanul; Rodriguez, Pau

    2017-01-01

    Automatic pain detection is a long expected solution to a prevalent medical problem of pain management. This is more relevant when the subject of pain is young children or patients with limited ability to communicate about their pain experience. Computer vision-based analysis of facial pain expression provides a way of efficient pain detection. When deep machine learning methods came into the scene, automatic pain detection exhibited even better performance. In this paper, we identified three important factors to exploit in automatic pain detection: spatial information regarding pain in each of the facial video frames, temporal axis information regarding the pain expression pattern in a subject's video sequence, and variation of face resolution. We employed a combination of a convolutional neural network and a recurrent neural network to set up a deep hybrid pain detection framework…
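
    The record does not specify the network, but the general shape of such a hybrid is easy to sketch: a per-frame CNN encoder feeding an LSTM over the temporal axis, ending in a pain logit. Everything below (layer sizes, 64x64 single-channel input, the PainDetector name) is illustrative rather than the authors' architecture, and the super-resolution stage is omitted.

    ```python
    import torch
    import torch.nn as nn

    class PainDetector(nn.Module):
        def __init__(self, feat_dim=128, hidden=64):
            super().__init__()
            self.cnn = nn.Sequential(          # spatial encoder, one frame at a time
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
                nn.Flatten(), nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
            )
            self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)  # temporal axis
            self.head = nn.Linear(hidden, 1)   # single pain / no-pain logit

        def forward(self, clips):              # clips: (batch, time, 1, H, W)
            b, t = clips.shape[:2]
            feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
            out, _ = self.rnn(feats)
            return self.head(out[:, -1])       # logit from the last time step

    # Example: 2 clips of 8 frames at 64x64 -> a (2, 1) tensor of pain logits
    logits = PainDetector()(torch.randn(2, 8, 1, 64, 64))
    ```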

  11. Accuracy and reaction time in recognition of facial emotions in people with multiple sclerosis.

    Science.gov (United States)

    Parada-Fernández, Pamela; Oliva-Macías, Mireia; Amayra, Imanol; López-Paz, Juan F; Lázaro, Esther; Martínez, Óscar; Jometón, Amaia; Berrocoso, Sarah; García de Salazar, Héctor; Pérez, Manuel

    2015-11-16

    Introduction. Emotional facial expression is a basic guide in social interaction, and alterations in its expression or recognition therefore imply an important limitation for communication. Moreover, it is not known how the cognitive impairment and depressive symptoms commonly found in patients with multiple sclerosis influence emotion recognition. Aim. To assess reaction time and response accuracy in the recognition of facial expressions by people affected by multiple sclerosis, and to evaluate variables that may modulate emotion recognition, such as depression and cognitive functions. Subjects and methods. The study has a non-experimental, cross-sectional design with a single measurement. The sample comprised 85 participants: 45 with a diagnosis of multiple sclerosis and 40 control subjects. Results. Subjects with multiple sclerosis showed significant differences in both reaction time and response accuracy on neuropsychological tests compared with the control group. Explanatory models of emotion recognition were identified. Conclusion. Subjects with multiple sclerosis face difficulties in recognizing facial emotions, and differences in memory, attention, processing speed and depressive symptomatology were observed relative to the control group.

  12. Wanting it Too Much: An Inverse Relation Between Social Motivation and Facial Emotion Recognition in Autism Spectrum Disorder.

    Science.gov (United States)

    Garman, Heather D; Spaulding, Christine J; Webb, Sara Jane; Mikami, Amori Yee; Morris, James P; Lerner, Matthew D

    2016-12-01

    This study examined social motivation and early-stage face perception as frameworks for understanding impairments in facial emotion recognition (FER) in a well-characterized sample of youth with autism spectrum disorders (ASD). Early-stage face perception (N170 event-related potential latency) was recorded while participants completed a standardized FER task, whereas social motivation was obtained via parent report. Participants with greater social motivation exhibited poorer FER, while those with shorter N170 latencies exhibited better FER for angry child-face stimuli. Social motivation partially mediated the relationship between a faster N170 and better FER. These effects were all robust to variations in IQ, age, and ASD severity. These findings augur against theories implicating social motivation as uniformly valuable for individuals with ASD, and augment models suggesting a close link between early-stage face perception, social motivation, and FER in this population. Broader implications for models and development of FER in ASD are discussed.
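
    The mediation claim here (early-stage face perception, social motivation, FER) is the kind of analysis a product-of-coefficients sketch can make concrete. The snippet below runs that decomposition on simulated data; the variable roles, effect sizes, and the absence of bootstrap confidence intervals are all illustrative, not the study's actual analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 40                                   # simulated sample, not the study's data
    x = rng.normal(size=n)                   # predictor, e.g. (negated) N170 latency
    m = 0.5 * x + rng.normal(size=n)         # mediator, e.g. social motivation
    y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # outcome, e.g. FER accuracy

    def slope(target, *preds):
        """OLS coefficient of the first predictor, controlling for the rest."""
        X = np.column_stack([np.ones(len(target)), *preds])
        return np.linalg.lstsq(X, target, rcond=None)[0][1]

    a = slope(m, x)       # path a: predictor -> mediator
    b = slope(y, m, x)    # path b: mediator -> outcome, controlling for predictor
    c = slope(y, x)       # total effect
    print(f"indirect effect a*b = {a*b:.3f}, total effect c = {c:.3f}")
    ```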

  13. Using identity and recognition as a framework to understand and promote the resilience of caregiving children in western Kenya

    DEFF Research Database (Denmark)

    Skovdal, Morten; Andreouli, E.

    2011-01-01

    … experience young caregiving. This paper seeks to further our understanding of caregiving children in Africa by looking at how local constructions of childhood can facilitate their agency and resilience, paying particular attention to the role of identity and recognition. The study involved 48 caregiving children… Their participation is encouraged by local understandings of childhood and recognition of their efforts, enabling the children to construct positive identities that enhance their resilience. The paper argues that the way in which caregiving children in Kenya respond to their circumstances is influenced by a social recognition of their activities and agency. This recognition, mediated by local representations of childhood, allows the children to construct positive social identities that facilitate resilience. We conclude that there is a need for policy and practice on young caregiving, in all countries and contexts…

  14. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    Science.gov (United States)

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms.

  15. Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.

    Science.gov (United States)

    Wieckowski, Andrea Trubanova; White, Susan W

    2017-01-01

    Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expressive ability deficiency in ASD. One recognition and two expression tasks were administered. Recognition was assessed in a forced-choice paradigm, and expression was assessed during scripted and free-choice response (in response to emotional stimuli) tasks in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less on the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, such that those with ASD expressed emotion less clearly, but not during the scripted task. Results suggest altered eye gaze to the mouth region, but not the eye region, as a candidate mechanism for decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and emotion recognition and expression deficits.

  16. Facial Expression Recognition under Partial Occlusion

    Institute of Scientific and Technical Information of China (English)

    李蕊; 刘鹏宇; 贾克斌

    2016-01-01

    We propose a novel facial expression recognition method based on Gabor filters and the gray-level co-occurrence matrix, aimed at facial expression recognition under partial occlusion. We first design an approach that extracts Gabor feature statistics in blocks, generating a low-dimensional Gabor feature vector. Then, because the block-wise Gabor features lose the association between pixels, we introduce the gray-level co-occurrence matrix, which reflects the spatial distribution of pixel pairs, into the expression recognition field to compensate for this deficiency. Finally, the extracted low-dimensional Gabor feature vector and the gray-level co-occurrence matrix texture features are linearly combined and Gaussian-normalized, yielding a set of low-dimensional feature vectors for feature representation. Experiments on JAFFE and RaFD show that the algorithm is highly robust, produces low-dimensional feature vectors, classifies quickly, and achieves good recognition rates for facial expressions occluded in different regions and to different degrees.
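
    A rough sketch of that descriptor, under stated assumptions: block-wise Gabor magnitude statistics concatenated with a handful of GLCM texture properties, then z-scored as a stand-in for the Gaussian normalisation step. The filter frequencies, block grid, and GLCM settings are illustrative, gabor_block_stats/glcm_features/describe are hypothetical names, and graycomatrix/graycoprops assume scikit-image 0.19 or newer.

    ```python
    import numpy as np
    from skimage.filters import gabor
    from skimage.feature import graycomatrix, graycoprops

    def gabor_block_stats(img, freqs=(0.1, 0.2, 0.3), grid=4):
        # Mean and std of the Gabor magnitude per block: a low-dimensional
        # statistic instead of the full filter response
        h, w = img.shape
        bh, bw = h // grid, w // grid
        feats = []
        for f in freqs:
            real, imag = gabor(img, frequency=f)
            mag = np.hypot(real, imag)
            for i in range(grid):
                for j in range(grid):
                    block = mag[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
                    feats += [block.mean(), block.std()]
        return np.array(feats)

    def glcm_features(img, levels=32):
        # Quantise to `levels` gray levels, then pool co-occurrence properties
        q = (img / max(img.max(), 1e-9) * (levels - 1)).astype(np.uint8)
        glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        props = ("contrast", "homogeneity", "energy", "correlation")
        return np.concatenate([graycoprops(glcm, p).ravel() for p in props])

    def describe(img):
        # Concatenate both descriptors, then z-score ("Gaussian normalisation")
        v = np.concatenate([gabor_block_stats(img), glcm_features(img)])
        return (v - v.mean()) / (v.std() + 1e-9)
    ```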

  17. Brain functional changes in facial expression recognition in patients with major depressive disorder before and after antidepressant treatment: a functional magnetic resonance imaging study

    Institute of Scientific and Technical Information of China (English)

    Wenyan Jiang; Zhongmin Yin; Yixin Pang; Feng Wu; Lingtao Kong; Ke Xu

    2012-01-01

    Functional magnetic resonance imaging was used during emotion recognition to identify changes in functional brain activation in 21 first-episode, treatment-naive major depressive disorder patients before and after antidepressant treatment. Following escitalopram oxalate treatment, patients exhibited decreased activation in bilateral precentral gyrus, bilateral middle frontal gyrus, left middle temporal gyrus, bilateral postcentral gyrus, left cingulate and right parahippocampal gyrus, and increased activation in right superior frontal gyrus, bilateral superior parietal lobule and left occipital gyrus during sad facial expression recognition. After antidepressant treatment, patients also exhibited decreased activation in the bilateral middle frontal gyrus, bilateral cingulate and right parahippocampal gyrus, and increased activation in the right inferior frontal gyrus, left fusiform gyrus and right precuneus during happy facial expression recognition. Our experimental findings indicate that the limbic-cortical network might be a key target region for antidepressant treatment in major depressive disorder.