WorldWideScience

Sample records for facial recognition test

  1. Facial Recognition

    National Research Council Canada - National Science Library

    Mihalache Sergiu; Stoica Mihaela-Zoica

    2014-01-01

    .... From birth, faces are important in the individual's social interaction. Face perceptions are very complex as the recognition of facial expressions involves extensive and diverse areas in the brain...

  2. Facial Recognition

    Directory of Open Access Journals (Sweden)

    Mihalache Sergiu

    2014-05-01

    During their lifetime, people learn to recognize the thousands of faces they interact with. Face perception refers to an individual's understanding and interpretation of the face, particularly the human face, especially in relation to the associated information processing in the brain. The proportions and expressions of the human face are important for identifying origin, emotional tendencies, health qualities, and some social information. From birth, faces are important in the individual's social interaction. Face perception is very complex, as the recognition of facial expressions involves extensive and diverse areas in the brain. Our main goal is to present specialized studies of human faces and to highlight the importance of attractiveness in their retention. We will see that there are many factors that influence face recognition.

  3. PCA facial expression recognition

    Science.gov (United States)

    El-Hori, Inas H.; El-Momen, Zahraa K.; Ganoun, Ali

    2013-12-01

    This paper explores and compares techniques for automatically recognizing facial actions in sequences of images. A comparative study of two Facial Expression Recognition (FER) techniques, Principal Component Analysis (PCA) and PCA with Gabor filters (GF), is presented. The objective of this research is to show that PCA with Gabor filters is superior to PCA alone in terms of recognition rate. To test and evaluate their performance, experiments were performed with both techniques on a real database. The five universally accepted principal emotions to be recognized are happiness, sadness, disgust, and anger, along with the neutral expression. Recognition rates are reported for all facial expressions.
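
    The abstract's pipeline is easy to mirror in code. Below is a minimal Python sketch of the two compared systems; the filter-bank parameters, the 1-nearest-neighbour classifier, and the data layout are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def gabor_features(img, frequencies=(0.1, 0.2),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Concatenate Gabor magnitude responses over a small filter bank."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(img, frequency=f, theta=t)
            feats.append(np.hypot(real, imag).ravel())
    return np.concatenate(feats)

def fit_fer(face_imgs, labels, use_gabor=True, n_components=50):
    """PCA on raw pixels (use_gabor=False) vs. PCA on Gabor features.
    n_components must not exceed the number of training images."""
    X = np.stack([gabor_features(im) if use_gabor else im.ravel()
                  for im in face_imgs])
    clf = make_pipeline(PCA(n_components=n_components),
                        KNeighborsClassifier(n_neighbors=1))
    return clf.fit(X, labels)
```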

  4. Facial Expression Recognition

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial

  5. Development of Facial Emotion Recognition in Childhood : Age-related Differences in a Shortened Version of the Facial Expressions of Emotion - Stimuli and Tests

    NARCIS (Netherlands)

    Coenen, Maraike; Aarnoudse, Ceciel; Huitema, Rients; Braams, Olga; Veenstra, Wencke S.

    2013-01-01

    Introduction Facial emotion recognition is essential for social interaction. The development of emotion recognition abilities is not yet entirely understood (Tonks et al. 2007). Facial emotion recognition emerges gradually, with happiness recognized earliest (Herba & Phillips, 2004). The recognition

  7. Test battery for measuring the perception and recognition of facial expressions of emotion

    Science.gov (United States)

    Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner

    2014-01-01

    Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528

  8. [Neurological disease and facial recognition].

    Science.gov (United States)

    Kawamura, Mitsuru; Sugimoto, Azusa; Kobayakawa, Mutsutaka; Tsuruya, Natsuko

    2012-07-01

    To discuss the neurological basis of facial recognition, we present our case reports of impaired recognition together with a review of the previous literature. First, we present a case of infarction and discuss prosopagnosia, which has had a large impact on face recognition research. From a study of patient symptoms, we infer that prosopagnosia may be caused by a unilateral right occipitotemporal lesion, consistent with right cerebral dominance of facial recognition. Furthermore, circumscribed lesions and degenerative disease may also cause progressive prosopagnosia. Apperceptive prosopagnosia is observed in patients with posterior cortical atrophy (PCA), pathologically considered to be Alzheimer's disease, and associative prosopagnosia in frontotemporal lobar degeneration (FTLD). Second, we discuss face recognition as part of communication. Patients with Parkinson disease show social cognitive impairments, such as difficulty in facial expression recognition and deficits in theory of mind as detected by the Reading the Mind in the Eyes test. Pathological and functional imaging studies indicate that social cognitive impairment in Parkinson disease is possibly related to damage to the amygdalae and the surrounding limbic system. These social cognitive deficits can be observed in the early stages of Parkinson disease, and even in the prodromal stage; for example, patients with rapid eye movement (REM) sleep behavior disorder (RBD) show impairment in facial expression recognition. Furthermore, patients with myotonic dystrophy type 1 (DM1), a multisystem disease that mainly affects the muscles, show social cognitive impairment similar to that of Parkinson disease. Our previous study showed that the facial expression recognition impairment of DM1 patients is associated with lesions in the amygdalae and insulae. Our results indicate that the behaviors and personality traits of DM1 patients, as revealed by social cognitive impairment, are attributable to dysfunction of the limbic system.

  9. [Prosopagnosia and facial expression recognition].

    Science.gov (United States)

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  10. Italian normative data and validation of two neuropsychological tests of face recognition: Benton Facial Recognition Test and Cambridge Face Memory Test.

    Science.gov (United States)

    Albonico, Andrea; Malaspina, Manuela; Daini, Roberta

    2017-06-21

    The Benton Facial Recognition Test (BFRT) and Cambridge Face Memory Test (CFMT) are two of the most common tests used to assess face discrimination and recognition abilities and to identify individuals with prosopagnosia. However, recent studies have highlighted that participant-stimulus ethnicity match, as well as gender, has to be taken into account when interpreting results from these tests. Here, in order to obtain more appropriate normative data for an Italian sample, the CFMT and BFRT were administered to a large cohort of young adults. We found that scores on the BFRT are not affected by participants' gender and are only slightly affected by participant-stimulus ethnicity match, whereas both factors seem to influence scores on the CFMT. Moreover, the inclusion of a sample of individuals with suspected face recognition impairment allowed us to show that the use of more appropriate normative data can increase the BFRT's efficacy in identifying individuals with face discrimination impairments; by contrast, the efficacy of the CFMT in classifying individuals with a face recognition deficit was confirmed. Finally, our data show that the lack of an inversion effect (the difference between the total scores on the upright and inverted versions of the CFMT) could be used as a further index in assessing congenital prosopagnosia. Overall, our results confirm the importance of having norms derived from controls with a similar experience of faces as the "potential" prosopagnosic individuals when assessing face recognition abilities.
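
    For readers applying such norms, the two indices mentioned above reduce to a few lines of arithmetic. The cutoff and the norm values in this Python sketch are placeholders, not the figures published in this study.

```python
def inversion_effect(upright_total, inverted_total):
    """CFMT inversion effect: upright total minus inverted total.
    An absent effect may flag congenital prosopagnosia."""
    return upright_total - inverted_total

def z_against_norms(score, norm_mean, norm_sd):
    """Standardize an individual's score against normative data."""
    return (score - norm_mean) / norm_sd

# Hypothetical values; a z of -2 or below is a common impairment cutoff,
# not necessarily the criterion adopted by these authors.
z = z_against_norms(score=38, norm_mean=58.0, norm_sd=7.5)
possibly_impaired = z <= -2
```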

  11. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition.

    Science.gov (United States)

    de Gelder, Beatrice; Huis In 't Veld, Elisabeth M J; Van den Stock, Jan

    2015-01-01

    There are many ways to assess face perception skills. In this study, we describe a novel task battery, the Facial Expressive Action Stimulus Test (FEAST), developed to test recognition of the identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data from a healthy sample of controls in two age groups for future users of the FEAST.

  12. Wavelet based approach for facial expression recognition

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin

    2015-03-01

    Facial expression recognition is one of the most active fields of research, and many facial expression recognition methods have been developed and implemented. Neural networks (NNs) have the capability to undertake such pattern recognition tasks: they can learn and generalize, perform non-linear mapping, and compute in parallel. Backpropagation neural networks (BPNNs) are the most commonly used approach. In this study, BPNNs were used as classifiers to categorize facial expression images into seven classes of expression: anger, disgust, fear, happiness, sadness, neutral, and surprise. For feature extraction, three discrete wavelet transforms were used to decompose the images: the Haar wavelet, the Daubechies (4) wavelet, and the Coiflet (1) wavelet. To analyze the proposed method, a facial expression recognition system was built and tested on static images from the JAFFE database.
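
    A compact Python sketch of this pipeline follows; PyWavelets supplies the three named wavelets, and scikit-learn's MLPClassifier (a backpropagation-trained network) stands in for the paper's BPNN, whose topology the abstract does not give.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(img, wavelet="haar"):
    """One-level 2-D DWT; keep the approximation sub-band as the descriptor.
    'haar', 'db4', and 'coif1' match the three wavelets named above."""
    cA, (cH, cV, cD) = pywt.dwt2(img, wavelet)
    return cA.ravel()

def train_bpnn(face_imgs, labels, wavelet="haar"):
    """Train a backpropagation network over wavelet features
    (hidden-layer size and iteration budget are assumptions)."""
    X = np.stack([wavelet_features(im, wavelet) for im in face_imgs])
    net = MLPClassifier(hidden_layer_sizes=(100,), max_iter=500)
    return net.fit(X, labels)
```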

  13. Emotion recognition from facial expressions: a normative study of the Ekman 60-Faces Test in the Italian population.

    Science.gov (United States)

    Dodich, Alessandra; Cerami, Chiara; Canessa, Nicola; Crespi, Chiara; Marcone, Alessandra; Arpone, Marta; Realmuto, Sabrina; Cappa, Stefano F

    2014-07-01

    The Ekman 60-Faces (EK-60F) Test is a well-known neuropsychological tool for assessing emotion recognition from facial expressions. It is the most widely employed task for research purposes in psychiatric and neurological disorders, including neurodegenerative diseases such as the behavioral variant of Frontotemporal Dementia (bvFTD). Despite its remarkable usefulness in the social cognition research field, to date there are no normative data for the Italian population, limiting its application in clinical contexts. In this study, we report procedures and normative data for the Italian version of the test. One hundred and thirty-two healthy Italian participants aged between 20 and 79 years, with at least 5 years of education, were recruited on a voluntary basis. They were administered the EK-60F Test from the Ekman and Friesen series of Pictures of Facial Affect after a preliminary semantic recognition test of the six basic emotions (i.e., anger, fear, sadness, happiness, disgust, surprise). Data were analyzed according to the Capitani procedure [1]. The regression analysis revealed significant effects of demographic variables, with younger, more educated, female subjects showing higher scores. The normative data were then applied to a sample of 15 bvFTD patients, who showed globally impaired performance on the task, consistent with their clinical condition. We thus provide EK-60F Test normative data for the Italian population, allowing the investigation of global emotion recognition ability as well as of selective impairments in the recognition of basic emotions, for both clinical and research purposes.

  14. Facial Expression at Retrieval Affects Recognition of Facial Identity

    Directory of Open Access Journals (Sweden)

    Wenfeng eChen

    2015-06-01

    It is well known that memory can be modulated by emotional stimuli at the time of encoding and consolidation. For example, happy faces create better identity recognition than faces with certain other expressions. However, the influence of facial expression at the time of retrieval remains unexplored in the literature. To separate the potential influence of expression at retrieval from its effects at earlier stages, we had participants learn neutral faces but manipulated facial expression at the time of memory retrieval in a standard old/new recognition task. The results showed a clear effect of facial expression, with happy test faces identified more successfully than angry test faces. This effect is unlikely to be due to greater image similarity between the neutral learning face and the happy test face, because image analysis showed that the happy test faces are in fact less similar to the neutral learning faces than the angry test faces are. In the second experiment, we investigated whether this emotional effect is influenced by the expression at the time of learning. We employed angry or happy faces as learning stimuli, and angry, happy, and neutral faces as test stimuli. The results showed that the emotional effect at retrieval is robust across different encoding conditions with happy or angry expressions. These findings indicate that emotional expressions affect the retrieval process in identity recognition, and that identity recognition does not rely on an emotional association between learning and test faces.
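
    The control analysis mentioned above (checking that happy test faces are not simply more similar to the neutral learning faces) can be approximated with a plain pixel-correlation measure; the paper's actual similarity metric is not specified in the abstract.

```python
import numpy as np

def image_similarity(a, b):
    """Pearson correlation between pixel intensities of two same-size
    grayscale images; a crude but common image-similarity proxy."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# If image_similarity(neutral, happy) < image_similarity(neutral, angry),
# the retrieval advantage for happy faces cannot be an image-overlap artifact.
```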

  15. Facial Expression Recognition Using SVM Classifier

    OpenAIRE

    2015-01-01

    Facial feature tracking and facial action recognition from image sequences have attracted great attention in the computer vision field. Computational facial expression analysis is a challenging research topic in computer vision. It is required by many applications, such as human-computer interaction, computer graphics animation, and automatic facial expression recognition. In recent years, plenty of computer vision techniques have been developed to track or recognize facial activities at three levels...

  16. Facial emotion recognition in remitted depressed women.

    Science.gov (United States)

    Biyik, Utku; Keskin, Duygu; Oguz, Kaya; Akdeniz, Fisun; Gonul, Ali Saffet

    2015-10-01

    Although major depressive disorder (MDD) is primarily characterized by mood symptoms, depressed patients show impairments in facial emotion recognition across many of the basic emotions (anger, fear, happiness, surprise, disgust, and sadness). On the other hand, the data on remitted MDD (rMDD) patients are inconsistent, and it is not clear whether those impairments persist in remission. To extend the current findings, we administered a facial emotion recognition test to a group of remitted depressed women and compared their results to those of controls. Analysis of variance revealed a significant emotion-by-group interaction, and in the post hoc analyses, rMDD patients had a higher accuracy rate for the recognition of sadness than controls. There were no differences in reaction time between patients and controls across all the basic emotions. The higher recognition rate for sad faces in rMDD patients might contribute to impairments in social communication and to the prognosis of the disease.

  17. Facial Action Units Recognition: A Comparative Study

    NARCIS (Netherlands)

    Popa, M.C.; Rothkrantz, L.J.M.; Wiggers, P.; Braspenning, R.A.C.; Shan, C.

    2011-01-01

    Many approaches to facial expression recognition focus on assessing the six basic emotions (anger, disgust, happiness, fear, sadness, and surprise). Real-life situations proved to produce many more subtle facial expressions. A reliable way of analyzing the facial behavior is the Facial Action Coding

  19. Simultaneous facial feature tracking and facial expression recognition.

    Science.gov (United States)

    Li, Yongqiang; Wang, Shangfei; Zhao, Yongping; Ji, Qiang

    2013-07-01

    The tracking and recognition of facial activities from images or videos have attracted great attention in the computer vision field. Facial activities are characterized at three levels. First, at the bottom level, facial feature points around each facial component, i.e., eyebrow, mouth, etc., capture detailed face shape information. Second, at the middle level, facial action units, defined in the facial action coding system, represent the contraction of a specific set of facial muscles, i.e., lid tightener, eyebrow raiser, etc. Finally, at the top level, six prototypical facial expressions represent global facial muscle movement and are commonly used to describe human emotion states. In contrast to mainstream approaches, which usually focus on only one or two levels of facial activity and track (or recognize) them separately, this paper introduces a unified probabilistic framework based on a dynamic Bayesian network to simultaneously and coherently represent facial evolution at the different levels, along with their interactions and observations. Advanced machine learning methods are introduced to learn the model from both training data and subjective prior knowledge. Given the model and the measurements of facial motions, all three levels of facial activity are recognized simultaneously through probabilistic inference. Extensive experiments are performed to illustrate the feasibility and effectiveness of the proposed model on all three levels of facial activity.

  20. Fusing Facial Features for Face Recognition

    Directory of Open Access Journals (Sweden)

    Jamal Ahmad Dargham

    2012-06-01

    Face recognition is an important biometric method because of its potential applications in many fields, such as access control, surveillance, and human-computer interaction. In this paper, a face recognition system that fuses the outputs of three Gabor-jet-based face recognition systems is presented. The first system uses the magnitude, the second the phase, and the third the phase-weighted magnitude of the jets. The jets are generated from facial landmarks selected using three selection methods. It was found that fusing the facial features gives a better recognition rate than any single feature used individually, regardless of the landmark selection method.
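
    The abstract does not state the fusion rule, so the Python sketch below assumes simple score-level averaging of the three Gabor-jet systems, which is the most common baseline.

```python
import numpy as np

def fuse_scores(magnitude, phase, phase_weighted, weights=(1.0, 1.0, 1.0)):
    """Weighted mean of per-gallery match scores from the three systems."""
    return np.average(np.stack([magnitude, phase, phase_weighted]),
                      axis=0, weights=weights)

def identify(scores_per_system, gallery_ids):
    """Return the gallery identity with the highest fused score."""
    fused = fuse_scores(*scores_per_system)
    return gallery_ids[int(np.argmax(fused))]
```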

  1. Development of Emotional Facial Recognition in Late Childhood and Adolescence

    Science.gov (United States)

    Thomas, Laura A.; De Bellis, Michael D.; Graham, Reiko; Labar, Kevin S.

    2007-01-01

    The ability to interpret emotions in facial expressions is crucial for social functioning across the lifespan. Facial expression recognition develops rapidly during infancy and improves with age during the preschool years. However, the developmental trajectory from late childhood to adulthood is less clear. We tested older children, adolescents…

  3. Facial expression recognition using thermal image.

    Science.gov (United States)

    Jiang, Guotai; Song, Xuemin; Zheng, Fuhui; Wang, Peipei; Omer, Ashgan

    2005-01-01

    In this paper, facial expression recognition is studied using mathematical morphology, by extracting and analyzing the global and local geometric characteristics of the regions of interest in Infrared Thermal Imaging (IRTI). The results show that the geometric characteristics of the regions of interest differ markedly across expressions, and that facial temperature changes almost simultaneously with expression. These findings demonstrate the feasibility of facial expression recognition on the basis of IRTI. The method can be used to monitor facial expressions in real time, which could support auxiliary medical diagnosis.

  4. Robust facial expression recognition via compressive sensing.

    Science.gov (United States)

    Zhang, Shiqing; Zhao, Xiaoming; Lei, Bicheng

    2012-01-01

    Recently, compressive sensing (CS) has attracted increasing attention in the areas of signal processing, computer vision, and pattern recognition. In this paper, a new method based on CS theory is presented for robust facial expression recognition. The CS theory is used to construct a sparse representation classifier (SRC). The effectiveness and robustness of the SRC method are investigated on clean and occluded facial expression images. Three typical facial features, i.e., raw pixels, the Gabor wavelet representation, and local binary patterns (LBP), are extracted to evaluate the performance of the SRC method. Compared with the nearest neighbor (NN), linear support vector machines (SVM), and the nearest subspace (NS), experimental results on the popular Cohn-Kanade facial expression database demonstrate that the SRC method achieves better performance and stronger robustness to corruption and occlusion in robust facial expression recognition tasks.
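
    A sparse representation classifier of the kind described can be sketched with an l1 solver; the lasso penalty below approximates the CS-style l1 minimization, and alpha is a tuning guess rather than a value from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(train_feats, train_labels, probe, alpha=0.01):
    """Code the probe as a sparse combination of training samples, then
    assign the class whose atoms give the smallest reconstruction residual."""
    A = np.asarray(train_feats, dtype=float).T   # columns = training samples
    y = np.asarray(train_labels)
    x = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(A, probe).coef_
    classes = np.unique(y)
    residuals = [np.linalg.norm(probe - A @ np.where(y == c, x, 0.0))
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```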

  5. [Measuring impairment of facial affect recognition in schizophrenia. Preliminary study of the facial emotion recognition task (TREF)].

    Science.gov (United States)

    Gaudelus, B; Virgile, J; Peyroux, E; Leleu, A; Baudouin, J-Y; Franck, N

    2015-06-01

    Impairment of social cognition, including facial affect recognition, is a well-established trait in schizophrenia, and specific cognitive remediation programs focusing on facial affect recognition have been developed by different teams worldwide. However, even though social cognitive impairments have been confirmed, previous studies have also shown heterogeneity of results between subjects. Therefore, individual abilities should be measured before proposing such programs. Most research teams apply tasks based on the facial affect recognition sets of Ekman et al. or Gur et al.; however, these tasks are not easily applicable in clinical practice. Here, we present the Facial Emotion Recognition Test (TREF), which is designed to identify facial affect recognition impairments in clinical practice. The test is composed of 54 photos and evaluates the recognition of six universal emotions (joy, anger, sadness, fear, disgust, and contempt). Each emotion is represented by colored photos of four different models (two men and two women) at nine intensity levels from 20 to 100%. Each photo is presented for 10 seconds, with no time limit for responding. The present study compared TREF scores in a sample of healthy controls (64 subjects) and people with stabilized schizophrenia (45 subjects) according to DSM IV-TR criteria. We analysed global scores for all emotions, as well as sub-scores for each emotion, between these two groups, taking gender differences into account. Our results were consistent with previous findings. Applying the TREF, we confirmed an impairment in facial affect recognition in schizophrenia by showing significant differences between the two groups in global results (76.45% for healthy controls versus 61.28% for people with schizophrenia), as well as in sub-scores for each emotion except joy. Scores for women were significantly higher than for men in the population

  6. Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Hameed Siddiqi

    2013-12-01

    Over the last decade, human facial expression recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem. These include varying light conditions in training and test images; the need for automatic and accurate face detection before feature extraction; and high similarity among different expressions, which makes it difficult to distinguish them with high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expression recognition (HL-FER) system to tackle these problems. Unlike previous systems, the HL-FER uses a pre-processing step to eliminate lighting effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and utilizes a hierarchical recognition scheme to overcome the problem of high similarity among different expressions. Unlike most previous works, which were evaluated on a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross-validation based on subjects for each dataset separately; n-fold cross-validation across datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. A weighted average recognition accuracy of 98.7% across the three datasets, using three classifiers, indicates the success of employing the HL-FER for human FER.

  7. Traditional facial tattoos disrupt face recognition processes.

    Science.gov (United States)

    Buttle, Heather; East, Julie

    2010-01-01

    Factors that are important to successful face recognition, such as features, configuration, and pigmentation/reflectance, are all subject to change when a face has been engraved with ink markings. Here we show that the application of facial tattoos, in the form of spiral patterns (typically associated with the Maori tradition of a Moko), disrupts face recognition to a similar extent as face inversion, with recognition accuracy little better than chance performance (2AFC). These results indicate that facial tattoos can severely disrupt our ability to recognise a face that previously did not have the pattern.

  8. Facial expression recognition in perceptual color space.

    Science.gov (United States)

    Lajevardi, Seyed Mehdi; Wu, Hong Ren

    2012-08-01

    This paper introduces a tensor perceptual color framework (TPCF) for facial expression recognition (FER), based on the information contained in color facial images. The TPCF enables multi-linear image analysis in different color spaces and demonstrates that color components provide additional information for robust FER. Using this framework, the components (in RGB, YCbCr, CIELab, or CIELuv space) of color images are unfolded into two-dimensional (2-D) tensors based on multi-linear algebra and tensor concepts, from which features are extracted by Log-Gabor filters. The mutual information quotient (MIQ) method is employed for feature selection. These features are classified using a multi-class linear discriminant analysis (LDA) classifier. The effectiveness of color information on FER is assessed using low-resolution facial expression images with illumination variations. Experimental results demonstrate that color information has significant potential to improve emotion recognition performance due to the complementary characteristics of image textures. Furthermore, the perceptual color spaces (CIELab and CIELuv) are better overall for FER than the other color spaces, providing more efficient and robust performance on facial images with illumination variation.
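
    A rough Python analogue of the color-space stage is shown below. Ordinary Gabor filters from scikit-image replace the paper's Log-Gabor bank (which has no off-the-shelf call there), and the MIQ feature-selection step is omitted.

```python
import numpy as np
from skimage.color import rgb2lab
from skimage.filters import gabor
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def lab_gabor_features(rgb_img, frequency=0.2):
    """Per-channel Gabor magnitudes computed in CIELab."""
    lab = rgb2lab(rgb_img)
    feats = []
    for ch in range(3):
        real, imag = gabor(lab[..., ch], frequency=frequency)
        feats.append(np.hypot(real, imag).ravel())
    return np.concatenate(feats)

def train_color_fer(rgb_imgs, labels):
    """Multi-class LDA over CIELab Gabor features, as a simplified TPCF."""
    X = np.stack([lab_gabor_features(im) for im in rgb_imgs])
    return LinearDiscriminantAnalysis().fit(X, labels)
```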

  9. Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder.

    Science.gov (United States)

    Sachse, Michael; Schlitt, Sabine; Hainz, Daniela; Ciaramidaro, Angela; Walter, Henrik; Poustka, Fritz; Bölte, Sven; Freitag, Christine M

    2014-11-01

    Schizophrenia (SZ) and autism spectrum disorder (ASD) share deficits in emotion processing. In order to identify convergent and divergent mechanisms, we investigated facial emotion recognition in SZ, high-functioning ASD (HFASD), and typically developed controls (TD). Different degrees of task difficulty and emotion complexity (face, eyes; basic emotions, complex emotions) were used. Two Benton tests were administered in order to assess potentially confounding visuo-perceptual functioning and facial processing. Nineteen participants with paranoid SZ, 22 with HFASD, and 20 TD were included, aged between 14 and 33 years. Individuals with SZ were comparable to TD in all obtained emotion recognition measures but showed reduced basic visuo-perceptual abilities. The HFASD group was impaired in the recognition of basic and complex emotions compared to both SZ and TD. When facial identity recognition was adjusted for, group differences remained for the recognition of complex emotions only. Our results suggest that there is an SZ subgroup with predominantly paranoid symptoms that does not show problems in face processing and emotion recognition, but rather visuo-perceptual impairments. They also confirm the notion of a general facial and emotion recognition deficit in HFASD. No shared emotion recognition deficit was found for paranoid SZ and HFASD, emphasizing the differential cognitive underpinnings of the two disorders.

  10. Facial emotion recognition in patients with violent schizophrenia.

    Science.gov (United States)

    Demirbuga, Sedat; Sahin, Esat; Ozver, Ismail; Aliustaoglu, Suheyla; Kandemir, Eyup; Varkal, Mihriban D; Emul, Murat; Ince, Haluk

    2013-03-01

    People with schizophrenia are more likely to be considered violent than the general population. Besides some well-described symptoms, patients with schizophrenia have problems recognizing basic facial emotions, which could underlie the misinterpretation of others' intentions that may lead to violent behavior. We aimed to investigate facial emotion recognition ability in violent and non-violent patients with schizophrenia. Symptom severity in both groups was evaluated with the Positive and Negative Syndrome Scale. A computer-based test comprising photos of four male and four female models with happy, surprised, fearful, sad, angry, disgusted, and neutral facial expressions from Ekman and Friesen's series was administered to both groups. In total, 41 outpatients with violent schizophrenia and 35 outpatients with non-violent schizophrenia participated in the study. The mean age of the violent schizophrenia group was 41.50±7.56 years, and the mean age of the non-violent group was 39.94±6.79 years. There were no significant differences between groups in reaction time for recognizing each emotion (p>0.05). In addition, the accuracy rates on the facial emotion recognition test for each emotion and the distribution of misidentifications did not differ significantly between groups (p>0.05). Facial emotion recognition is deficient in violent schizophrenia, and our findings suggest that this deficit is a trait feature of the illness. Copyright © 2012 Elsevier B.V. All rights reserved.

  11. Facial emotion recognition and alexithymia in adults with somatoform disorders.

    Science.gov (United States)

    Pedrosa Gil, Francisco; Ridout, Nathan; Kessler, Henrik; Neuffer, Michaela; Schoechlin, Claudia; Traue, Harald C; Nickel, Marius

    2009-01-01

    The primary aim of this study was to investigate facial emotion recognition in patients with somatoform disorders (SFD). Also of interest was the extent to which concurrent alexithymia contributed to any changes in emotion recognition accuracy. Twenty patients with SFD and twenty healthy, age, sex and education matched, controls were assessed with the Facially Expressed Emotion Labelling Test of facial emotion recognition and the 26-item Toronto Alexithymia Scale (TAS-26). Patients with SFD exhibited elevated alexithymia symptoms relative to healthy controls. Patients with SFD also recognized significantly fewer emotional expressions than did the healthy controls. However, the group difference in emotion recognition accuracy became nonsignificant once the influence of alexithymia was controlled for statistically. This suggests that the deficit in facial emotion recognition observed in the patients with SFD was most likely a consequence of concurrent alexithymia. Impaired facial emotion recognition observed in the patients with SFD could plausibly have a negative influence on these individuals' social functioning. (c) 2008 Wiley-Liss, Inc.

  12. Impaired facial emotion recognition in a ketamine model of psychosis.

    Science.gov (United States)

    Ebert, Andreas; Haussleiter, Ida Sibylle; Juckel, Georg; Brüne, Martin; Roser, Patrik

    2012-12-30

    Social cognitive disabilities are a common feature in schizophrenia. Given the role of glutamatergic neurotransmission in schizophrenia-related cognitive impairments, we investigated the effects of the glutamatergic NMDA receptor antagonist ketamine on facial emotion recognition. Eighteen healthy male subjects were tested on two occasions, once without medication and once after administration of subanesthetic doses of intravenous ketamine. Emotion recognition was examined using the Ekman 60 Faces Test. In addition, attention was measured by the Continuous Performance Test (CPT), and psychopathology was rated using the Psychotomimetic States Inventory (PSI). Ketamine produced a non-significant deterioration of global emotion recognition abilities. Specifically, the ability to correctly identify the facial expression of sadness was significantly reduced in the ketamine condition. These results were independent of psychotic symptoms and selective attention. Our results point to the involvement of the glutamatergic system in the ability to recognize facial emotions. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  13. Automatic recognition of facial movement for paralyzed face.

    Science.gov (United States)

    Wang, Ting; Dong, Junyu; Sun, Xin; Zhang, Shu; Wang, Shengke

    2014-01-01

    Facial nerve paralysis is a common disease due to nerve damage. Most approaches for evaluating the degree of facial paralysis rely on a set of different facial movements as commanded by doctors. Therefore, automatic recognition of the patterns of facial movement is fundamental to the evaluation of the degree of facial paralysis. In this paper, a novel method named Active Shape Models plus Local Binary Patterns (ASMLBP) is presented for recognizing facial movement patterns. Firstly, the Active Shape Models (ASMs) are used in the method to locate facial key points. According to these points, the face is divided into eight local regions. Then the descriptors of these regions are extracted by using Local Binary Patterns (LBP) to recognize the patterns of facial movement. The proposed ASMLBP method is tested on both the collected facial paralysis database with 57 patients and another publicly available database named the Japanese Female Facial Expression (JAFFE). Experimental results demonstrate that the proposed method is efficient for both paralyzed and normal faces.
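
    The region-wise LBP stage maps directly onto scikit-image primitives. The Python sketch below assumes the eight regions have already been derived from ASM key points and are passed in as bounding boxes.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def region_lbp_descriptor(gray_u8, regions, P=8, R=1):
    """Concatenated uniform-LBP histograms over facial regions.
    `regions` is a list of (r0, r1, c0, c1) boxes, e.g. the eight regions
    located via Active Shape Model key points; gray_u8 is an 8-bit image."""
    lbp = local_binary_pattern(gray_u8, P, R, method="uniform")
    n_bins = P + 2                 # uniform patterns plus the catch-all bin
    hists = []
    for r0, r1, c0, c1 in regions:
        h, _ = np.histogram(lbp[r0:r1, c0:c1], bins=n_bins, range=(0, n_bins))
        hists.append(h / max(h.sum(), 1))
    return np.concatenate(hists)
```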

  14. Mutual information-based facial expression recognition

    Science.gov (United States)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative region representation for the expression analysis task. The proposed approach relies on studies in psychology showing that most of the regions descriptive of, and responsible for, facial expression are located around certain face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using a Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) to gradient images to encode the salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region, while also reducing the feature vector dimension.
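
    The region-selection step can be approximated with scikit-learn's mutual-information estimator; the per-region summary statistic and the choice of k in this Python sketch are assumptions, since the abstract does not give the exact MI formulation.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_discriminative_regions(region_summaries, labels, k=10):
    """region_summaries: (n_samples, n_regions) matrix, e.g. one mean LBP
    code per candidate region. Keep the k regions whose summaries share the
    most mutual information with the expression label."""
    mi = mutual_info_classif(region_summaries, labels)
    return np.argsort(mi)[::-1][:k]
```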

  15. Development of Facial Emotion Recognition in Childhood: Age-related Differences in a Shortened Version of the Facial Expression of Emotions - Stimuli and Tests. Data from an ongoing study.

    NARCIS (Netherlands)

    Coenen, Maraike; Aarnoudse, Ceciel; Braams, O.; Veenstra, Wencke S.

    2014-01-01

    OBJECTIVE: Facial emotion recognition is a crucial aspect of social cognition and deficits have been shown to be related to psychiatric disorders in adults and children. However, the development of facial emotion recognition is less clear (Herba & Philips, 2004) and an appropriate instrument to meas

  17. Portable Facial Recognition Jukebox Using Fisherfaces (FRJ)

    Directory of Open Access Journals (Sweden)

    Richard Mo

    2016-03-01

    A portable real-time facial recognition system that plays personalized music based on the identified person's preferences was developed. The system is called the Portable Facial Recognition Jukebox Using Fisherfaces (FRJ). A Raspberry Pi was used as the hardware platform for its relatively low cost and ease of use. The system uses the OpenCV open-source library to implement the Fisherfaces facial recognition algorithm, and the Simple DirectMedia Layer (SDL) library for playing the sound files. FRJ is cross-platform and runs on both Windows and Linux operating systems. The source code was written in C++. The accuracy of the recognition program can reach up to 90% under controlled lighting and distance conditions. The user is able to train up to six different people (as many as will fit in the GUI). When implemented on a Raspberry Pi, the system is able to go from image capture to facial recognition in an average time of 200 ms.
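
    FRJ itself is written in C++, but the same OpenCV Fisherfaces API is exposed in Python (via the contrib build); the playlist lookup below is a hypothetical stand-in for the jukebox logic.

```python
import cv2
import numpy as np

# Requires opencv-contrib-python for the cv2.face module.
def train_fisherfaces(face_imgs, labels):
    """face_imgs: equally sized grayscale crops; labels: integer person IDs."""
    model = cv2.face.FisherFaceRecognizer_create()
    model.train(face_imgs, np.asarray(labels, dtype=np.int32))
    return model

def pick_track(model, face_img, playlists):
    """Map the predicted identity to that person's playlist (hypothetical)."""
    label, confidence = model.predict(face_img)
    return playlists.get(label)
```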

  18. Fully automatic recognition of the temporal phases of facial actions.

    Science.gov (United States)

    Valstar, Michel F; Pantic, Maja

    2012-02-01

    Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)] that compound expressions. AUs are agnostic, leaving the inference about conveyed intent to higher order decision making (e.g., emotion recognition). The proposed fully automatic method not only allows the recognition of 22 AUs but also explicitly models their temporal characteristics (i.e., sequences of temporal segments: neutral, onset, apex, and offset). To do so, it uses a facial point detector based on Gabor-feature-based boosted classifiers to automatically localize 20 facial fiducial points. These points are tracked through a sequence of images using a method called particle filtering with factorized likelihoods. To encode AUs and their temporal activation models based on the tracking data, it applies a combination of GentleBoost, support vector machines, and hidden Markov models. We attain an average AU recognition rate of 95.3% when tested on a benchmark set of deliberately displayed facial expressions and 72% when tested on spontaneous expressions.

  19. Facial Recognition in Uncontrolled Conditions for Information Security

    Directory of Open Access Journals (Sweden)

    Qinghan Xiao

    2010-01-01

    With the increasing use of computers nowadays, information security is becoming an important issue for private companies and government organizations. Various security technologies have been developed, such as authentication, authorization, and auditing. However, once a user logs on, it is assumed that the system remains under the control of the same person. To address this flaw, we developed a demonstration system that uses facial recognition technology to periodically verify the identity of the user. If the authenticated user's face disappears, the system automatically performs a log-off or screen-lock operation. This paper presents our further efforts in developing image preprocessing algorithms and dealing with angled facial images. The objective is to improve the accuracy of facial recognition under uncontrolled conditions. To compare the results with others, the frontal pose subset of the Face Recognition Technology (FERET) database was used for the test. The experiments showed that the proposed algorithms provided promising results.
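
    The periodic-verification idea reduces to a capture-detect-lock cycle. The Python sketch below only checks face presence with a stock OpenCV Haar cascade; full re-identification of the logged-in user, and the lock_screen callback, are assumptions rather than the paper's implementation.

```python
import time
import cv2

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def user_present(frame):
    """True if at least one frontal face is detected in the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return len(CASCADE.detectMultiScale(gray, 1.3, 5)) > 0

def monitor(lock_screen, interval_s=5.0):
    """Poll the webcam; lock the session when the user's face disappears."""
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if not ok or not user_present(frame):
                lock_screen()          # hypothetical OS-specific callback
                break
            time.sleep(interval_s)
    finally:
        cap.release()
```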

  1. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson's Disease

    National Research Council Canada - National Science Library

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

    .... The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral...

  2. Face Recognition Based on Facial Features

    Directory of Open Access Journals (Sweden)

    Muhammad Sharif

    2012-08-01

    Over the last decade, several different methods have been planned and developed for face recognition, one of the most stimulating areas of image processing. Face recognition has various applications in security systems and crime investigation systems. The study comprises three phases: face detection, facial feature extraction, and face recognition. The first phase is face detection, where the region of interest, i.e., the feature region, is extracted. The second phase is feature extraction, in which the facial features, i.e., the eyes, nose, and lips, are extracted from the detected face area. The last module is the face recognition phase, which uses the extracted left eye for recognition by combining Eigenfeatures and Fisherfeatures.

  3. Computer Recognition of Facial Profiles

    Science.gov (United States)

    1974-08-01

  4. Relation between facial affect recognition and configural face processing in antipsychotic-free schizophrenia.

    Science.gov (United States)

    Fakra, Eric; Jouve, Elisabeth; Guillaume, Fabrice; Azorin, Jean-Michel; Blin, Olivier

    2015-03-01

    Deficits in facial affect recognition are a well-documented impairment in schizophrenia, closely connected to social outcome. This deficit could be related to psychopathology, but also to a broader dysfunction in processing facial information. In addition, patients with schizophrenia make inadequate use of configural information, a type of processing that relies on the spatial relationships between facial features. To date, no study has specifically examined the link between symptoms and misuse of configural information in the facial affect recognition deficit. Unmedicated schizophrenia patients (n = 30) and matched healthy controls (n = 30) performed a facial affect recognition task and a face inversion task, which tests the aptitude to rely on configural information. In patients, regressions were carried out between facial affect recognition, symptom dimensions, and the inversion effect. Patients, compared with controls, showed a deficit in facial affect recognition and a lower inversion effect. Negative symptoms and a lower inversion effect could account for 41.2% of the variance in facial affect recognition. This study confirms the presence of a deficit in facial affect recognition, and also of dysfunctional manipulation of configural information, in antipsychotic-free patients. Negative symptoms and poor processing of configural information explained a substantial part of the deficient recognition of facial affect. We speculate that this deficit may be caused by several factors, among which psychopathology and failure to correctly manipulate configural information stand independently. PsycINFO Database Record (c) 2015 APA, all rights reserved.

  5. Facial Recognition Technology: An analysis with scope in India

    CERN Document Server

    Thorat, S B; Dandale, Jyoti P

    2010-01-01

    A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. One way to do this is to compare selected facial features from the image against a facial database. Such systems are typically used in security applications and can be compared to other biometrics such as fingerprint or iris recognition systems. In this paper we focus on 3-D facial recognition systems and biometric facial recognition systems. We critique facial recognition systems, discussing their effectiveness and weaknesses. This paper also discusses the scope for such recognition systems in India.

  6. Altered kinematics of facial emotion expression and emotion recognition deficits are unrelated in Parkinson's disease

    Directory of Open Access Journals (Sweden)

    Matteo Bologna

    2016-12-01

    Background: Altered emotional processing, including reduced facial emotion expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques, and it is not known whether altered facial expression and recognition in PD are related. Objective: To investigate possible deficits in facial emotion expression and emotion recognition, and their relationship, if any, in patients with PD. Methods: Eighteen patients with PD and 16 healthy controls were enrolled in the study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analysed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and the clinical and demographic data of patients were evaluated using Spearman's test and multiple regression analysis. Results: The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and the clinical and demographic data of patients (all Ps > 0.05). Conclusion: The present results provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.

  7. Recognition of facial and musical emotions in Parkinson's disease.

    Science.gov (United States)

    Saenz, A; Doé de Maindreville, A; Henry, A; de Labbey, S; Bakchine, S; Ehrlé, N

    2013-03-01

    Patients with amygdala lesions have been found to be impaired in recognizing the emotion of fear both from faces and from music. In patients with Parkinson's disease (PD), impairment in the recognition of emotions from facial expressions has been reported for disgust, fear, sadness, and anger, but no study had yet investigated this population for the recognition of emotions from both face and music. The ability to recognize basic universal emotions (fear, happiness, and sadness) from both face and music was investigated in 24 medicated patients with PD and 24 healthy controls. The patient group was tested for language (verbal fluency tasks), memory (digit and spatial span), executive functions (Similarities and Picture Completion subtests of the WAIS III, Brixton and Stroop tests), and visual attention (Bells test), and completed self-assessment tests for anxiety and depression. Results showed that the PD group was significantly impaired in the recognition of both fear and sadness from facial expressions, whereas their performance in recognizing emotions from musical excerpts did not differ from that of the control group. The scores for fear and sadness recognition from faces were correlated neither with scores on tests of executive and cognitive functions, nor with scores on the self-assessment scales. We attribute the observed dissociation to the modality (visual vs. auditory) of presentation and to the ecological value of the musical stimuli used. We discuss the relevance of our findings for the care of patients with PD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.

  8. Autonomous facial recognition system inspired by human visual system based logarithmical image visualization technique

    Science.gov (United States)

    Wan, Qianwen; Panetta, Karen; Agaian, Sos

    2017-05-01

    Autonomous facial recognition systems are widely used in real-life applications, such as homeland border security, law enforcement identification and authentication, and video-based surveillance analysis. Issues like low image quality and non-uniform illumination, as well as variations in pose and facial expression, can impair the performance of recognition systems. To address the non-uniform illumination challenge, we present a novel robust autonomous facial recognition system inspired by the human visual system, based on the so-called logarithmical image visualization technique. In this paper, the proposed method, for the first time, couples the logarithmical image visualization technique with the local binary pattern to perform discriminative feature extraction for a facial recognition system. The Yale database, the Yale-B database, and the AT&T database are used for accuracy and efficiency testing in computer simulations. The extensive simulations demonstrate the method's efficiency, accuracy, and robustness to illumination variation for facial recognition.
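
    The exact HVS-based logarithmical transform is not given in the abstract, so the Python sketch below substitutes a plain log compression before the LBP step to convey the illumination-normalization idea.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def log_enhance(gray):
    """Logarithmic dynamic-range compression, returned as an 8-bit image
    (a simplification of the paper's HVS-based visualization)."""
    g = gray.astype(float)
    g = np.log1p(g) / np.log1p(max(g.max(), 1.0))
    return (g * 255).astype(np.uint8)

def illumination_robust_lbp(gray, P=8, R=1):
    """Uniform-LBP histogram over the log-enhanced face image."""
    lbp = local_binary_pattern(log_enhance(gray), P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2))
    return hist / max(hist.sum(), 1)
```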

  9. Facial emotion recognition is inversely correlated with tremor severity in essential tremor.

    Science.gov (United States)

    Auzou, Nicolas; Foubert-Samier, Alexandra; Dupouy, Sandrine; Meissner, Wassilios G

    2014-04-01

    We here assess limbic and orbitofrontal control in 20 patients with essential tremor (ET) and 18 age-matched healthy controls using the Ekman Facial Emotion Recognition Task and the IOWA Gambling Task. Our results show an inverse relation between facial emotion recognition and tremor severity. ET patients also showed worse performance in joy and fear recognition, as well as subtle abnormalities in risk detection, but these differences did not reach significance after correction for multiple testing.

  10. Meta-Analysis of the First Facial Expression Recognition Challenge

    NARCIS (Netherlands)

    Valstar, M.F.; Mehu, M.; Jiang, Bihan; Pantic, Maja; Scherer, K.

    2012-01-01

    Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability

  11. Violent Media Consumption and the Recognition of Dynamic Facial Expressions

    Science.gov (United States)

    Kirsh, Steven J.; Mounts, Jeffrey R. W.; Olczak, Paul V.

    2006-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph.…

  14. Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.

    Science.gov (United States)

    Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S

    2007-01-01

    People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations, (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated the assessed muscles, whereas the chew manipulation activated them only intermittently. Further, expressing happiness generated the most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.

  15. Facial Affect Recognition and Social Anxiety in Preschool Children

    Science.gov (United States)

    Ale, Chelsea M.; Chorney, Daniel B.; Brice, Chad S.; Morris, Tracy L.

    2010-01-01

    Research relating anxiety and facial affect recognition has focused mostly on school-aged children and adults and has yielded mixed results. The current study sought to demonstrate an association among behavioural inhibition and parent-reported social anxiety, shyness, social withdrawal and facial affect recognition performance in 30 children,…

  16. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2-D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view, where further recognition of the expressions can be performed using a state-of-the-art…

  18. Influences on Facial Emotion Recognition in Deaf Children

    Science.gov (United States)

    Sidera, Francesc; Amadó, Anna; Martínez, Laura

    2017-01-01

    This exploratory research is aimed at studying facial emotion recognition abilities in deaf children and how they relate to linguistic skills and the characteristics of deafness. A total of 166 participants (75 deaf) aged 3-8 years were administered the following tasks: facial emotion recognition, naming vocabulary and cognitive ability. The…

  20. Application of data fusion in computer facial recognition

    Directory of Open Access Journals (Sweden)

    Wang Ai Qiang

    2013-11-01

    Full Text Available The recognition rate of any single recognition method is insufficient in computer facial recognition. We propose a new fused facial recognition method using data fusion technology: a variety of recognition algorithms are combined into a fusion-based face recognition system that improves the recognition rate in several ways. Three levels of fusion are considered: data-level fusion, feature-level fusion and decision-level fusion. The data level uses a simple weighted-average algorithm, which is easy to implement; an artificial neural network algorithm is selected for the feature level, and a fuzzy reasoning algorithm is used for the decision level. Finally, we compared the system with the BP neural network algorithm on the MATLAB experimental platform. The results show that the recognition rate is greatly improved after adopting data fusion technology in computer facial recognition.
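
    The data-level step this abstract mentions, a simple weighted average, can be sketched in a few lines of Python; the scores, weights and acceptance threshold below are illustrative assumptions, not values from the paper.

        import numpy as np

        def fuse_scores(scores, weights):
            """Weighted-average (data-level) fusion of match scores from several recognizers."""
            scores = np.asarray(scores, dtype=float)
            weights = np.asarray(weights, dtype=float)
            return float(np.dot(scores, weights) / weights.sum())

        # Match scores in [0, 1] from, e.g., an eigenface, an LBP and a neural matcher.
        fused = fuse_scores([0.72, 0.65, 0.80], [0.2, 0.3, 0.5])
        accept = fused > 0.5                   # decision threshold (assumption)
        print(f"fused score = {fused:.3f}, accept = {accept}")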

  1. Comparing the Recognition of Emotional Facial Expressions in Patients with Obsessive-Compulsive Disorder and Major Depressive Disorder

    Directory of Open Access Journals (Sweden)

    Abdollah Ghasempour

    2014-05-01

    Full Text Available Background: Recognition of emotional facial expressions is one of the psychological factors involved in obsessive-compulsive disorder (OCD) and major depressive disorder (MDD). The aim of the present study was to compare the ability to recognize emotional facial expressions in patients with obsessive-compulsive disorder and major depressive disorder. Materials and Methods: The present study is a cross-sectional and ex-post facto investigation (causal-comparative method). Forty participants (20 patients with OCD, 20 patients with MDD) were selected through an available sampling method from the clients referred to the Tabriz Bozorgmehr clinic. Data were collected through a Structured Clinical Interview and the Recognition of Emotional Facial States test. The data were analyzed utilizing MANOVA. Results: The obtained results showed that there is no significant difference between groups in the mean scores for recognition of the emotional states of surprise, sadness, happiness and fear, but the groups differed significantly in the mean scores for diagnosing the disgust and anger states (p<0.05). Conclusion: Patients suffering from OCD and MDD show equal ability to recognize surprise, sadness, happiness and fear. However, the former are less competent in recognizing disgust and anger than the latter.

  2. Facial expression recognition based on improved deep belief networks

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method of facial expression recognition based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. This method uses LBP to extract features, and then uses the improved deep belief networks as the detector and classifier operating on the LBP features. The combination of LBP and improved deep belief networks is thus realized for facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate is significantly improved.

  3. Slowing down facial movements and vocal sounds enhances facial expression recognition and facial-vocal imitation in children with autism

    OpenAIRE

    Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-Rom, under audio or silent conditions, and under dynamic visual conditions (slowly, very slowly, at normal speed) plus a st...

  4. Facial Expression Recognition in Nonvisual Imagery

    Science.gov (United States)

    Olague, Gustavo; Hammoud, Riad; Trujillo, Leonardo; Hernández, Benjamín; Romero, Eva

    This chapter presents two novel approaches that allow computer vision applications to perform human facial expression recognition (FER). From a problem standpoint, we focus on FER beyond the human visual spectrum, in long-wave infrared imagery, thus allowing us to offer illumination-independent solutions to this important human-computer interaction problem. From a methodological standpoint, we introduce two different feature extraction techniques: a principal component analysis-based approach with automatic feature selection and one based on texture information selected by an evolutionary algorithm. In the former, facial features are selected based on interest point clusters, and classification is carried out using eigenfeature information; in the latter, an evolutionary-based learning algorithm searches for optimal regions of interest and texture features based on classification accuracy. Both of these approaches use a support vector machine committee for classification. Results show effective performance for both techniques, from which we can conclude that thermal imagery contains worthwhile information for the FER problem beyond the human visual spectrum.
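
    The first of the two approaches, eigenfeature extraction followed by support vector machines, can be approximated with a short scikit-learn pipeline; the random arrays below stand in for real (thermal) face images, and the built-in one-vs-one SVC plays the role of the paper's SVM committee.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.random((200, 64 * 64))         # one flattened face image per row (placeholder)
        y = rng.integers(0, 3, size=200)       # e.g. 3 expression classes

        clf = make_pipeline(
            PCA(n_components=50, whiten=True), # eigenfeature projection
            SVC(kernel="rbf", C=10.0),         # one-vs-one multiclass ~ a small SVM committee
        )
        clf.fit(X[:150], y[:150])
        print("held-out accuracy:", clf.score(X[150:], y[150:]))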

  5. FACIAL LANDMARKING LOCALIZATION FOR EMOTION RECOGNITION USING BAYESIAN SHAPE MODELS

    Directory of Open Access Journals (Sweden)

    Hernan F. Garcia

    2013-02-01

    Full Text Available This work presents a framework for emotion recognition based on facial expression analysis using Bayesian Shape Models (BSM) for facial landmark localization. The Facial Action Coding System (FACS)-compliant facial feature tracking is based on the Bayesian Shape Model, which estimates the parameters of the model with an implementation of the EM algorithm. We describe the characterization methodology from the parametric model and evaluate the accuracy of feature detection and of estimation of the parameters associated with facial expressions, analyzing its robustness to pose and local variations. A methodology for emotion characterization is then introduced to perform the recognition. The experimental results show that the proposed model can effectively detect the different facial expressions, outperforming conventional approaches for emotion recognition and obtaining high performance in estimating the emotion present in a given subject. The model and characterization methodology correctly detected the emotion type in 95.6% of the cases.

  6. [Association between intelligence development and facial expression recognition ability in children with autism spectrum disorder].

    Science.gov (United States)

    Pan, Ning; Wu, Gui-Hua; Zhang, Ling; Zhao, Ya-Fen; Guan, Han; Xu, Cai-Juan; Jing, Jin; Jin, Yu

    2017-03-01

    To investigate the features of intelligence development and facial expression recognition ability, and the association between them, in children with autism spectrum disorder (ASD). A total of 27 ASD children aged 6-16 years (ASD group, full intelligence quotient >70) and age- and gender-matched normally developed children (control group) were enrolled. The Wechsler Intelligence Scale for Children, Fourth Edition and Chinese Static Facial Expression Photos were used for intelligence evaluation and the facial expression recognition test. Compared with the control group, the ASD group had significantly lower scores of full intelligence quotient, verbal comprehension index, perceptual reasoning index (PRI), processing speed index (PSI), and working memory index (WMI) (P<0.05). These results indicate that ASD children have delayed intelligence development compared with normally developed children and impaired expression recognition ability. Perceptual reasoning and working memory abilities are positively correlated with expression recognition ability, which suggests that insufficient perceptual reasoning and working memory abilities may be important factors affecting facial expression recognition ability in ASD children.

  7. Automatic recognition of emotions from facial expressions

    Science.gov (United States)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificial intelligence (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in the entertainment industry, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing accuracy. In addition, we have extended SVM to multiple dimensions while retaining its high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).

  8. Facial Expression Recognition Using Stationary Wavelet Transform Features

    Directory of Open Access Journals (Sweden)

    Huma Qayyum

    2017-01-01

    Full Text Available Humans use facial expressions to convey personal feelings, and facial expressions need to be recognized automatically to build control and interactive applications. Accurate feature extraction is one of the key steps in an automatic facial expression recognition system. Current frequency-domain facial expression recognition systems have not fully utilized facial elements and muscle movements for recognition. In this paper, the stationary wavelet transform is used to extract features for facial expression recognition because of its good localization characteristics in both the spectral and spatial domains. More specifically, a combination of the horizontal and vertical subbands of the stationary wavelet transform is used, as these subbands contain the muscle movement information for the majority of facial expressions. Feature dimensionality is further reduced by applying the discrete cosine transform to these subbands. The selected features are then passed into a feed-forward neural network trained with the backpropagation algorithm. Average recognition rates of 98.83% and 96.61% are achieved for the JAFFE and CK+ datasets, respectively, and an accuracy of 94.28% is achieved on a locally recorded MS-Kinect dataset. The proposed technique is thus very promising for facial expression recognition when compared to other state-of-the-art techniques.
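
    A minimal Python sketch of the feature pipeline just described, with sizes and network shape chosen for illustration only: a level-1 stationary wavelet transform supplies the horizontal and vertical subbands, a 2-D DCT compresses each, and a backpropagation-trained feed-forward network classifies.

        import numpy as np
        import pywt
        from scipy.fftpack import dct
        from sklearn.neural_network import MLPClassifier

        def swt_dct_features(gray, keep=64):
            """Keep the cH/cV subbands of a level-1 SWT, compressed by a 2-D DCT."""
            (cA, (cH, cV, cD)), = pywt.swt2(gray, "haar", level=1)
            feats = []
            for band in (cH, cV):              # muscle-movement subbands
                coeffs = dct(dct(band, axis=0, norm="ortho"), axis=1, norm="ortho")
                feats.append(coeffs.flatten()[:keep])   # leading DCT coefficients
            return np.concatenate(feats)

        rng = np.random.default_rng(0)
        faces = rng.random((100, 64, 64))      # placeholder face images
        labels = rng.integers(0, 7, size=100)  # 7 expression classes
        X = np.array([swt_dct_features(f) for f in faces])

        net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)  # trained by backprop
        net.fit(X, labels)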

  9. Altered Kinematics of Facial Emotion Expression and Emotion Recognition Deficits Are Unrelated in Parkinson's Disease.

    Science.gov (United States)

    Bologna, Matteo; Berardelli, Isabella; Paparella, Giulia; Marsili, Luca; Ricciardi, Lucia; Fabbrini, Giovanni; Berardelli, Alfredo

    2016-01-01

    Altered emotional processing, including reduced facial emotion expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques, and it is not known whether altered facial expression and recognition in PD are related. To investigate possible deficits in facial emotion expression and emotion recognition, and their relationship, if any, eighteen patients with PD and 16 healthy controls were enrolled in this study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analyzed using the facial action coding system; possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data were evaluated using Spearman's test and multiple regression analysis. The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps < 0.05). Patients also had a lower Ekman global score and lower disgust, sadness, and fear sub-scores than healthy controls (all Ps < 0.05), yet kinematic abnormalities of facial emotion expression and emotion recognition deficits were unrelated in patients (all Ps > 0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps > 0.05). The results of this study provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.

  10. Facial Emotion Recognition in Bipolar Disorder and Healthy Aging.

    Science.gov (United States)

    Altamura, Mario; Padalino, Flavia A; Stella, Eleonora; Balzotti, Angela; Bellomo, Antonello; Palumbo, Rocco; Di Domenico, Alberto; Mammarella, Nicola; Fairfield, Beth

    2016-03-01

    Emotional face recognition is impaired in bipolar disorder, but it is not clear whether this is specific for the illness. Here, we investigated how aging and bipolar disorder influence dynamic emotional face recognition. Twenty older adults, 16 bipolar patients, and 20 control subjects performed a dynamic affective facial recognition task and a subsequent rating task. Participants pressed a key as soon as they were able to discriminate whether the neutral face was assuming a happy or angry facial expression and then rated the intensity of each facial expression. Results showed that older adults recognized happy expressions faster, whereas bipolar patients recognized angry expressions faster. Furthermore, both groups rated emotional faces more intensely than did the control subjects. This study is one of the first to compare how aging and clinical conditions influence emotional facial recognition and underlines the need to consider the role of specific and common factors in emotional face recognition.

  11. Meta-Analysis of the First Facial Expression Recognition Challenge.

    Science.gov (United States)

    Valstar, M F; Mehu, M; Bihan Jiang; Pantic, M; Scherer, K

    2012-08-01

    Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability have received some attention; for instance, there exist a number of commonly used facial expression databases. However, lack of a commonly accepted evaluation protocol and, typically, lack of sufficient details needed to reproduce the reported individual results make it difficult to compare systems. This, in turn, hinders the progress of the field. A periodical challenge in facial expression recognition would allow such a comparison on a level playing field. It would provide an insight on how far the field has come and would allow researchers to identify new goals, challenges, and targets. This paper presents a meta-analysis of the first such challenge in automatic recognition of facial expressions, held during the IEEE conference on Face and Gesture Recognition 2011. It details the challenge data, evaluation protocol, and the results attained in two subchallenges: AU detection and classification of facial expression imagery in terms of a number of discrete emotion categories. We also summarize the lessons learned and reflect on the future of the field of facial expression recognition in general and on possible future challenges in particular.

  12. The review and results of different methods for facial recognition

    Science.gov (United States)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide range of potential applications. As a unique technology in biometric identification, facial recognition represents a significant advance, since it can operate without the cooperation of the people under detection. Hence, facial recognition is being taken into defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method that achieves more accurate localization on specific databases; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm designed to handle images with severe occlusion and images with large head poses; (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  13. [Recognition of facial emotions and theory of mind in schizophrenia: could the theory of mind deficit be due to the non-recognition of facial emotions?].

    Science.gov (United States)

    Besche-Richard, C; Bourrin-Tisseron, A; Olivier, M; Cuervo-Lombard, C-V; Limosin, F

    2012-06-01

    Deficits in the recognition of facial emotions and in the attribution of mental states are now well documented in schizophrenic patients. However, the link between these two complex cognitive functions is not clearly understood, especially in schizophrenia. In this study, we tested the link between the recognition of facial emotions and the capacity for mentalization, notably the attribution of beliefs, in healthy and schizophrenic participants. We hypothesized that the level of performance in recognition of facial emotions, compared to working memory and executive functioning, was the best predictor of the capacity to attribute a belief. Twenty clinically stabilized schizophrenic participants according to DSM-IV-TR (mean age: 35.9 years, S.D. 9.07; mean education level: 11.15 years, S.D. 2.58), receiving neuroleptic or antipsychotic medication, participated in the study. They were matched on age (mean age: 36.3 years, S.D. 10.9) and educational level (mean educational level: 12.10, S.D. 2.25) with 30 healthy participants. All the participants were evaluated with a pool of tasks testing the recognition of facial emotions (the faces of Baron-Cohen), the attribution of beliefs (two first-order and two second-order stories), working memory (the digit span of the WAIS-III and the Corsi test) and executive functioning (Trail Making Test A and B, Wisconsin Card Sorting Test brief version). Comparing schizophrenic and healthy participants, our results confirmed a difference between performance on the recognition of facial emotions and on the attribution of beliefs. Simple linear regression showed that the recognition of facial emotions, compared to performance on working memory and executive functioning, was the best predictor of performance on the theory of mind stories. Our results confirmed, in a sample of schizophrenic patients, the deficits in the recognition of facial emotions and in the attribution of beliefs.

  14. Primary vision and facial emotion recognition in early Parkinson's disease.

    Science.gov (United States)

    Hipp, Géraldine; Diederich, Nico J; Pieria, Vannina; Vaillant, Michel

    2014-03-15

    In early stages of idiopathic Parkinson's disease (IPD), lower-order vision (LOV) deficits, including reduced colour and contrast discrimination, have been consistently reported. Data are less conclusive concerning higher-order vision (HOV) deficits, especially for facial emotion recognition (FER); however, a link between both visual levels has been hypothesized. To screen for both levels of visual impairment in early IPD, we prospectively recruited 28 IPD patients with a disease duration of 1.4 ± 0.8 years and 25 healthy controls. LOV was evaluated by the Farnsworth-Munsell 100 Hue Test, Vis-Tech and Pelli-Robson test. HOV was examined by the Ekman 60 Faces Test and part A of the Visual Object and Space recognition test. IPD patients performed worse than controls on almost all LOV tests. The most prominent difference was seen for contrast perception at the lowest spatial frequency (p=0.0002). Concerning FER, IPD patients showed reduced recognition of "sadness" (p=0.01). "Fear" perception was correlated with perception of low contrast sensitivity in IPD patients within the lowest performance quartile; controls showed a much stronger link between "fear" perception and low contrast detection. At the early IPD stage there are marked deficits of LOV performance, while HOV performance is still intact, with the exception of reduced recognition of "sadness". At this stage, IPD patients still seem to compensate for the deficient input of low contrast sensitivity, known to be pivotal for the appreciation of negative facial emotions and confirmed as such for healthy controls in this study. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  15. Facial expressions recognition with an emotion expressive robotic head

    Science.gov (United States)

    Doroftei, I.; Adascalitei, F.; Lefeber, D.; Vanderborght, B.; Doroftei, I. A.

    2016-08-01

    The purpose of this study is to present the preliminary steps in facial expression recognition with a new version of an expressive social robotic head. In a first phase, our main goal was to reach a minimum level of emotional expressiveness in order to obtain nonverbal communication between the robot and humans by building six basic facial expressions. To evaluate the facial expressions, the robot was used in some preliminary user studies among children and adults.

  16. Facial emotion recognition in myotonic dystrophy type 1 correlates with CTG repeat expansion

    Directory of Open Access Journals (Sweden)

    Stefan Winblad

    2009-04-01

    Full Text Available We investigated the ability of patients with myotonic dystrophy type 1 (DM-1) to recognise basic facial emotions. We also explored the relationship between facial emotion recognition, neuropsychological data, personality, and CTG repeat expansion data in the DM-1 group. In total, 50 patients with DM-1 (28 women and 22 men) participated, with 41 healthy controls. Recognition of facial emotional expressions was assessed using photographs of basic emotions. A set of tests measured cognition and personality dimensions, and CTG repeat size was quantified in blood lymphocytes. Patients with DM-1 showed impaired recognition of facial emotions compared with controls. A significant negative correlation was found between the total score of emotion recognition in a forced-choice task and CTG repeat size. Furthermore, specific cognitive functions (vocabulary, visuospatial construction ability, and speed) and personality dimensions (reward dependence and cooperativeness) correlated with scores on the forced-choice emotion recognition task. These findings revealed a CTG repeat-dependent facial emotion recognition deficit in the DM-1 group, which was associated with specific neuropsychological functions. Furthermore, a correlation was found between facial emotion recognition ability and personality dimensions associated with sociability. This adds a new clinically relevant dimension to the cognitive deficits associated with DM-1.

  17. Face recognition using facial expression: a novel approach

    Science.gov (United States)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

    Facial expressions are undoubtedly the most effective form of nonverbal communication. The face has always been the equation of a person's identity; it draws the demarcation line between identity and extinction. Each line on the face adds an attribute to the identity, and these lines become prominent when we experience an emotion yet do not change completely with age. In this paper we propose a new technique for face recognition which focuses on the facial expressions of the subject to identify his face. This is a grey area on which not much light has been thrown earlier. According to earlier research it is difficult to alter one's natural expression, so our technique will be beneficial for identifying occluded or intentionally disguised faces. The test results of the experiments conducted show that this technique can give a new direction to the field of face recognition, providing a strong base for the area and serving as a core method for critical defense-security-related issues.

  18. Facial emotional recognition in schizophrenia: preliminary results of the virtual reality program for facial emotional recognition

    Directory of Open Access Journals (Sweden)

    Teresa Souto

    2013-01-01

    Full Text Available BACKGROUND: Significant deficits in emotional recognition and social perception characterize patients with schizophrenia and have a direct negative impact both on interpersonal relationships and on social functioning. Virtual reality, as a methodological resource, has high potential for the assessment and training of skills in people suffering from mental illness. OBJECTIVES: To present preliminary results of a facial emotion recognition assessment designed for patients with schizophrenia, using 3D avatars and virtual reality. METHODS: Presentation of 3D avatars which reproduce images developed with the FaceGen® software and integrated in a three-dimensional virtual environment. Each avatar was presented to a group of 12 patients with schizophrenia and a reference group of 12 subjects without psychiatric pathology. RESULTS: The results show that the facial emotions of happiness and anger are best recognized by both groups and that the major difficulties arise in the recognition of fear and disgust. Frontal alpha electroencephalography variations were found during the presentation of anger and disgust stimuli among patients with schizophrenia. DISCUSSION: The evaluation module of the program developed can be of value both for patient and therapist, allowing task execution in a non-anxiogenic environment that is nevertheless similar to the actual experience.

  19. Automatic Recognition of Facial Actions in Spontaneous Expressions

    Directory of Open Access Journals (Sweden)

    Marian Stewart Bartlett

    2006-09-01

    Full Text Available Spontaneous facial expressions differ from posed expressions both in which muscles are moved and in the dynamics of the movement. Advances in the field of automatic facial expression measurement will require development and assessment on spontaneous behavior. Here we present preliminary results on a task of facial action detection in spontaneous facial expressions. We employ a user-independent, fully automatic system for real-time recognition of facial actions from the Facial Action Coding System (FACS). The system automatically detects frontal faces in the video stream and codes each frame with respect to 20 action units. The approach applies machine learning methods, such as support vector machines and AdaBoost, to texture-based image representations. The output margin of the learned classifiers predicts action unit intensity. Frame-by-frame intensity measurements will enable investigations into facial expression dynamics that were previously intractable by human coding.
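
    The margin-as-intensity idea in this abstract can be sketched as follows, with a linear SVM standing in for the paper's per-AU classifiers; the texture features and labels below are random placeholders.

        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        X_train = rng.random((500, 200))       # texture features per video frame
        y_train = rng.integers(0, 2, size=500) # AU present / absent labels

        au_detector = LinearSVC(C=1.0)         # one detector per action unit
        au_detector.fit(X_train, y_train)

        frames = rng.random((10, 200))         # features from new video frames
        intensity = au_detector.decision_function(frames)  # output margin ~ AU intensity
        present = intensity > 0                # thresholded frame-by-frame detection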

  20. People with chronic facial pain perform worse than controls at a facial emotion recognition task, but it is not all about the emotion.

    Science.gov (United States)

    von Piekartz, H; Wallwork, S B; Mohr, G; Butler, D S; Moseley, G L

    2015-04-01

    Alexithymia, or a lack of emotional awareness, is prevalent in some chronic pain conditions and has been linked to poor recognition of others' emotions. Recognising others' emotions from their facial expression involves both emotional and motor processing, but the possible contribution of motor disruption has not been considered. It is possible that poor performance on emotional recognition tasks could reflect problems with emotional processing, motor processing or both. We hypothesised that people with chronic facial pain would be less accurate in recognising others' emotions from facial expressions, would be less accurate in a motor imagery task involving the face, and that performance on both tasks would be positively related. A convenience sample of 19 people (15 females) with chronic facial pain and 19 gender-matched controls participated. They undertook two tasks; in the first task, they identified the facial emotion presented in a photograph. In the second, they identified whether the person in the image had a facial feature pointed towards their left or right side, a well-recognised paradigm to induce implicit motor imagery. People with chronic facial pain performed worse than controls at both the Facially Expressed Emotion Labelling (FEEL) emotion recognition task and the left/right facial expression task, and performance covaried within participants. We propose that disrupted motor processing may underpin, or at least contribute to, the difficulty that facial pain patients have in emotion recognition, and that further research to test this proposal is warranted.

  1. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
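
    The detection step mentioned above, Haar classifiers over a live video stream, corresponds to a standard OpenCV idiom; the sketch below shows face detection only, and a full OFS system would add landmarking and expression classification on each detected face.

        import cv2

        cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        face_cascade = cv2.CascadeClassifier(cascade_path)

        cap = cv2.VideoCapture(0)              # default webcam
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            for (x, y, w, h) in faces:         # outline each detected face
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imshow("faces", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):   # press q to quit
                break
        cap.release()
        cv2.destroyAllWindows()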

  2. Facial emotion recognition in psychiatrists and influences of their therapeutic identification on that ability.

    Science.gov (United States)

    Dalkıran, Mihriban; Gultekin, Gozde; Yuksek, Erhan; Varsak, Nalan; Gul, Hesna; Kıncır, Zeliha; Tasdemir, Akif; Emul, Murat

    2016-08-01

    Although emotional cues like facial emotion expressions seem to be important in social interaction, there is no specific training on emotional cues for psychiatrists. Here, we aimed to investigate psychiatrists' facial emotion recognition ability and its relation to their clinical identification as psychotherapy- or psychopharmacology-oriented and as adult or child-adolescent psychiatrists. A Facial Emotion Recognition Test, constructed from a set of photographs (happy, sad, fearful, angry, surprised, disgusted and neutral faces) from Ekman and Friesen's series, was administered to 130 psychiatrists. Psychotherapy-oriented adult psychiatrists were significantly better at recognizing the sad facial emotion (p=.003) than psychopharmacologists, while no significant differences were detected according to therapeutic orientation among child-adolescent psychiatrists (for each, p>.05). Adult psychiatrists were significantly better at recognizing fearful (p=.012) and disgusted (p=.003) facial emotions than child-adolescent psychiatrists, while the latter were better at recognizing the angry facial emotion (p=.008). For the first time, we have shown differences in psychiatrists' facial emotion recognition ability according to therapeutic identification and being an adult or child-adolescent psychiatrist. It would be valuable to investigate how these differences, or training the ability of facial emotion recognition, would affect the quality of patient-clinician interaction and treatment-related outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Face recognition using improved-LDA with facial combined feature

    Institute of Scientific and Technical Information of China (English)

    Dake Zhou; Xin Yang; Ningsong Peng

    2005-01-01

    Face recognition subjected to various conditions is a challenging task. This paper presents a combined-feature improved Fisher classifier method for face recognition. Both holistic and local facial information are used for face representation. In addition, improved linear discriminant analysis (I-LDA) is employed for good generalization capability. Experiments show that the method is not only robust to moderate changes of illumination, pose and facial expression but also superior to traditional methods such as eigenfaces and Fisherfaces.
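
    In the same spirit, combined holistic-plus-local features with a discriminant classifier can be sketched in scikit-learn; note this uses ordinary LDA rather than the paper's improved variant (I-LDA), and all shapes below are assumptions.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        rng = np.random.default_rng(0)
        holistic = rng.random((200, 1024))     # e.g. downsampled whole-face pixels
        local = rng.random((200, 256))         # e.g. features from eye/mouth patches
        X = np.hstack([holistic, local])       # combined facial feature vector
        y = rng.integers(0, 10, size=200)      # 10 identities

        lda = LinearDiscriminantAnalysis()
        lda.fit(X[:150], y[:150])
        print("held-out accuracy:", lda.score(X[150:], y[150:]))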

  4. Recognition of Face and Emotional Facial Expressions in Autism

    Directory of Open Access Journals (Sweden)

    Muhammed Tayyib Kadak

    2013-03-01

    Full Text Available Autism is a genetically transferred neurodevelopmental disorder characterized by severe and permanent deficits in many interpersonal relation areas like communication, social interaction and emotional responsiveness. Patients with autism have deficits in face recognition, eye contact and recognition of emotional expression. Both recognition of face and expression of facial emotion carried on face processing. Structural and functional impairment in fusiform gyrus, amygdala, superior temporal sulcus and other brain regions lead to deficits in recognition of face and facial emotion. Therefore studies suggest that face processing deficits resulted in problems in areas of social interaction and emotion in autism. Studies revealed that children with autism had problems in recognition of facial expression and used mouth region more than eye region. It was also shown that autistic patients interpreted ambiguous expressions as negative emotion. In autism, deficits related in various stages of face processing like detection of gaze, face identity, recognition of emotional expression were determined, so far. Social interaction impairments in autistic spectrum disorders originated from face processing deficits during the periods of infancy, childhood and adolescence. Recognition of face and expression of facial emotion could be affected either automatically by orienting towards faces after birth, or by “learning” processes in developmental periods such as identity and emotion processing. This article aimed to review neurobiological basis of face processing and recognition of emotional facial expressions during normal development and in autism.

  5. Facial emotion recognition impairments in individuals with HIV.

    Science.gov (United States)

    Clark, Uraina S; Cohen, Ronald A; Westbrook, Michelle L; Devlin, Kathryn N; Tashima, Karen T

    2010-11-01

    Characterized by frontostriatal dysfunction, human immunodeficiency virus (HIV) is associated with cognitive and psychiatric abnormalities. Several studies have noted impaired facial emotion recognition abilities in patient populations that demonstrate frontostriatal dysfunction; however, facial emotion recognition abilities have not been systematically examined in HIV patients. The current study investigated facial emotion recognition in 50 nondemented HIV-seropositive adults and 50 control participants relative to their performance on a nonemotional landscape categorization control task. We examined the relation of HIV-disease factors (nadir and current CD4 levels) to emotion recognition abilities and assessed the psychosocial impact of emotion recognition abnormalities. Compared to control participants, HIV patients performed normally on the control task but demonstrated significant impairments in facial emotion recognition, specifically for fear. HIV patients reported greater psychosocial impairments, which correlated with increased emotion recognition difficulties. Lower current CD4 counts were associated with poorer anger recognition. In summary, our results indicate that chronic HIV infection may contribute to emotion processing problems among HIV patients. We suggest that disruptions of frontostriatal structures and their connections with cortico-limbic networks may contribute to emotion recognition abnormalities in HIV. Our findings also highlight the significant psychosocial impact that emotion recognition abnormalities have on individuals with HIV.

  6. Facial Recognition using OpenCV

    Directory of Open Access Journals (Sweden)

    Valentin Petrut Suciu

    2012-03-01

    Full Text Available

    The past decade has seen growing interest in computer vision. Fueled by the steady doubling of computing power every 13 months, face detection and recognition have transcended from an esoteric to a popular area of research in computer vision and one of the more successful applications of image analysis and algorithm-based understanding. Because of the intrinsic nature of the problem, computer vision is not only a computer science area of research but also the object of neuro-scientific and psychological studies, mainly because of the general opinion that advances in computer image processing and understanding research will provide insights into how our brains work, and vice versa.

    Because of general curiosity and interest in the matter, the authors have proposed to create an application that would allow user access to a particular machine based on an in-depth analysis of a person's facial features. This application will be developed using Intel's open-source computer vision project, OpenCV, and Microsoft's .NET framework.
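
    A minimal sketch of such an access-control check, using the LBPH recognizer from opencv-contrib-python (the cv2.face module); the file names, single-identity setup and confidence threshold are all assumptions for illustration.

        import cv2
        import numpy as np

        recognizer = cv2.face.LBPHFaceRecognizer_create()

        # Grayscale face crops of the authorized user, all the same size (hypothetical files).
        authorized = [cv2.imread(f"user_{i}.png", cv2.IMREAD_GRAYSCALE) for i in range(5)]
        labels = np.zeros(len(authorized), dtype=np.int32)   # single identity: label 0
        recognizer.train(authorized, labels)

        probe = cv2.imread("login_attempt.png", cv2.IMREAD_GRAYSCALE)
        label, confidence = recognizer.predict(probe)        # lower confidence = closer match
        grant_access = (label == 0) and (confidence < 60.0)  # threshold is illustrative
        print("access granted" if grant_access else "access denied")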

  8. Comparison of Emotion Recognition from Facial Expression and Music

    OpenAIRE

    Gašpar, Tina; Labor, Marina; Jurić, Iva; Dumančić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined as their presentation is usually very short (micro expressions), whereas the recognition itself does not have to be a conscious process. We assumed that the recognition from facial expressions is selected over the recognition of emotions communicated through music. In order to compare the success rate in recogni...

  10. Active AU Based Patch Weighting for Facial Expression Recognition

    Science.gov (United States)

    Xie, Weicheng; Shen, Linlin; Yang, Meng; Lai, Zhihui

    2017-01-01

    Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation is not fully explored in state-of-the-art works. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. The sparse representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% for the Jaffe and Cohn–Kanade (CK+) databases, respectively. Better cross-database performance has also been observed. PMID:28146094

  12. A motivational determinant of facial emotion recognition: regulatory focus affects recognition of emotions in faces.

    Science.gov (United States)

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced, and better facial emotion recognition was observed under a promotion focus compared to a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed, and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation duration on the face, which reflects a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had an impact neither on perceptual processing nor on facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.

  13. Automatic Facial Expression Recognition Based on Hybrid Approach

    Directory of Open Access Journals (Sweden)

    Ali K. K. Bermani

    2012-12-01

    Full Text Available The topic of automatic recognition of facial expressions drew many researchers late in the last century and has attracted great interest in the past few years. Several techniques have emerged to improve the efficiency of recognition by addressing problems in face detection and in extracting features for recognizing expressions. This paper proposes an automatic system for facial expression recognition with a hybrid feature extraction phase that combines holistic and analytic approaches by extracting 307 facial expression features (19 geometric features, 288 appearance features). Expression recognition is performed using a radial basis function (RBF) artificial neural network to recognize the six basic emotions (anger, fear, disgust, happiness, surprise, sadness) in addition to the neutral expression. The system achieved a recognition rate of 97.08% on a person-dependent database and 93.98% on a person-independent one.
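
    An RBF classifier in the spirit of the system above can be sketched with k-means centres, Gaussian hidden units and a linear read-out; the 307-dimensional feature vectors (19 geometric + 288 appearance) follow the abstract, while the data itself and all hyperparameters are placeholders.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import RidgeClassifier

        def rbf_activations(X, centers, gamma=0.5):
            """Gaussian hidden-layer activations over squared distances to the centres."""
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-gamma * d2)

        rng = np.random.default_rng(0)
        X = rng.random((400, 307))             # 307 hybrid expression features per sample
        y = rng.integers(0, 7, size=400)       # 6 basic emotions + neutral

        centers = KMeans(n_clusters=40, n_init=10, random_state=0).fit(X).cluster_centers_
        H = rbf_activations(X, centers)        # hidden layer
        clf = RidgeClassifier().fit(H, y)      # linear read-out
        print("training accuracy:", clf.score(H, y))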

  14. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    Science.gov (United States)

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  16. Heartbeat Signal from Facial Video for Biometric Recognition

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    Different biometric traits, such as face appearance and the heartbeat signal from an Electrocardiogram (ECG)/Phonocardiogram (PCG), are widely used in human identity recognition. Recent advances in facial-video-based measurement of cardio-physiological parameters, such as heartbeat rate, respiratory rate, and blood volume pressure, provide the possibility of extracting the heartbeat signal from facial video instead of using obtrusive ECG or PCG sensors on the body. This paper proposes the Heartbeat Signal from Facial Video (HSFV) as a new biometric trait for human identity recognition, for the first time...

  17. Facial emotion recognition ability: psychiatry nurses versus nurses from other departments.

    Science.gov (United States)

    Gultekin, Gozde; Kincir, Zeliha; Kurt, Merve; Catal, Yasir; Acil, Asli; Aydin, Aybike; Özcan, Mualla; Delikkaya, Busra N; Kacar, Selma; Emul, Murat

    2016-12-01

    Facial emotion recognition is a basic element of non-verbal communication. Although some researchers have shown that recognizing facial expressions may be important in the interaction between doctors and patients, there are no studies concerning facial emotion recognition in nurses. Here, we aimed to investigate facial emotion recognition ability in nurses and to compare the abilities of nurses from psychiatry and other departments. In this cross-sectional study, sixty-seven nurses were divided into two groups according to their departments: psychiatry (n=31) and other departments (n=36). A Facial Emotion Recognition Test, constructed from a set of photographs from Ekman and Friesen's book "Pictures of Facial Affect", was administered to all participants. In the whole group, the highest mean accuracy rate was for recognizing the happy facial emotion (99.14%), while the least accurately recognized facial expression was fear (47.71%). There were no significant differences between the two groups in mean accuracy rates for recognizing the happy, sad, fearful, angry and surprised facial emotion expressions (for all, p>0.05). The ability to recognize disgusted and neutral facial emotions tended to be better in the other nurses than in the psychiatry nurses (p=0.052 and p=0.053, respectively). Conclusion: This study was the first to reveal no difference in facial emotion recognition ability between psychiatry nurses and non-psychiatry nurses. In medical education curricula throughout the world, no specific training program is scheduled for recognizing the emotional cues of patients. We consider that improving the ability of medical staff to recognize facial emotion expressions might be beneficial in reducing inappropriate patient-staff interactions.

  18. [Developmental change in facial recognition by premature infants during infancy].

    Science.gov (United States)

    Konishi, Yukihiko; Kusaka, Takashi; Nishida, Tomoko; Isobe, Kenichi; Itoh, Susumu

    2014-09-01

    Premature infants are thought to be at increased risk for developmental disorders. We evaluated facial recognition in premature infants during early infancy, as this ability has been reported to be commonly impaired in developmentally disabled children. In premature infants and full-term infants at the age of 4 months (4 corrected months for premature infants), visual behaviors during facial recognition tasks were determined and analyzed using an eye-tracking system (Tobii T60, Tobii Technology, Sweden). Both groups of infants showed a preference for normal facial expressions; however, no preference for the upper face was observed in premature infants. Our study suggests that facial recognition ability in premature infants may develop differently from that in full-term infants.

  19. Facial expression recognition and emotional regulation in narcolepsy with cataplexy.

    Science.gov (United States)

    Bayard, Sophie; Croisier Langenier, Muriel; Dauvilliers, Yves

    2013-04-01

    Cataplexy is pathognomonic of narcolepsy with cataplexy, and is defined by a transient loss of muscle tone triggered by strong emotions. Recent research suggests abnormal amygdala function in narcolepsy with cataplexy. Emotion processing and emotional regulation strategies are complex functions involving cortical and limbic structures, like the amygdala. As the amygdala has been shown to play a role in facial emotion recognition, we tested the hypothesis that patients with narcolepsy with cataplexy would have impaired recognition of facial emotional expressions compared with patients affected by central hypersomnia without cataplexy and healthy controls. We also aimed to determine whether cataplexy modulates emotional regulation strategies. Emotional intensity, arousal and valence ratings on Ekman faces displaying happiness, surprise, fear, anger, disgust, sadness and neutral expressions by 21 drug-free patients with narcolepsy with cataplexy were compared with those of 23 drug-free, sex-, age- and intellectual-level-matched adult patients with hypersomnia without cataplexy and 21 healthy controls. All participants underwent polysomnography recording and multiple sleep latency tests, and completed depression, anxiety and emotional regulation questionnaires. The performance of patients with narcolepsy with cataplexy did not differ from that of patients with hypersomnia without cataplexy or healthy controls, either on intensity rating of each emotion against its prototypical label or on mean ratings for valence and arousal. Moreover, patients with narcolepsy with cataplexy did not use different emotional regulation strategies, and their level of depressive and anxious symptoms did not differ from the other groups. Our results demonstrate that patients with narcolepsy with cataplexy accurately perceive and discriminate facial emotions, and regulate emotions normally. The absence of alteration of perceived affective valence remains of major clinical interest in narcolepsy with cataplexy.

  20. Facial Emotion Recognition in Child Psychiatry: A Systematic Review

    Science.gov (United States)

    Collin, Lisa; Bindra, Jasmeet; Raju, Monika; Gillberg, Christopher; Minnis, Helen

    2013-01-01

    This review focuses on facial affect (emotion) recognition in children and adolescents with psychiatric disorders other than autism. A systematic search, using PRISMA guidelines, was conducted to identify original articles published prior to October 2011 pertaining to face recognition tasks in case-control studies. Used in the qualitative…

  2. Novel dynamic Bayesian networks for facial action element recognition and understanding

    Science.gov (United States)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial actions can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial point detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we used the Korean face database for model training. For testing, we used the CUbiC FacePix database, the Facial Expressions and Emotion Database (FEED), the Japanese Female Facial Expression (JAFFE) database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.
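
    For readers who want a concrete picture of the point-detection front end described above, the following is a minimal Python sketch of a local Gabor filter bank with PCA compression; it is our illustration under stated assumptions (the filter frequencies, patch size, and candidate points are invented), not the authors' implementation.

        # Hedged sketch: mean absolute Gabor responses of patches centred on
        # candidate facial points, compressed with PCA into compact descriptors.
        import numpy as np
        from scipy import ndimage as ndi
        from skimage.filters import gabor_kernel
        from sklearn.decomposition import PCA

        def gabor_bank(frequencies=(0.2, 0.3, 0.4), n_orientations=4):
            """Build a small bank of Gabor kernels (real parts only)."""
            thetas = np.arange(n_orientations) * np.pi / n_orientations
            return [np.real(gabor_kernel(f, theta=t))
                    for t in thetas for f in frequencies]

        def patch_features(image, points, bank, half=16):
            """One feature vector per candidate point: mean |response| per kernel."""
            feats = []
            for r, c in points:
                patch = image[r - half:r + half, c - half:c + half]
                feats.append([np.abs(ndi.convolve(patch, k, mode='wrap')).mean()
                              for k in bank])
            return np.asarray(feats)

        rng = np.random.default_rng(0)
        img = rng.random((64, 64))                      # stand-in face image
        pts = [(20, 20), (20, 44), (32, 32), (44, 32)]  # candidate facial points
        X = patch_features(img, pts, gabor_bank())
        print(PCA(n_components=3).fit_transform(X).shape)   # (4, 3)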

  3. Deficits in recognition, identification, and discrimination of facial emotions in patients with bipolar disorder

    Directory of Open Access Journals (Sweden)

    Adolfo Benito

    2013-12-01

    Full Text Available Objective: To analyze the recognition, identification, and discrimination of facial emotions in a sample of outpatients with bipolar disorder (BD). Methods: Forty-four outpatients with diagnosis of BD and 48 matched control subjects were selected. Both groups were assessed with tests for recognition (Emotion Recognition-40 - ER40), identification (Facial Emotion Identification Test - FEIT), and discrimination (Facial Emotion Discrimination Test - FEDT) of facial emotions, as well as a theory of mind (ToM) verbal test (Hinting Task). Differences between groups were analyzed, controlling the influence of mild depressive and manic symptoms. Results: Patients with BD scored significantly lower than controls on recognition (ER40), identification (FEIT), and discrimination (FEDT) of emotions. Regarding the verbal measure of ToM, a lower score was also observed in patients compared to controls. Patients with mild syndromal depressive symptoms obtained outcomes similar to patients in euthymia. A significant correlation between FEDT scores and global functioning (measured by the Functioning Assessment Short Test, FAST) was found. Conclusions: These results suggest that, even in euthymia, patients with BD experience deficits in recognition, identification, and discrimination of facial emotions, with potential functional implications.

  4. Facial identity recognition in the broader autism phenotype.

    Directory of Open Access Journals (Sweden)

    C Ellie Wilson

    Full Text Available BACKGROUND: The 'broader autism phenotype' (BAP) refers to the mild expression of autistic-like traits in the relatives of individuals with autism spectrum disorder (ASD). Establishing the presence of ASD traits provides insight into which traits are heritable in ASD. Here, the ability to recognise facial identity was tested in 33 parents of ASD children. METHODOLOGY AND RESULTS: In experiment 1, parents of ASD children completed the Cambridge Face Memory Test (CFMT), and a questionnaire assessing the presence of autistic personality traits. The parents, particularly the fathers, were impaired on the CFMT, but there were no associations between face recognition ability and autistic personality traits. In experiment 2, parents and probands completed equivalent versions of a simple test of face matching. On this task, the parents were not impaired relative to typically developing controls; however, the proband group was impaired. Crucially, the mothers' face matching scores correlated with the probands', even when performance on an equivalent test of matching non-face stimuli was controlled for. CONCLUSIONS AND SIGNIFICANCE: Components of face recognition ability are impaired in some relatives of ASD individuals. Results suggest that face recognition skills are heritable in ASD, and genetic and environmental factors accounting for the pattern of heritability are discussed. In general, results demonstrate the importance of assessing the skill level in the proband when investigating particular characteristics of the BAP.

  5. Facial emotion recognition in bipolar disorder: a critical review.

    Science.gov (United States)

    Rocca, Cristiana Castanho de Almeida; Heuvel, Eveline van den; Caetano, Sheila C; Lafer, Beny

    2009-06-01

    Literature review of controlled studies from the last 18 years on emotion recognition deficits in bipolar disorder. A bibliographical search for controlled studies with samples larger than 10 participants from 1990 to June 2008 was completed in Medline, Lilacs, PubMed and ISI. Thirty-two papers were evaluated. Euthymic bipolar disorder patients presented impairment in recognizing disgust and fear. Manic BD patients showed difficulty recognizing fearful and sad faces. Pediatric bipolar disorder patients and children at risk presented impairment in their capacity to recognize emotions in adult and child faces. Bipolar disorder patients were more accurate in recognizing facial emotions than schizophrenic patients. Bipolar disorder patients present impaired recognition of disgust, fear and sadness that can be partially attributed to mood state. In mania, they have difficulty recognizing fear and disgust. Bipolar disorder patients were more accurate in recognizing emotions than depressive and schizophrenic patients. Bipolar disorder children present a tendency to misjudge extreme facial expressions as being moderate or mild in intensity. Affective and cognitive deficits in bipolar disorder vary according to mood state. Follow-up studies re-testing bipolar disorder patients after recovery are needed in order to investigate whether these abnormalities reflect a state or trait marker and can be considered an endophenotype. Future studies should aim at standardizing tasks and designs.

  6. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Full Text Available Emotions play a crucial role in person-to-person interaction. In recent years, there has been a growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications, especially by observing facial expressions. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user's emotional expressions. We present an approach for emotion recognition from facial expression, hand and body posture. Our model uses a multimodal emotion recognition system in which two different models are used for facial expression recognition and for hand and body posture recognition, and the results of both classifiers are then combined using a third classifier which gives the resulting emotion. A multimodal system gives more accurate results than a unimodal or bimodal system.

  7. Facial Gesture Recognition Using Correlation And Mahalanobis Distance

    CERN Document Server

    Kapoor, Supriya; Bhatia, Rahul

    2010-01-01

    Augmenting human-computer interaction with automated analysis and synthesis of facial expressions is a goal towards which much research effort has been devoted recently. Facial gesture recognition is one of the important components of natural human-machine interfaces; it may also be used in behavioural science, security systems and in clinical practice. Although humans recognise facial expressions virtually without effort or delay, reliable expression recognition by machine is still a challenge. The facial expression recognition problem is challenging because different individuals display the same expression differently. This paper presents an overview of gesture recognition in real time using the concepts of correlation and Mahalanobis distance. We consider the six universal emotional categories, namely joy, anger, fear, disgust, sadness and surprise.

  8. Temporal Lobe Structures and Facial Emotion Recognition in Schizophrenia Patients and Nonpsychotic Relatives

    Science.gov (United States)

    Goghari, Vina M.; MacDonald, Angus W.; Sponheim, Scott R.

    2011-01-01

    Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions. PMID:20484523

  9. Frame-Based Facial Expression Recognition Using Geometrical Features

    Directory of Open Access Journals (Sweden)

    Anwar Saeed

    2014-01-01

    Full Text Available To improve human-computer interaction (HCI) to be as good as human-human interaction, building an efficient approach for human emotion recognition is required. These emotions could be fused from several modalities such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness), with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; this knowledge is gained through human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we have found that using eight facial points, we can achieve the state-of-the-art recognition rate. However, this state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
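
    As a hypothetical illustration of such geometry-only, frame-based features, the sketch below derives scale-invariant pairwise distances from eight landmarks and trains an off-the-shelf classifier; the landmark layout, the inter-ocular normalisation, and the SVM are our assumptions, not the paper's method.

        # Sketch: 28 pairwise landmark distances, normalised by inter-ocular
        # distance so that no person-specific neutral frame is required.
        import numpy as np
        from itertools import combinations
        from sklearn.svm import SVC

        def geometric_features(landmarks):
            """landmarks: (8, 2) array; rows 0 and 1 assumed to be eye centres."""
            iod = np.linalg.norm(landmarks[0] - landmarks[1])
            return np.array([np.linalg.norm(landmarks[i] - landmarks[j]) / iod
                             for i, j in combinations(range(len(landmarks)), 2)])

        rng = np.random.default_rng(1)                 # toy data: 60 labelled frames
        X = np.array([geometric_features(rng.random((8, 2))) for _ in range(60)])
        y = rng.integers(0, 6, size=60)                # six expression classes
        clf = SVC(kernel='rbf').fit(X, y)
        print(clf.predict(X[:3]))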

  10. Theory of mind and recognition of facial emotion in dementia: challenge to current concepts.

    Science.gov (United States)

    Freedman, Morris; Binns, Malcolm A; Black, Sandra E; Murphy, Cara; Stuss, Donald T

    2013-01-01

    Current literature suggests that theory of mind (ToM) and recognition of facial emotion are impaired in behavioral variant frontotemporal dementia (bvFTD). In contrast, studies suggest that ToM is spared in Alzheimer disease (AD). However, there is controversy whether recognition of emotion in faces is impaired in AD. This study challenges the concepts that ToM is preserved in AD and that recognition of facial emotion is impaired in bvFTD. ToM, recognition of facial emotion, and identification of emotions associated with video vignettes were studied in bvFTD, AD, and normal controls. ToM was assessed using false-belief and visual perspective-taking tasks. Identification of facial emotion was tested using Ekman and Friesen's pictures of facial affect. After adjusting for relevant covariates, there were significant ToM deficits in bvFTD and AD compared with controls, whereas neither group was impaired in the identification of emotions associated with video vignettes. There was borderline impairment in recognizing angry faces in bvFTD. Patients with AD showed significant deficits on false belief and visual perspective taking, and bvFTD patients were impaired on second-order false belief. We report novel findings challenging the concepts that ToM is spared in AD and that recognition of facial emotion is impaired in bvFTD.

  11. Sad and happy facial emotion recognition impairment in progressive supranuclear palsy in comparison with Parkinson's disease.

    Science.gov (United States)

    Pontieri, Francesco E; Assogna, Francesca; Stefani, Alessandro; Pierantozzi, Mariangela; Meco, Giuseppe; Benincasa, Dario; Colosimo, Carlo; Caltagirone, Carlo; Spalletta, Gianfranco

    2012-08-01

    The severity of motor and non-motor symptoms of progressive supranuclear palsy (PSP) has a profound impact on social interactions of affected individuals and may, consequently, contribute to alter emotion recognition. Here we investigated facial emotion recognition impairment in PSP with respect to Parkinson's disease (PD), with the primary aim of outlining the differences between the two disorders. Moreover, we applied an intensity-dependent paradigm to examine the different threshold of encoding emotional faces in PSP and PD. The Penn emotion recognition test (PERT) was used to assess facial emotion recognition ability in PSP and PD patients. The 2 groups were matched for age, disease duration, global cognition, depression, anxiety, and daily L-Dopa intake. PSP patients displayed significantly lower recognition of sad and happy emotional faces with respect to PD ones. This applied to global recognition, as well as to low-intensity and high-intensity facial emotion recognition. These results indicate specific impairment of recognition of sad and happy facial emotions in PSP with respect to PD patients. The differences may depend upon diverse involvement of cortical-subcortical loops integrating emotional states and cognition between the two conditions, and might represent a neuropsychological correlate of the apathetic syndrome frequently encountered in PSP.

  12. Comparison of emotion recognition from facial expression and music.

    Science.gov (United States)

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined, as their presentation is usually very short (micro expressions), whereas the recognition itself does not have to be a conscious process. We assumed that recognition from facial expressions is selected over recognition of emotions communicated through music. In order to compare the success rate in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey which included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized if presented on human faces than in music, possibly because understanding facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for because of the necessity of communication with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive skills like attention, memory and motivation. Music pieces are probably processed differently in the brain than facial expressions and, consequently, probably evaluated differently as relevant emotional clues.

  13. Ventromedial prefrontal cortex mediates visual attention during facial emotion recognition.

    Science.gov (United States)

    Wolf, Richard C; Philippi, Carissa L; Motzkin, Julian C; Baskaya, Mustafa K; Koenigs, Michael

    2014-06-01

    The ventromedial prefrontal cortex is known to play a crucial role in regulating human social and emotional behaviour, yet the precise mechanisms by which it subserves this broad function remain unclear. Whereas previous neuropsychological studies have largely focused on the role of the ventromedial prefrontal cortex in higher-order deliberative processes related to valuation and decision-making, here we test whether the ventromedial prefrontal cortex may also be critical for more basic aspects of orienting attention to socially and emotionally meaningful stimuli. Using eye tracking during a test of facial emotion recognition in a sample of lesion patients, we show that bilateral ventromedial prefrontal cortex damage impairs visual attention to the eye regions of faces, particularly for fearful faces. This finding demonstrates a heretofore unrecognized function of the ventromedial prefrontal cortex: the basic attentional process of controlling eye movements to faces expressing emotion.

  14. Dopamine and light: effects on facial emotion recognition.

    Science.gov (United States)

    Cawley, Elizabeth; Tippler, Maria; Coupland, Nicholas J; Benkelfat, Chawki; Boivin, Diane B; Aan Het Rot, Marije; Leyton, Marco

    2017-06-01

    Bright light can affect mood states and social behaviours. Here, we tested potential interacting effects of light and dopamine on facial emotion recognition. Participants were 32 women with subsyndromal seasonal affective disorder tested in either a bright (3000 lux) or dim light (10 lux) environment. Each participant completed two test days, one following the ingestion of a phenylalanine/tyrosine-deficient mixture and one with a nutritionally balanced control mixture, both administered double blind in a randomised order. Approximately four hours post-ingestion participants completed a self-report measure of mood followed by a facial emotion recognition task. All testing took place between November and March when seasonal symptoms would be present. Following acute phenylalanine/tyrosine depletion (APTD), compared to the nutritionally balanced control mixture, participants in the dim light condition were more accurate at recognising sad faces, less likely to misclassify them, and faster at responding to them, effects that were independent of changes in mood. Effects of APTD on responses to sad faces in the bright light group were less consistent. There were no APTD effects on responses to other emotions, with one exception: a significant light × mixture interaction was seen for the reaction time to fear, but the pattern of effect was not predicted a priori or seen on other measures. Together, the results suggest that the processing of sad emotional stimuli might be greater when dopamine transmission is low. Bright light exposure, used for the treatment of both seasonal and non-seasonal mood disorders, might produce some of its benefits by preventing this effect.

  15. Facial Expression Recognition Teaching to Preschoolers with Autism

    DEFF Research Database (Denmark)

    Christinaki, Eirini; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2013-01-01

    The recognition of facial expressions is important for the perception of emotions. Understanding emotions is essential in human communication and social interaction. Children with autism have been reported to exhibit deficits in the recognition of affective expressions. Their difficulties… for teaching emotion recognition from facial expressions should occur as early as possible in order to be successful and to have a positive effect. It is claimed that Serious Games can be very effective in the areas of therapy and education for children with autism. However, those computer interventions… require considerable skills for interaction. Before the age of 6, most children with autism do not have such basic motor skills in order to manipulate a mouse or a keyboard. Our approach takes account of the specific characteristics of preschoolers with autism and their physical inabilities. By creating an educational computer game, which provides physical interaction by employing natural user interface (NUI), we aim to support early intervention and to foster facial expression learning…

  16. Non-Cooperative Facial Recognition Video Dataset Collection Plan

    Energy Technology Data Exchange (ETDEWEB)

    Kimura, Marcia L.; Erikson, Rebecca L.; Lombardo, Nicholas J.

    2013-08-31

    The Pacific Northwest National Laboratory (PNNL) will produce a non-cooperative (i.e., not posing for the camera) facial recognition video data set for research purposes to evaluate and enhance facial recognition systems technology. The aggregate data set consists of 1) videos capturing PNNL role players and public volunteers in three key operational settings, 2) photographs of the role players for enrollment in an evaluation database, and 3) ground truth data that documents when the role player is within various camera fields of view. PNNL will deliver the aggregate data set to DHS, which may then choose to make it available to other government agencies interested in evaluating and enhancing facial recognition systems. The three operational settings that will be the focus of the video collection effort are: 1) unidirectional crowd flow, 2) bi-directional crowd flow, and 3) linear and/or serpentine queues.

  17. Facial Expression Recognition of Various Internal States via Manifold Learning

    Institute of Scientific and Technical Information of China (English)

    Young-Suk Shin

    2009-01-01

    Emotions are becoming increasingly important in human-centered interaction architectures. Recognition of facial expressions, which are central to human-computer interactions, seems natural and desirable. However, facial expressions include mixed emotions that are continuous rather than discrete and vary from moment to moment. This paper presents a novel method of recognizing facial expressions of various internal states via manifold learning, to serve the aim of human-centered interaction studies. A critical review of widely used emotion models is given; then, facial expression features of various internal states are extracted via locally linear embedding (LLE). The recognition of facial expressions is carried out in the pleasure-displeasure and arousal-sleep dimensions of a two-dimensional model of emotion. The recognition results show that various internal state expressions mapped to the embedding space via the LLE algorithm can effectively represent the structural nature of the two-dimensional model of emotion. Our research has therefore established that the relationship between facial expressions of various internal states can be elaborated in the two-dimensional model of emotion via the locally linear embedding algorithm.
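
    The manifold step lends itself to a compact illustration. The sketch below is ours, not the author's code; the feature vectors are random stand-ins for extracted expression features, and it simply embeds high-dimensional samples into a two-dimensional space analogous to the pleasure-displeasure by arousal-sleep plane.

        # Sketch: locally linear embedding of expression features into 2-D.
        import numpy as np
        from sklearn.manifold import LocallyLinearEmbedding

        rng = np.random.default_rng(2)
        features = rng.random((200, 50))      # 200 expression samples, 50-D each
        lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2)
        coords = lle.fit_transform(features)  # row = (valence-like, arousal-like)
        print(coords.shape)                   # (200, 2)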

  18. Violent video game play impacts facial emotion recognition.

    Science.gov (United States)

    Kirsh, Steven J; Mounts, Jeffrey R W

    2007-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent video game play. Color photos of calm facial expressions were morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph. Typically, happy faces are identified faster than angry faces (the happy-face advantage). Results indicated that playing a violent video game led to a reduction in the happy-face advantage. Implications of these findings are discussed with respect to current models of aggressive behavior.

  19. Recognition of facial affect in girls with conduct disorder.

    Science.gov (United States)

    Pajer, Kathleen; Leininger, Lisa; Gardner, William

    2010-02-28

    Impaired recognition of facial affect has been reported in youths and adults with antisocial behavior. However, few of these studies have examined subjects with the psychiatric disorders associated with antisocial behavior, and there are virtually no data on females. Our goal was to determine whether facial affect recognition is impaired in adolescent girls with conduct disorder (CD). Performance on the Ekman Pictures of Facial Affect (POFA) task was compared in 35 girls with CD (mean age 17.9 years +/- 0.95; 38.9% African-American) and 30 girls who had no lifetime history of psychiatric disorder (mean age 17.6 years +/- 0.77; 30% African-American). Forty-five slides representing the six emotions in the POFA were presented one at a time; stimulus duration was 5 s. Multivariate analyses indicated that CD vs. control status was not significantly associated with the total number of correct answers, nor with the number of correct answers for any specific emotion. Effect sizes were all considered small. Within-CD analyses did not demonstrate a significant effect of aggressive antisocial behavior on facial affect recognition. Our findings suggest that girls with CD are not impaired in facial affect recognition. However, we did find that girls with a history of trauma/neglect made a greater number of errors in recognizing fearful faces. Explanations for these findings are discussed and implications for future research presented. 2009 Elsevier B.V. All rights reserved.

  20. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    Science.gov (United States)

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
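
    The within-network connectivity (WNC) measure reduces to a simple computation: for each voxel in the face network, average its resting-state correlation with every other network voxel. The sketch below is our reconstruction under simplifying assumptions, with random numbers standing in for preprocessed fMRI time series.

        # Sketch: voxel-wise within-network connectivity for face-network voxels.
        import numpy as np

        rng = np.random.default_rng(3)
        ts = rng.standard_normal((150, 40))   # 150 time points x 40 FN voxels
        corr = np.corrcoef(ts.T)              # 40 x 40 voxel correlation matrix
        np.fill_diagonal(corr, np.nan)        # drop self-correlations
        wnc = np.nanmean(corr, axis=1)        # one WNC value per voxel
        print(wnc.shape)                      # (40,)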

  1. GENDER DIFFERENCES IN THE RECOGNITION OF FACIAL EXPRESSIONS OF EMOTION

    Directory of Open Access Journals (Sweden)

    CARLOS FELIPE PARDO-VÉLEZ

    2003-07-01

    Full Text Available Gender differences in the recognition of facial expressions of anger, happiness and sadness were researched in students 18-25 years of age. A reaction time procedure was used, and the percentage of correct answers when recognizing was also measured. Though the working hypothesis expected gender differences in facial expression recognition, results suggest that these differences are not significant at a level of 0.05. Statistical analysis shows a greater easiness (at a non-significant level) for women to recognize happiness expressions, and for men to recognize anger expressions. The implications of these data are discussed, along with possible extensions of this investigation in terms of sample size and college major of the participants.

  2. Efficient Facial Expression and Face Recognition using Ranking Method

    Directory of Open Access Journals (Sweden)

    Murali Krishna kanala

    2015-06-01

    Full Text Available Expression detection is useful as a non-invasive method of lie detection and behaviour prediction. However, these facial expressions may be difficult to detect by the untrained eye. In this paper we implement facial expression recognition techniques using a ranking method. The human face plays an important role in our social interaction, conveying people's identity. Using the human face as a key to security, biometric face recognition technology has received significant attention in the past several years. Experiments are performed using a standard database. The universally accepted three principal emotions to be recognized are surprise, sadness and happiness, along with neutral.

  3. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    Science.gov (United States)

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  5. Positive and negative facial emotional expressions: the effect on infants' and children's facial identity recognition

    OpenAIRE

    Brenna,

    2013-01-01

    The aim of the present study was to investigate the origin and development of the interdependence between identity recognition and facial emotional expression processing, suggested by recent models of face processing (Calder & Young, 2005) and supported by findings in adults (e.g. Baudouin, Gilibert, Sansone, & Tiberghien, 2000; Schweinberger & Soukup, 1998). In particular, the effect of facial emotional expressions on infants' and children's ability to recognize the identity of a face was explored...

  6. Facial emotion recognition in adolescents with personality pathology.

    Science.gov (United States)

    Berenschot, Fleur; van Aken, Marcel A G; Hessels, Christel; de Castro, Bram Orobio; Pijl, Ysbrand; Montagne, Barbara; van Voorst, Guus

    2014-07-01

    It has been argued that a heightened emotional sensitivity interferes with the cognitive processing of facial emotion recognition and may explain the intensified emotional reactions to external emotional stimuli of adults with personality pathology, such as borderline personality disorder (BPD). This study examines if and how deviations in facial emotion recognition also occur in adolescents with personality pathology. Forty-two adolescents with personality pathology, 111 healthy adolescents and 28 psychiatric adolescents without personality pathology completed the Emotion Recognition Task, measuring their accuracy and sensitivity in recognizing positive and negative emotion expressions presented in several, morphed, expression intensities. Adolescents with personality pathology showed an enhanced recognition accuracy of facial emotion expressions compared to healthy adolescents and clients with various Axis-I psychiatric diagnoses. They were also more sensitive to less intensive expressions of emotions than clients with various Axis-I psychiatric diagnoses, but not more than healthy adolescents. As has been shown in research on adults with BPD, adolescents with personality pathology show enhanced facial emotion recognition.

  7. Facial recognition technology safeguards Beijing Olympics

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    To ensure the safety of spectators and athletes at the biggest-ever Olympic Games, automation experts from CAS have developed China's first system to identify individuals by their facial features, and successfully applied it to the opening night security check on 8 August in Beijing.

  8. Enhanced recognition of facial expressions of disgust in opiate users.

    OpenAIRE

    Martin, L.

    2005-01-01

    This literature review focuses on the research relating to facial expressions of emotion, first addressing the question of what they are and what role they play, before going on to review the mechanisms by which they are recognised in others. It then considers the psychiatric and drug-using populations in which the ability to recognise facial expressions is compromised, and how this corresponds to the social behaviour that characterises these groups. Finally, this review will focus on one par...

  9. Automatic facial feature extraction and expression recognition based on neural network

    CERN Document Server

    Khandait, S P; Khandait, P D

    2012-01-01

    In this paper, an approach to automatic facial feature extraction from a still frontal posed image, and to classification and recognition of facial expression and hence the emotion and mood of a person, is presented. A feed-forward back-propagation neural network is used as a classifier for classifying the expression of a supplied face into seven basic categories: surprise, neutral, sad, disgust, fear, happy and angry. For face portion segmentation and localization, morphological image processing operations are used. Permanent facial features like eyebrows, eyes, mouth and nose are extracted using the SUSAN edge detection operator, facial geometry, and edge projection analysis. Experiments are carried out on the JAFFE facial expression database and give good performance: 100% accuracy on the training set and 95.26% accuracy on the test set.
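
    For illustration, a seven-class feed-forward network of the kind described can be sketched in a few lines; the feature vectors here are random stand-ins for the SUSAN/edge-projection features, and the layer size and solver settings are our assumptions, not the paper's configuration.

        # Sketch: feed-forward net trained by back-propagation for 7 expressions.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        CLASSES = ['surprise', 'neutral', 'sad', 'disgust', 'fear', 'happy', 'angry']
        rng = np.random.default_rng(4)
        X = rng.random((140, 20))                      # 140 faces x 20 features
        y = rng.integers(0, len(CLASSES), size=140)    # toy labels
        net = MLPClassifier(hidden_layer_sizes=(30,), max_iter=2000, random_state=0)
        net.fit(X, y)
        print([CLASSES[i] for i in net.predict(X[:5])])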

  10. The first facial expression recognition and analysis challenge

    NARCIS (Netherlands)

    Valstar, Michel F.; Jiang, Bihan; Mehu, Marc; Pantic, Maja; Scherer, Klaus

    2011-01-01

    Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability have come some way; for instance, there exist a number of commonly u…

  12. Intelligent Facial Recognition Systems: Technology advancements for security applications

    Energy Technology Data Exchange (ETDEWEB)

    Beer, C.L.

    1993-07-01

    Insider problems such as theft and sabotage can occur within the security and surveillance realm of operations when unauthorized people obtain access to sensitive areas. A possible solution to these problems is a means to identify individuals (not just credentials or badges) in a given sensitive area and provide full time personnel accountability. One approach desirable at Department of Energy facilities for access control and/or personnel identification is an Intelligent Facial Recognition System (IFRS) that is non-invasive to personnel. Automatic facial recognition does not require the active participation of the enrolled subjects, unlike most other biological measurement (biometric) systems (e.g., fingerprint, hand geometry, or eye retinal scan systems). It is this feature that makes an IFRS attractive for applications other than access control such as emergency evacuation verification, screening, and personnel tracking. This paper discusses current technology that shows promising results for DOE and other security applications. A survey of research and development in facial recognition identified several companies and universities that were interested and/or involved in the area. A few advanced prototype systems were also identified. Sandia National Laboratories is currently evaluating facial recognition systems that are in the advanced prototype stage. The initial application for the evaluation is access control in a controlled environment with a constant background and with cooperative subjects. Further evaluations will be conducted in a less controlled environment, which may include a cluttered background and subjects that are not looking towards the camera. The outcome of the evaluations will help identify areas of facial recognition systems that need further development and will help to determine the effectiveness of the current systems for security applications.

  13. Visual scan paths and recognition of facial identity in autism spectrum disorder and typical development.

    Science.gov (United States)

    Wilson, C Ellie; Palermo, Romina; Brock, Jon

    2012-01-01

    Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: i) better facial identity recognition is associated with increased gaze time on the Eye region; ii) better facial identity recognition is associated with increased eye-movements around the face. Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the 'Dynamic Scanning Index'--which was incremented each time the participant saccaded into and out of one of the core-feature interest areas--was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. In support of the second hypothesis, results suggested that increased saccading between core-features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined.
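
    The 'Dynamic Scanning Index' admits a compact paraphrase: increment a counter whenever gaze moves into, out of, or between core-feature interest areas. The sketch below is our reading of that definition, not the study's code; the region labels and the exact transition rule are assumptions.

        # Sketch: count saccades into/out of/between core-feature interest areas.
        CORE = {'left_eye', 'right_eye', 'nose', 'mouth'}

        def dynamic_scanning_index(fixation_regions):
            """fixation_regions: sequence of region labels, one per fixation."""
            index = 0
            for prev, cur in zip(fixation_regions, fixation_regions[1:]):
                crossed = (prev in CORE) != (cur in CORE)
                between = prev in CORE and cur in CORE and prev != cur
                if crossed or between:
                    index += 1
            return index

        scanpath = ['forehead', 'left_eye', 'right_eye', 'cheek', 'mouth', 'mouth']
        print(dynamic_scanning_index(scanpath))   # 4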

  15. Reaction Time of Facial Affect Recognition in Asperger's Disorder for Cartoon and Real, Static and Moving Faces

    Science.gov (United States)

    Miyahara, Motohide; Bray, Anne; Tsujii, Masatsugu; Fujita, Chikako; Sugiyama, Toshiro

    2007-01-01

    This study used a choice reaction-time paradigm to test the perceived impairment of facial affect recognition in Asperger's disorder. Twenty teenagers with Asperger's disorder and 20 controls were compared with respect to the latency and accuracy of response to happy or disgusted facial expressions, presented in cartoon or real images and in…

  17. Featural processing in recognition of emotional facial expressions.

    Science.gov (United States)

    Beaudry, Olivia; Roy-Charland, Annie; Perron, Melanie; Cormier, Isabelle; Tapp, Roxane

    2014-04-01

    The present study aimed to clarify the roles played by the eye/brow and mouth areas in the recognition of the six basic emotions. In Experiment 1, accuracy was examined while participants viewed partial and full facial expressions; in Experiment 2, participants viewed full facial expressions while their eye movements were recorded. Recognition rates were consistent with previous research: happiness was highest and fear was lowest. The mouth and eye/brow areas were not equally important for the recognition of all emotions. More precisely, while the mouth was found to be important for the recognition of happiness and the eye/brow area for that of sadness, results were not as consistent for the other emotions. In Experiment 2, consistent with previous studies, the eyes/brows were fixated for longer periods than the mouth for all emotions. Again, variations occurred as a function of emotion, the mouth playing an important role for happiness and the eyes/brows for sadness. The general pattern of results for the other four emotions was inconsistent between the experiments as well as across different measures. The complexity of the results suggests that the recognition process for emotional facial expressions cannot be reduced to simple featural processing or holistic processing for all emotions.

  18. Age, gender and puberty influence the development of facial emotion recognition

    OpenAIRE

    Lawrence, Kate; Campbell, Ruth; Skuse, David

    2015-01-01

    Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children’s ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In or...

  19. Efficient Web-based Facial Recognition System Employing 2DHOG

    CERN Document Server

    Abdelwahab, Moataz M; Yousry, Islam

    2012-01-01

    In this paper, a system for facial recognition to identify missing and found people in Hajj and Umrah is described as a web portal. Specifically, we present a novel algorithm for recognition and classification of facial images based on applying 2DPCA to a 2D representation of the histogram of oriented gradients (2D-HOG), which maintains the spatial relation between pixels of the input images. This algorithm allows a compact representation of the images, which reduces the computational complexity and the storage requirements, while maintaining the highest reported recognition accuracy. This makes the method suitable for use with very large datasets. A large dataset was collected for people in Hajj. Experimental results employing the ORL, UMIST, JAFFE, and HAJJ datasets confirm these excellent properties.
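
    The 2DPCA step, which operates on image matrices rather than flattened vectors, can be illustrated briefly; the sketch below uses random stand-ins for 2D-HOG maps, and the map size and number of retained axes are our assumptions, not the paper's settings.

        # Sketch: 2DPCA on 2-D HOG maps via the image covariance matrix.
        import numpy as np

        rng = np.random.default_rng(5)
        hog_maps = rng.random((100, 16, 9))     # 100 faces, 16 x 9 2D-HOG maps
        centred = hog_maps - hog_maps.mean(axis=0)
        # image covariance: average over samples of A^T A  (9 x 9)
        G = np.einsum('kij,kil->jl', centred, centred) / len(hog_maps)
        vals, vecs = np.linalg.eigh(G)          # eigenvalues in ascending order
        W = vecs[:, -3:]                        # keep the top-3 projection axes
        features = hog_maps @ W                 # (100, 16, 3) matrix features
        print(features.shape)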

  20. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    Science.gov (United States)

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent's social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expressions have been reported (Ruffman et al., 2008), with older adults showing a deficit in the recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al., 2009, 2011; Breazeal, 2003); however, little research has compared in depth younger and older adults' ability to label a virtual agent's facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand whether those age-related differences are influenced by the intensity of the emotion, the dynamic formation of the emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character, differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition of expressions displayed by three types of virtual characters differing by human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a…

  1. Facial expression recognition in Alzheimer's disease: a longitudinal study.

    Science.gov (United States)

    Torres, Bianca; Santos, Raquel Luiza; Sousa, Maria Fernanda Barroso de; Simões Neto, José Pedro; Nogueira, Marcela Moreira Lima; Belfort, Tatiana T; Dias, Rachel; Dourado, Marcia Cristina Nascimento

    2015-05-01

    Facial recognition is one of the most important aspects of social cognition. In this study, we investigate the patterns of change and the factors involved in the ability to recognize emotion in mild Alzheimer's disease (AD). Through a longitudinal design, we assessed 30 people with AD. We used an experimental task that includes matching expressions with picture stimuli, labelling emotions and emotionally recognizing a stimulus situation. We observed a significant difference in the situational recognition task (p ≤ 0.05) between baseline and the second evaluation. The linear regression showed that cognition is a predictor of emotion recognition impairment (p ≤ 0.05). The ability to perceive emotions from facial expressions was impaired, particularly when the emotions presented were relatively subtle. Cognition is recruited to comprehend emotional situations in cases of mild dementia.

  2. Facial Expression Recognition Teaching to Preschoolers with Autism

    DEFF Research Database (Denmark)

    Christinaki, Eirini; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2013-01-01

    The recognition of facial expressions is important for the perception of emotions. Understanding emotions is essential in human communication and social interaction. Children with autism have been reported to exhibit deficits in the recognition of affective expressions. Their difficulties… for teaching emotion recognition from facial expressions should occur as early as possible in order to be successful and to have a positive effect. It is claimed that Serious Games can be very effective in the areas of therapy and education for children with autism. However, those computer interventions… require considerable skills for interaction. Before the age of 6, most children with autism do not have such basic motor skills in order to manipulate a mouse or a keyboard. Our approach takes account of the specific characteristics of preschoolers with autism and their physical inabilities. By creating an educational computer game, which provides physical interaction by employing natural user interface (NUI), we aim to support early intervention and to foster facial expression learning…

  3. Attention to Social Stimuli and Facial Identity Recognition Skills in Autism Spectrum Disorder

    Science.gov (United States)

    Wilson, C. E.; Brock, J.; Palermo, R.

    2010-01-01

    Background: Previous research suggests that individuals with autism spectrum disorder (ASD) have a reduced preference for viewing social stimuli in the environment and impaired facial identity recognition. Methods: Here, we directly tested a link between these two phenomena in 13 ASD children and 13 age-matched typically developing (TD) controls.…

  5. Holistic face processing can inhibit recognition of forensic facial composites.

    Science.gov (United States)

    McIntyre, Alex H; Hancock, Peter J B; Frowd, Charlie D; Langton, Stephen R H

    2016-04-01

    Facial composite systems help eyewitnesses to show the appearance of criminals. However, likenesses created by unfamiliar witnesses will not be completely accurate, and people familiar with the target can find them difficult to identify. Faces are processed holistically; we explore whether this impairs identification of inaccurate composite images and whether recognition can be improved. In Experiment 1 (n = 64) an imaging technique was used to make composites of celebrity faces more accurate and identification was contrasted with the original composite images. Corrected composites were better recognized, confirming that errors in production of the likenesses impair identification. The influence of holistic face processing was explored by misaligning the top and bottom parts of the composites (cf. Young, Hellawell, & Hay, 1987). Misalignment impaired recognition of corrected composites but identification of the original, inaccurate composites significantly improved. This effect was replicated with facial composites of noncelebrities in Experiment 2 (n = 57). We conclude that, like real faces, facial composites are processed holistically: recognition is impaired because unlike real faces, composites contain inaccuracies and holistic face processing makes it difficult to perceive identifiable features. This effect was consistent across composites of celebrities and composites of people who are personally familiar. Our findings suggest that identification of forensic facial composites can be enhanced by presenting composites in a misaligned format.
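
    The misalignment manipulation (after Young, Hellawell, & Hay, 1987) is easy to picture in code. The sketch below is a hypothetical stimulus-preparation step, not the study's materials; the pixel offset and blank background are arbitrary choices.

        # Sketch: misalign a composite by shifting its bottom half sideways.
        import numpy as np

        def misalign(image, offset=15):
            """Shift the bottom half of a 2-D grayscale image right by `offset` px."""
            out = np.full_like(image, image.max())      # light blank background
            mid = image.shape[0] // 2
            out[:mid] = image[:mid]                     # top half unchanged
            out[mid:, offset:] = image[mid:, :-offset]  # bottom half displaced
            return out

        face = np.random.default_rng(6).random((128, 128))  # stand-in composite
        print(misalign(face).shape)                         # (128, 128)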

  6. [Neurobiological basis of human recognition of facial emotion].

    Science.gov (United States)

    Mikhaĭlova, E S

    2005-01-01

    This review of current data and ideas concerning the neurophysiological mechanisms and morphological foundations of one of the most essential communicative functions of humans and monkeys, the recognition of faces and their emotional expressions, focuses on its dynamic realization and structural basis. On the basis of literature data on hemodynamic and metabolic mapping of the brain, the author analyses the role of different zones of the ventral and dorsal visual cortical pathways, the frontal neocortex and the amygdala in the processing of facial features, as well as the specificity of this processing at each level. Special attention is given to the modular principle of face processing in the temporal cortex. The dynamic characteristics of facial recognition are discussed in relation to electrical evoked response data in healthy and diseased humans and in monkeys. Current evidence on the role of different brain structures in the generation of successive evoked response waves, corresponding to successive stages of facial processing, is analyzed. The similarities and differences between the mechanisms of recognition of faces and of their emotional expressions are also considered.

  7. Facial Expression Recognition Based on WAPA and OEPA Fastica

    Directory of Open Access Journals (Sweden)

    Humayra Binte Ali

    2014-06-01

    Full Text Available The face is one of the most important biometric traits owing to its uniqueness and robustness. For this reason, researchers from many diverse fields, such as security, psychology, image processing, and computer vision, have taken up research on face detection as well as facial expression recognition. Subspace learning methods work very well for recognizing facial features. Among subspace learning techniques, PCA, ICA and NMF are the most prominent. In this work, our main focus is on Independent Component Analysis (ICA). Among the several architectures of ICA, we used the FastICA and LS-ICA algorithms. We applied FastICA to whole faces and to different facial parts to analyze the influence of each part on the basic facial expressions. Our extended algorithms, WAPA-FastICA and OEPA-FastICA, are discussed in the proposed-algorithm section. Locally Salient ICA (LS-ICA) is implemented on the whole face using 8x8 windows to find the most prominent facial features for facial expression. The experiments show that our proposed OEPA-FastICA and WAPA-FastICA outperform the existing Whole-FastICA and LS-ICA methods.
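    As a rough illustration of the part-based ICA idea described in this record, the sketch below applies plain FastICA to whole-face and mouth-region crops and feeds the resulting coefficients to a nearest-neighbour classifier. It is a minimal sketch only: the data are random stand-ins for a face database, the region box and all names are assumptions, and the paper's WAPA/OEPA weighting schemes are not reproduced.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Toy stand-in for a 64x64 grayscale face database with 6 expression labels.
faces = rng.random((120, 64, 64))
labels = rng.integers(0, 6, size=120)

def part_ica_features(images, rows, cols, n_components=20):
    """Crop one facial region, flatten, and return per-image ICA coefficients."""
    X = images[:, rows[0]:rows[1], cols[0]:cols[1]].reshape(len(images), -1)
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    return ica.fit_transform(X)

# Whole face vs. mouth region: which representation separates expressions better?
whole_feats = part_ica_features(faces, (0, 64), (0, 64))
mouth_feats = part_ica_features(faces, (32, 64), (0, 64))  # lower half ~ mouth
for name, feats in [("whole", whole_feats), ("mouth", mouth_feats)]:
    clf = KNeighborsClassifier(n_neighbors=3).fit(feats, labels)
    print(name, clf.score(feats, labels))   # training-set score, illustration only
```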

  8. Dynamic Approaches for Facial Recognition Using Digital Image Speckle Correlation

    Science.gov (United States)

    Rafailovich-Sokolov, Sara; Guan, E.; Afriat, Isabelle; Rafailovich, Miriam; Sokolov, Jonathan; Clark, Richard

    2004-03-01

    Digital image analysis techniques have been extensively used in facial recognition. To date, most static facial characterization techniques, which are usually based on Fourier transform methods, are sensitive to lighting, shadows, or modification of appearance by makeup, natural aging or surgery. In this study we demonstrate that it is possible to uniquely identify faces by analyzing the natural motion of facial features with Digital Image Speckle Correlation (DISC). Human skin has a natural pattern produced by the texture of the skin pores, which is easily visible with conventional digital cameras of resolution greater than 4 megapixels. Hence the application of the DISC method to the analysis of facial motion is straightforward. Here we demonstrate that the vector diagrams produced by this method for facial images are directly correlated with the underlying muscle structure, which is unique to an individual and is not affected by lighting or make-up. Furthermore, we show that this method can also be used for medical diagnosis in the early detection of facial paralysis and other skin disorders.
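    The DISC approach described above can be approximated with ordinary block matching: track how small skin-pore patches move between two frames and collect the displacements into a vector field. A minimal sketch under stated assumptions (random frames stand in for high-resolution skin images; this is generic normalized cross-correlation, not the authors' implementation):

```python
import numpy as np

def patch_displacement(frame_a, frame_b, y, x, half=8, search=5):
    """Displacement (dy, dx) of the patch centred at (y, x) between two frames,
    found by exhaustive normalized cross-correlation over a small search window."""
    ref = frame_a[y-half:y+half, x-half:x+half].astype(float)
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)
    best, best_d = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame_b[y+dy-half:y+dy+half, x+dx-half:x+dx+half].astype(float)
            cand = (cand - cand.mean()) / (cand.std() + 1e-9)
            score = float((ref * cand).mean())
            if score > best:
                best, best_d = score, (dy, dx)
    return best_d

rng = np.random.default_rng(0)
frame_a = rng.random((128, 128))
frame_b = np.roll(frame_a, (2, -1), axis=(0, 1))     # known shift of (2, -1)
# Sample a sparse grid of patches to build the motion vector field.
field = {(y, x): patch_displacement(frame_a, frame_b, y, x)
         for y in range(20, 120, 30) for x in range(20, 120, 30)}
print(field[(50, 50)])   # -> (2, -1)
```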

  9. Brain correlates of musical and facial emotion recognition: evidence from the dementias.

    Science.gov (United States)

    Hsieh, S; Hornberger, M; Piguet, O; Hodges, J R

    2012-07-01

    The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided brain atrophy in areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also experience difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from classical or film repertoire but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), Alzheimer's disease (n = 12) and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities: unfamiliar musical tunes and unknown faces as well as volumetric MRI. Patients with SD were most impaired with the recognition of facial and musical emotions, particularly for negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotions by facial and musical stimuli but also indicate that the recognition of emotions from music draws upon brain regions that are associated with semantics in language. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Age, gender, and puberty influence the development of facial emotion recognition.

    Science.gov (United States)

    Lawrence, Kate; Campbell, Ruth; Skuse, David

    2015-01-01

    Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children's ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6-16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modeled these cross-sectional data in terms of competence in accurate recognition of the six emotions studied, when the positive correlation between emotion recognition and IQ was controlled. Significant linear trends were seen in children's ability to recognize facial expressions of happiness, surprise, fear, and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there is little or no change in accuracy over the age range 6-16 years; near-adult levels of competence are established by middle-childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers.

  11. Age, gender and puberty influence the development of facial emotion recognition

    Directory of Open Access Journals (Sweden)

    Kate eLawrence

    2015-06-01

    Full Text Available Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children’s ability to recognise simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6-16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modelled these cross-sectional data in terms of competence in accurate recognition of the six emotions studied, when the positive correlation between emotion recognition and IQ was controlled. Significant linear trends were seen in children’s ability to recognise facial expressions of happiness, surprise, fear and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there is little or no change in accuracy over the age range 6-16 years; near-adult levels of competence are established by middle-childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers.

  12. Age, gender, and puberty influence the development of facial emotion recognition

    Science.gov (United States)

    Lawrence, Kate; Campbell, Ruth; Skuse, David

    2015-01-01

    Our ability to differentiate between simple facial expressions of emotion develops between infancy and early adulthood, yet few studies have explored the developmental trajectory of emotion recognition using a single methodology across a wide age-range. We investigated the development of emotion recognition abilities through childhood and adolescence, testing the hypothesis that children’s ability to recognize simple emotions is modulated by chronological age, pubertal stage and gender. In order to establish norms, we assessed 478 children aged 6–16 years, using the Ekman-Friesen Pictures of Facial Affect. We then modeled these cross-sectional data in terms of competence in accurate recognition of the six emotions studied, when the positive correlation between emotion recognition and IQ was controlled. Significant linear trends were seen in children’s ability to recognize facial expressions of happiness, surprise, fear, and disgust; there was improvement with increasing age. In contrast, for sad and angry expressions there is little or no change in accuracy over the age range 6–16 years; near-adult levels of competence are established by middle-childhood. In a sampled subset, pubertal status influenced the ability to recognize facial expressions of disgust and anger; there was an increase in competence from mid to late puberty, which occurred independently of age. A small female advantage was found in the recognition of some facial expressions. The normative data provided in this study will aid clinicians and researchers in assessing the emotion recognition abilities of children and will facilitate the identification of abnormalities in a skill that is often impaired in neurodevelopmental disorders. If emotion recognition abilities are a good model with which to understand adolescent development, then these results could have implications for the education, mental health provision and legal treatment of teenagers. PMID:26136697

  13. Recognition of children on age-different images: Facial morphology and age-stable features.

    Science.gov (United States)

    Caplova, Zuzana; Compassi, Valentina; Giancola, Silvio; Gibelli, Daniele M; Obertová, Zuzana; Poppa, Pasquale; Sala, Remo; Sforza, Chiarella; Cattaneo, Cristina

    2017-07-01

    The situation of missing children is one of the most emotional social issues worldwide. The search for and identification of missing children is often hampered, among other things, by the fact that the facial morphology of long-term missing children changes as they grow. Nowadays, the wide coverage provided by surveillance systems potentially supplies image material for comparisons with images of missing children that may facilitate identification. The aim of the study was to identify whether facial features are stable in time and can be utilized for facial recognition by comparing facial images of children at different ages, as well as to test the possible use of moles in recognition. The study was divided into two phases: (1) morphological classification of facial features using an Anthropological Atlas; (2) an algorithm developed in MATLAB® R2014b for assessing the use of moles as age-stable features. The assessment of facial features by Anthropological Atlases showed high mismatch percentages among observers. On average, the mismatch percentages were lower for features describing shape than for those describing size. The nose tip cleft and the chin dimple showed the best agreement between observers regarding both categorization and stability over time. Using the position of moles as reference points for recognition of the same person on age-different images seems to be a useful method in terms of objectivity, and it can be concluded that moles represent age-stable facial features that may be considered for preliminary recognition. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.
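    The record's MATLAB® routine is not reproduced here, but its core idea, using mole positions as age-stable reference points, can be sketched as a nearest-neighbour overlap score between two normalized point sets. The coordinates, tolerance and function name below are illustrative assumptions:

```python
import numpy as np
from scipy.spatial.distance import cdist

def mole_match_score(moles_a, moles_b, tol=0.03):
    """Fraction of moles in face A with a counterpart in face B within `tol`;
    coordinates are assumed normalized (e.g., by interocular distance)."""
    if len(moles_a) == 0 or len(moles_b) == 0:
        return 0.0
    d = cdist(np.asarray(moles_a), np.asarray(moles_b))
    return float((d.min(axis=1) <= tol).mean())

# Hypothetical normalized mole positions on images of the same child, years apart.
age_6 = [(0.42, 0.61), (0.55, 0.70)]
age_10 = [(0.43, 0.60), (0.56, 0.71), (0.30, 0.80)]   # one new mole appeared
print(mole_match_score(age_6, age_10))   # -> 1.0: both earlier moles recovered
```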

  14. Facial Expression Recognition Using 3D Convolutional Neural Network

    Directory of Open Access Journals (Sweden)

    Young-Hyen Byeon

    2014-12-01

    Full Text Available This paper is concerned with video-based facial expression recognition, frequently used in conjunction with HRI (Human-Robot Interaction) so that humans and robots can interact naturally. For this purpose, we design a 3D-CNN (3D Convolutional Neural Network) augmented with dimensionality reduction methods such as PCA (Principal Component Analysis) and TMPCA (Tensor-based Multilinear Principal Component Analysis) to recognize successive frames of facial expression images obtained through a video camera simultaneously. The 3D-CNN can achieve some degree of shift and deformation invariance using local receptive fields and spatial subsampling, through dimensionality reduction of the redundant CNN output. Experimental results on a video-based facial expression database reveal that the presented method performs well in comparison to conventional methods such as PCA and TMPCA.
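    For orientation, here is a minimal 3D CNN of the kind the record describes, written in PyTorch (an assumption; the paper does not state a framework) and omitting the PCA/TMPCA dimensionality-reduction stages the authors add:

```python
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    """Minimal 3D CNN over short grayscale clips of shape (N, 1, frames, H, W)."""
    def __init__(self, n_classes=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # halves frames, height and width
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(16 * 4 * 16 * 16, n_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

clip = torch.randn(2, 1, 16, 64, 64)   # two 16-frame 64x64 clips
print(Tiny3DCNN()(clip).shape)         # -> torch.Size([2, 6])
```

    The third pooled dimension is time, which is what lets the network respond to motion across successive frames rather than to single still images.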

  15. Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Cheng-Yuan Shih

    2010-01-01

    Full Text Available This paper presents a novel and effective method for facial expression recognition covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, an informative and nonredundant subset of the full Gabor feature set. The proposed RDAB algorithm uses RDA as the learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA), solving the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
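    The RDA core named in this record, a Friedman-style blend of QDA's per-class covariance with LDA's pooled covariance plus shrinkage toward the identity, can be sketched as follows. Only the classifier is shown; the paper's boosting wrapper, entropy-based Gabor feature selection and PSO parameter search are omitted, and the alpha/gamma parameter names are assumptions:

```python
import numpy as np

class RDA:
    """Regularized discriminant analysis: class covariance (QDA) blended with the
    pooled covariance (LDA) by alpha, then shrunk toward the identity by gamma."""
    def __init__(self, alpha=0.5, gamma=0.1):
        self.alpha, self.gamma = alpha, gamma

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        pooled = np.cov(X.T)
        p = X.shape[1]
        self.params_ = {}
        for c in self.classes_:
            Xc = X[y == c]
            S = (1 - self.alpha) * np.cov(Xc.T) + self.alpha * pooled
            S = (1 - self.gamma) * S + self.gamma * (np.trace(S) / p) * np.eye(p)
            self.params_[c] = (Xc.mean(axis=0), np.linalg.inv(S),
                               np.linalg.slogdet(S)[1], np.log(len(Xc) / len(X)))
        return self

    def predict(self, X):
        scores = []
        for c in self.classes_:
            mu, Sinv, logdet, logprior = self.params_[c]
            d = X - mu
            # Gaussian discriminant score per sample (quadratic form via einsum).
            scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, Sinv, d) + logdet)
                          + logprior)
        return self.classes_[np.argmax(scores, axis=0)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(2, 1, (40, 5))])
y = np.array([0] * 40 + [1] * 40)
print((RDA(alpha=0.5, gamma=0.1).fit(X, y).predict(X) == y).mean())
```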

  16. Detecting subtle facial emotion recognition deficits in high-functioning Autism using dynamic stimuli of varying intensities.

    Science.gov (United States)

    Law Smith, Miriam J; Montagne, Barbara; Perrett, David I; Gill, Michael; Gallagher, Louise

    2010-07-01

    Autism Spectrum Disorders (ASD) are characterised by social and communication impairment, yet evidence for deficits in the ability to recognise facial expressions of basic emotions is conflicting. Many studies reporting no deficits have used stimuli that may be too simple (with associated ceiling effects), for example, 100% 'full-blown' expressions. In order to investigate subtle deficits in facial emotion recognition, 21 adolescent males with high-functioning Autism Spectrum Disorders (ASD) and 16 age- and IQ-matched typically developing control males completed a new sensitive test of facial emotion recognition which uses dynamic stimuli of varying intensities of expressions of the six basic emotions (Emotion Recognition Test; Montagne et al., 2007). Participants with ASD were found to be less accurate at processing the basic emotional expressions of disgust, anger and surprise; disgust recognition was most impaired, at 100% intensity and at lower levels, whereas recognition of surprise and anger was intact at 100% but impaired at lower levels of intensity.

  17. The Reliability of Facial Recognition of Deceased Persons on Photographs.

    Science.gov (United States)

    Caplova, Zuzana; Obertova, Zuzana; Gibelli, Daniele M; Mazzarelli, Debora; Fracasso, Tony; Vanezis, Peter; Sforza, Chiarella; Cattaneo, Cristina

    2017-09-01

    In humanitarian emergencies, such as the current deceased migrants in the Mediterranean, antemortem documentation needed for identification may be limited. The use of visual identification has been previously reported in cases of mass disasters such as the Thai tsunami. This pilot study explores the ability of observers to match unfamiliar faces of living and dead persons and whether facial morphology can be used for identification. A questionnaire was given to 41 students and five professionals in the field of forensic identification, with the task of choosing whether a facial photograph corresponds to one of the five photographs in a lineup and identifying the most useful features used for recognition. Although the overall recognition score did not significantly differ between professionals and students, the median scores of 78.1% and 80.0%, respectively, were too low to consider this method a reliable identification method; it thus needs to be supported by other means. © 2017 American Academy of Forensic Sciences.

  18. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    Science.gov (United States)

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain.

  19. Developmental differences in holistic interference of facial part recognition.

    Directory of Open Access Journals (Sweden)

    Kazuyo Nakabayashi

    Full Text Available Research has shown that adults' recognition of a facial part can be disrupted if the part is learnt without a face context but tested in a whole face. This has been interpreted as the holistic interference effect. The present study investigated whether children aged 6 and 9-10 years would show a similar effect. Participants were asked to judge whether a probe part was the same as or different from a test part, where the part was presented either in isolation or in a whole face. The results showed that while all the groups were susceptible to holistic interference, the youngest group was most severely affected. Contrary to the view that piecemeal processing precedes holistic processing in cognitive development, our findings demonstrate that holistic processing is already present at 6 years of age. It is the ability to inhibit the influence of holistic information on piecemeal processing that seems to require a longer period of development, extending into older childhood and adulthood.

  20. Developmental Differences in Holistic Interference of Facial Part Recognition

    Science.gov (United States)

    Nakabayashi, Kazuyo; Liu, Chang Hong

    2013-01-01

    Research has shown that adults’ recognition of a facial part can be disrupted if the part is learnt without a face context but tested in a whole face. This has been interpreted as the holistic interference effect. The present study investigated whether children aged 6 and 9–10 years would show a similar effect. Participants were asked to judge whether a probe part was the same as or different from a test part, where the part was presented either in isolation or in a whole face. The results showed that while all the groups were susceptible to holistic interference, the youngest group was most severely affected. Contrary to the view that piecemeal processing precedes holistic processing in cognitive development, our findings demonstrate that holistic processing is already present at 6 years of age. It is the ability to inhibit the influence of holistic information on piecemeal processing that seems to require a longer period of development, extending into older childhood and adulthood. PMID:24204847

  1. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to enhance the robustness of facial expression recognition, we propose a method based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method extracts features with the improved LTP, then uses an improved deep belief network as the detector and classifier of those features, realizing the combination of the improved LTP and the deep network in facial expression recognition. The recognition rate on the CK+ database improves significantly.
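    For reference, the standard (unimproved) Local Ternary Pattern that this method builds on thresholds each 3x3 neighbourhood into {-1, 0, +1} around the centre pixel and splits the result into two LBP-style binary codes. A minimal numpy sketch, with the threshold t as an assumed parameter:

```python
import numpy as np

def ltp_codes(img, t=5):
    """Basic 3x3 local ternary pattern, split into upper/lower LBP-style codes.
    Histogram these per cell and concatenate to build the feature vector."""
    img = img.astype(np.int32)
    c = img[1:-1, 1:-1]                       # centre pixels (interior only)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        upper |= ((n > c + t).astype(np.int32) << bit)   # +1 codes
        lower |= ((n < c - t).astype(np.int32) << bit)   # -1 codes
    return upper, lower

rng = np.random.default_rng(0)
u, l = ltp_codes(rng.integers(0, 256, (64, 64)))
print(u.shape, int(u.max()) <= 255, int(l.max()) <= 255)   # (62, 62) True True
```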

  2. Facial Expression Recognition Deficits and Faulty Learning: Implications for Theoretical Models and Clinical Applications

    Science.gov (United States)

    Sheaffer, Beverly L.; Golden, Jeannie A.; Averett, Paige

    2009-01-01

    The ability to recognize facial expressions of emotion is integral in social interaction. Although the importance of facial expression recognition is reflected in increased research interest as well as in popular culture, clinicians may know little about this topic. The purpose of this article is to discuss facial expression recognition literature…

  3. Misreading the facial signs: specific impairments and error patterns in recognition of facial emotions with negative valence in borderline personality disorder.

    Science.gov (United States)

    Unoka, Zsolt; Fogd, Dóra; Füzy, Melinda; Csukly, Gábor

    2011-10-30

    Patients with borderline personality disorder (BPD) exhibit impairment in labeling of facial emotional expressions. However, it is not clear whether these deficits affect the whole domain of basic emotions, are valence-specific, or are specific to individual emotions. Whether BPD patients' errors in a facial emotion recognition task create a specific pattern also remains to be elucidated. Our study tested two hypotheses: first, that the emotion perception impairment in borderline personality disorder is specific to the negative emotion domain; second, that BPD patients would show error patterns in a facial emotion recognition task more commonly and more systematically than healthy comparison subjects. Participants comprised 33 inpatients with BPD and 32 matched healthy control subjects who performed a computerized version of the Ekman 60 Faces test. The indices of emotion recognition and the direction of errors were processed in separate analyses. Clinical symptoms and personality functioning were assessed using the Symptom Checklist-90-Revised and the Young Schema Questionnaire Long Form. Results showed that patients with BPD were less accurate than control participants in emotion recognition, in particular in the discrimination of negative emotions, while they were not impaired in the recognition of happy facial expressions. In addition, patients over-attributed disgust and surprise and under-attributed fear to the facial expressions relative to controls. These findings suggest the importance of carefully considering error patterns, besides measuring recognition accuracy, especially among emotions with negative affective valence, when assessing facial affect recognition in BPD. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  4. Detecting facial emotion recognition deficits in schizophrenia using dynamic stimuli of varying intensities.

    Science.gov (United States)

    Hargreaves, A; Mothersill, O; Anderson, M; Lawless, S; Corvin, A; Donohoe, G

    2016-10-28

    Deficits in facial emotion recognition have been associated with functional impairments in patients with Schizophrenia (SZ). Whilst a strong ecological argument has been made for the use of both dynamic facial expressions and varied emotion intensities in research, SZ emotion recognition studies to date have primarily used static stimuli of a singular, 100%, intensity of emotion. To address this issue, the present study aimed to investigate accuracy of emotion recognition amongst patients with SZ and healthy subjects using dynamic facial emotion stimuli of varying intensities. To this end an emotion recognition task (ERT) designed by Montagne (2007) was adapted and employed. 47 patients with a DSM-IV diagnosis of SZ and 51 healthy participants were assessed for emotion recognition. Results of the ERT were tested for correlation with performance in areas of cognitive ability typically found to be impaired in psychosis, including IQ, memory, attention and social cognition. Patients were found to perform less well than healthy participants at recognising each of the 6 emotions analysed. Surprisingly, however, groups did not differ in terms of impact of emotion intensity on recognition accuracy; for both groups higher intensity levels predicted greater accuracy, but no significant interaction between diagnosis and emotional intensity was found for any of the 6 emotions. Accuracy of emotion recognition was, however, more strongly correlated with cognition in the patient cohort. Whilst this study demonstrates the feasibility of using ecologically valid dynamic stimuli in the study of emotion recognition accuracy, varying the intensity of the emotion displayed was not demonstrated to impact patients and healthy participants differentially, and thus may not be a necessary variable to include in emotion recognition research. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  5. Privacy in the Face of Surveillance: Fourth Amendment Considerations for Facial Recognition Technology

    Science.gov (United States)

    2015-03-01

    [Abstract garbled in extraction; recoverable front matter: "Privacy in the Face of Surveillance: Fourth Amendment Considerations for Facial Recognition Technology," thesis by Eric Z. Wynn, Naval Postgraduate School, Monterey, March 2015. A surviving fragment lists the persistent challenges for facial recognition as "head rotation and tilt, lighting intensity and angle, facial expression, aging, etc."]

  6. Disrupting pre-SMA activity impairs facial happiness recognition: an event-related TMS study.

    Science.gov (United States)

    Rochas, Vincent; Gelmini, Lauriane; Krolak-Salmon, Pierre; Poulet, Emmanuel; Saoud, Mohamed; Brunelin, Jerome; Bediou, Benoit

    2013-07-01

    It has been suggested that the left pre-supplementary motor area (pre-SMA) could be implicated in facial emotion expression and recognition, especially for laughter/happiness. To test this hypothesis, in a single-blind, randomized crossover study, we investigated the impact of transcranial magnetic stimulation (TMS) on performances of 18 healthy participants during a facial emotion recognition task. Using a neuronavigation system based on T1-weighted magnetic resonance imaging of each participant, TMS (5 pulses, 10 Hz) was delivered over the pre-SMA or the vertex (control condition) in an event-related fashion after the presentation of happy, fear, and angry faces. Compared with performances during vertex stimulation, we observed that TMS applied over the left pre-SMA specifically disrupted facial happiness recognition (FHR). No difference was observed between the 2 conditions neither for fear and anger recognition nor for reaction times (RT). Thus, interfering with pre-SMA activity with event-related TMS after stimulus presentation produced a selective impairment in the recognition of happy faces. These findings provide new insights into the functional implication of the pre-SMA in FHR, which may rely on the mirror properties of pre-SMA neurons.

  7. Pain Recognition using Spatiotemporal Oriented Energy of Facial Muscles

    DEFF Research Database (Denmark)

    Irani, Ramin; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    Pain is a critical sign in many medical situations, and its automatic detection and recognition using computer vision techniques is of great importance. Exploiting the fact that pain is a spatiotemporal process, the proposed system employs steerable and separable filters to measure the energies released by the facial muscles during the pain process. The proposed system not only detects pain but also recognizes its level. Experimental results on the publicly available UNBC pain database show promising outcomes for automatic pain detection and recognition.
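    The spatiotemporal-energy idea can be approximated with separable Gaussian-derivative filters: squared derivative responses along x, y and t accumulate into a per-pixel motion-energy map. This is a loose stand-in for the steerable filter bank the record describes; the sigma value and the random test clip are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def oriented_energy(clip, sigma=1.5):
    """Separable Gaussian-derivative energies along t, y and x for a grayscale
    clip of shape (frames, H, W); a rough stand-in for a steerable filter bank."""
    clip = clip.astype(float)
    gt = gaussian_filter1d(clip, sigma, axis=0, order=1)   # temporal derivative
    gy = gaussian_filter1d(clip, sigma, axis=1, order=1)   # vertical derivative
    gx = gaussian_filter1d(clip, sigma, axis=2, order=1)   # horizontal derivative
    # Energy released over time in each spatial cell: summed squared responses.
    return (gt**2 + gy**2 + gx**2).sum(axis=0)

rng = np.random.default_rng(0)
energy_map = oriented_energy(rng.random((30, 64, 64)))
print(energy_map.shape)   # (64, 64); high values ~ strong facial-muscle motion
```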

  8. Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism?

    Science.gov (United States)

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J; Kilner, James

    2017-01-01

    Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups of participants. Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (Emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (Emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. For emotion recognition, PD patients scored significantly lower than HC on the Ekman total score and on the happiness, fear, anger and sadness sub-scores. In the emotion expressivity task, PD and HC differed significantly in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger. There was a significant positive correlation between facial emotion recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). PD patients showed difficulties in recognizing emotional facial…

  9. Facial Emotion Recognition and Expression in Parkinson’s Disease: An Emotional Mirror Mechanism?

    Science.gov (United States)

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J.; Kilner, James

    2017-01-01

    Background and aim Parkinson’s disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups of participants. Methods Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (Emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (Emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. Results For emotion recognition, PD patients scored significantly lower than HC on the Ekman total score and on the happiness, fear, anger and sadness sub-scores. In the emotion expressivity task, PD and HC differed significantly in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger. There was a significant positive correlation between facial emotion recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). Conclusions PD patients…

  10. Infrared facial recognition technology being pushed toward emerging applications

    Science.gov (United States)

    Evans, David C.

    1997-02-01

    Human identification is a two-step process of initial identity assignment and later verification or recognition. The positive identification requirement is a major part of the classic security, legal, banking, and police task of granting or denying access to a facility, granting authority to take an action or, in police work, identifying or verifying the identity of an individual. To meet this requirement, a three-part research and development (R&D) effort was undertaken by Betac International Corporation, through its subsidiaries Betac Corporation and Technology Recognition Systems, to develop an automated access control system using infrared (IR) facial images to verify the identity of an individual in real time. The system integrates IR facial imaging and a computer-based matching algorithm to perform the human recognition task rapidly, accurately, and nonintrusively, based on three basic principles: every human IR facial image (or thermogram) is unique to that individual; an IR camera can be used to capture human thermograms; and captured thermograms can be digitized, stored, and matched using a computer and mathematical algorithms. The first part of the development effort, an operator-assisted IR image matching proof-of-concept demonstration, was successfully completed in the spring of 1994. The second part of the R&D program, the design and evaluation of a prototype automated access control unit using the IR image matching technology, was completed in April 1995. This paper describes the final development effort to identify, assess, and evaluate the availability and suitability of robust image matching algorithms capable of supporting and enhancing the use of IR facial recognition technology. The most promising mature and available image matching algorithm was integrated into a demonstration access control unit (ACU) using a state-of-the-art IR imager, and its performance was evaluated against that of a prototype automated ACU using a less robust algorithm and a…

  11. Facial expression influences face identity recognition during the attentional blink.

    Science.gov (United States)

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppressed memory access for competing objects, but only angry facial expressions enjoyed privileged memory access. This could imply that these 2 processes are relatively independent of one another.

  12. Facial recognition software success rates for the identification of 3D surface reconstructed facial images: implications for patient privacy and security.

    Science.gov (United States)

    Mazura, Jan C; Juluru, Krishna; Chen, Joseph J; Morgan, Tara A; John, Majnu; Siegel, Eliot L

    2012-06-01

    Image de-identification has focused on the removal of textual protected health information (PHI). Surface reconstructions of the face have the potential to reveal a subject's identity even when textual PHI is absent. This study assessed the ability of a computer application to match research subjects' 3D facial reconstructions with conventional photographs of their face. In a prospective study, 29 subjects underwent CT scans of the head and had frontal digital photographs of their face taken. Facial reconstructions of each CT dataset were generated on a 3D workstation. In phase 1, photographs of the 29 subjects undergoing CT scans were added to a digital directory and tested for recognition using facial recognition software. In phases 2-4, additional photographs were added in groups of 50 to increase the pool of possible matches and the test for recognition was repeated. As an internal control, photographs of all subjects were tested for recognition against an identical photograph. Of 3D reconstructions, 27.5% were matched correctly to corresponding photographs (95% upper CL, 40.1%). All study subject photographs were matched correctly to identical photographs (95% lower CL, 88.6%). Of 3D reconstructions, 96.6% were recognized simply as a face by the software (95% lower CL, 83.5%). Facial recognition software has the potential to recognize features on 3D CT surface reconstructions and match these with photographs, with implications for PHI.
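    The matching experiment follows a standard gallery-search pattern: encode every photograph, then rank gallery entries by similarity to the probe (here, a rendering of the 3D reconstruction). A minimal sketch with random embeddings standing in for the output of any face encoder; the software used in the study is not reproduced:

```python
import numpy as np

def best_match(probe, gallery, names):
    """Cosine-similarity match of one probe embedding against a photo gallery.
    Embeddings would come from a face encoder; here they are random stand-ins."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe)
    sims = g @ p
    i = int(np.argmax(sims))
    return names[i], float(sims[i])

rng = np.random.default_rng(0)
# 29 subjects plus 100 added distractors, mirroring the study's growing gallery.
gallery = rng.normal(size=(129, 128))
names = [f"photo_{i:03d}" for i in range(129)]
probe = gallery[17] + 0.3 * rng.normal(size=128)   # noisy render of subject 17
print(best_match(probe, gallery, names))           # -> ('photo_017', ...)
```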

  13. Real Time Facial Expression Recognition Using a Novel Method

    Directory of Open Access Journals (Sweden)

    Saumil Srivastava

    2012-04-01

    Full Text Available This paper discusses a novel method for a Facial Expression Recognition System which performs facial expression analysis in near real time from a live webcam feed. The primary objectives were to obtain results in near real time in a light-invariant, person-independent and pose-invariant way. The system is composed of two different entities, a trainer and an evaluator. Each frame of the video feed is passed through a series of steps including Haar classifiers, skin detection, feature extraction, feature point tracking, and classification of emotions with a learned Support Vector Machine model, achieving a trade-off between accuracy and result rate. A processing time of 100-120 ms per 10 frames was achieved with an accuracy of around 60%. We measure our accuracy in terms of a variety of interaction and classification scenarios. We conclude by discussing the relevance of our work to human-computer interaction and exploring further measures that can be taken.
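    A skeleton of such a pipeline is sketched below, using OpenCV's stock Haar cascade for the detection step; the skin detection, feature-point tracking and trained SVM described in the paper are collapsed into a stub classifier, so the labels printed are placeholders:

```python
import cv2

# Haar-cascade face detection on a webcam feed, annotated per frame.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_expression(face_roi):
    # Stand-in for the paper's learned SVM over tracked feature points.
    return "neutral"

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        label = classify_expression(gray[y:y + h, x:x + w])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)
    cv2.imshow("expressions", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```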

  14. Impaired Facial Expression Recognition in Children with Temporal Lobe Epilepsy: Impact of Early Seizure Onset on Fear Recognition

    Science.gov (United States)

    Golouboff, Nathalie; Fiori, Nicole; Delalande, Olivier; Fohlen, Martine; Dellatolas, Georges; Jambaque, Isabelle

    2008-01-01

    The amygdala has been implicated in the recognition of facial emotions, especially fearful expressions, in adults with early-onset right temporal lobe epilepsy (TLE). The present study investigates the recognition of facial emotions in children and adolescents, 8-16 years old, with epilepsy. Twenty-nine subjects had TLE (13 right, 16 left) and…

  15. Impaired Facial Expression Recognition in Children with Temporal Lobe Epilepsy: Impact of Early Seizure Onset on Fear Recognition

    Science.gov (United States)

    Golouboff, Nathalie; Fiori, Nicole; Delalande, Olivier; Fohlen, Martine; Dellatolas, Georges; Jambaque, Isabelle

    2008-01-01

    The amygdala has been implicated in the recognition of facial emotions, especially fearful expressions, in adults with early-onset right temporal lobe epilepsy (TLE). The present study investigates the recognition of facial emotions in children and adolescents, 8-16 years old, with epilepsy. Twenty-nine subjects had TLE (13 right, 16 left) and…

  16. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    Science.gov (United States)

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development.

  17. Joint recognition-expression impairment of facial emotions in Huntington's disease despite intact understanding of feelings.

    Science.gov (United States)

    Trinkler, Iris; Cleret de Langavant, Laurent; Bachoud-Lévi, Anne-Catherine

    2013-02-01

    Patients with Huntington's disease (HD), a neurodegenerative disorder that causes major motor impairments, also show cognitive and emotional deficits. While their deficit in recognising emotions has been explored in depth, little is known about their ability to express emotions and understand their feelings. If these faculties were impaired, patients might not only mis-read emotion expressions in others but their own emotions might be mis-interpreted by others as well, or thirdly, they might have difficulties understanding and describing their feelings. We compared the performance of recognition and expression of facial emotions in 13 HD patients with mild motor impairments but without significant bucco-facial abnormalities, and 13 controls matched for age and education. Emotion recognition was investigated in a forced-choice recognition test (FCR), and emotion expression by filming participants while they mimed the six basic emotional facial expressions (anger, disgust, fear, surprise, sadness and joy) to the experimenter. The films were then segmented into 60 stimuli per participant and four external raters performed a FCR on this material. Further, we tested understanding of feelings in self (alexithymia) and others (empathy) using questionnaires. Both recognition and expression were impaired across different emotions in HD compared to controls and recognition and expression scores were correlated. By contrast, alexithymia and empathy scores were very similar in HD and controls. This might suggest that emotion deficits in HD might be tied to the expression itself. Because similar emotion recognition-expression deficits are also found in Parkinson's Disease and vascular lesions of the striatum, our results further confirm the importance of the striatum for emotion recognition and expression, while access to the meaning of feelings relies on a different brain network, and is spared in HD.

  18. Effects of exposure to facial expression variation in face learning and recognition.

    Science.gov (United States)

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  19. Recognition of facial emotion and affective prosody in children with ASD (+ADHD) and their unaffected siblings.

    Science.gov (United States)

    Oerlemans, Anoek M; van der Meer, Jolanda M J; van Steijn, Daphne J; de Ruiter, Saskia W; de Bruijn, Yvette G E; de Sonneville, Leo M J; Buitelaar, Jan K; Rommelse, Nanda N J

    2014-05-01

    Autism is a highly heritable and clinically heterogeneous neuropsychiatric disorder that frequently co-occurs with other psychopathologies, such as attention-deficit/hyperactivity disorder (ADHD). An approach to parse heterogeneity is by forming more homogeneous subgroups of autism spectrum disorder (ASD) patients based on their underlying, heritable cognitive vulnerabilities (endophenotypes). Emotion recognition is a likely endophenotypic candidate for ASD and possibly for ADHD. Therefore, this study aimed to examine whether emotion recognition is a viable endophenotypic candidate for ASD and to assess the impact of comorbid ADHD in this context. A total of 90 children with ASD (43 with and 47 without ADHD), 79 ASD unaffected siblings, and 139 controls aged 6-13 years, were included to test recognition of facial emotion and affective prosody. Our results revealed that the recognition of both facial emotion and affective prosody was impaired in children with ASD and aggravated by the presence of ADHD. The latter could only be partly explained by typical ADHD cognitive deficits, such as inhibitory and attentional problems. The performance of unaffected siblings could overall be considered at an intermediate level, performing somewhat worse than the controls and better than the ASD probands. Our findings suggest that emotion recognition might be a viable endophenotype in ASD and a fruitful target in future family studies of the genetic contribution to ASD and comorbid ADHD. Furthermore, our results suggest that children with comorbid ASD and ADHD are at highest risk for emotion recognition problems.

  20. Recognition of facial expressions of emotion in panic disorder.

    Science.gov (United States)

    Cai, Liqiang; Chen, Wanzhen; Shen, Yuedi; Wang, Xinling; Wei, Lili; Zhang, Yingchun; Wang, Wei; Chen, Wei

    2012-01-01

    Whether patients with panic disorder behave differently when recognizing facial expressions of emotion remains unsettled. We tested 21 outpatients with panic disorder and 34 healthy subjects with a photo set from the Matsumoto and Ekman Japanese and Caucasian facial expressions of emotion, which includes anger, contempt, disgust, fear, happiness, sadness, and surprise. Compared to the healthy subjects, patients showed lower accuracy when recognizing disgust and fear, but higher accuracy when recognizing surprise. These results suggest that the altered specificity to these emotions leads to self-awareness mechanisms that prevent further emotional reactions in panic disorder patients. Copyright © 2012 S. Karger AG, Basel.

  1. Comparing Facial Emotional Recognition in Patients with Borderline Personality Disorder and Patients with Schizotypal Personality Disorder with a Normal Group

    Directory of Open Access Journals (Sweden)

    Aida Farsham

    2017-04-01

    Full Text Available Objective: No research has compared facial emotion recognition in patients with borderline personality disorder (BPD) and schizotypal personality disorder (SPD). The present study aimed at comparing facial emotion recognition in these patients with that of the general population; the neurocognitive processing of emotions can reveal the pathologic style of these 2 disorders. Method: Twenty BPD patients, 16 SPD patients, and 20 healthy individuals were selected by the available sampling method. The Structured Clinical Interview for Axis II, the Millon Personality Inventory, the Beck Depression Inventory and a Facial Emotional Recognition Test were administered to all participants. Results: One-way ANOVA and Scheffe’s post hoc test revealed significant differences in neuropsychological assessment of facial emotion recognition between BPD and SPD patients and the normal group (p = 0.001). A significant difference was found in recognition of fear between BPD patients and the normal population (p = 0.008), and a significant difference was observed between SPD patients and the control group in recognition of wonder (p = 0.04). The results indicated a deficit in negative emotion recognition, especially for disgust; thus, it can be concluded that these patients share a similar neurocognitive profile in the emotion domain.

  2. The relationship between facial emotion recognition and executive functions in first-episode patients with schizophrenia and their siblings.

    Science.gov (United States)

    Yang, Chengqing; Zhang, Tianhong; Li, Zezhi; Heeramun-Aubeeluck, Anisha; Liu, Na; Huang, Nan; Zhang, Jie; He, Leiying; Li, Hui; Tang, Yingying; Chen, Fazhan; Liu, Fei; Wang, Jijun; Lu, Zheng

    2015-10-08

    Although many studies have examined executive functions and facial emotion recognition in people with schizophrenia, few of them have focused on the correlation between the two. Furthermore, their relationship in the siblings of patients also remains unclear. The aim of the present study was to examine the correlation between executive functions and facial emotion recognition in patients with first-episode schizophrenia and their siblings. Thirty patients with first-episode schizophrenia, twenty-six of their siblings, and thirty healthy controls were enrolled. They completed facial emotion recognition tasks using the Ekman Standard Faces Database, and executive functioning was measured by the Wisconsin Card Sorting Test (WCST). Hierarchical regression analysis was applied to assess the correlation between executive functions and facial emotion recognition. Our study found that in siblings, the accuracy in recognizing low-intensity 'disgust' was negatively correlated with the total correct rate in the WCST (r = -0.614, p = 0.023) but positively correlated with the total errors in the WCST (r = 0.623, p = 0.020); the accuracy in recognizing 'neutral' emotion was positively correlated with the total error rate in the WCST (r = 0.683, p = 0.014) and negatively correlated with the total correct rate in the WCST (r = -0.677, p = 0.017). People with schizophrenia showed an impairment in facial emotion recognition when identifying moderate 'happy' facial emotion, the accuracy of which was significantly correlated with the number of completed categories of the WCST (R(2) = 0.432); no comparable correlation between executive function and emotion recognition was found in the healthy control group. Our study demonstrated that facial emotion recognition impairment correlated with executive function impairment in people with schizophrenia and their unaffected siblings, but not in healthy controls.

  3. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions.

    Science.gov (United States)

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expression recognition task. Recognition bias was measured as participants' tendency to over-attribute the anger label to other negative facial expressions. Participants' heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants' performance was controlled for age, cognitive and educational levels and for naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, a significant effect of heart rate on participants' tendency to use the anger label was evidenced. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children's "pre-existing bias" for anger labeling in a forced-choice emotion recognition task. Moreover, they strengthen the thesis according to which the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim's perceptive and attentive focus on salient environmental social stimuli.

  4. Visual Scanning Patterns and Executive Function in Relation to Facial Emotion Recognition in Aging

    Science.gov (United States)

    Circelli, Karishma S.; Clark, Uraina S.; Cronin-Golomb, Alice

    2012-01-01

    Objective The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. Methods We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. Results OA were less accurate than YA at identifying fear. Performance on executive function tests was associated with recognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. Conclusion We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition. PMID:22616800

  5. A Modified Sparse Representation Method for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-01-01

    Full Text Available In this paper, we investigate a facial expression recognition method based on a modified sparse representation recognition (MSRR) approach. In the first stage, Haar-like+LPP is used to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of adopting the dictionary directly from samples, and add block dictionary training to the training process. In the third stage, StOMP (stagewise orthogonal matching pursuit) is used to speed up the convergence of OMP (orthogonal matching pursuit). In addition, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We evaluate the proposed method with respect to training samples, dimensionality, feature extraction and dimension-reduction methods, and noise, on a self-built database as well as the Japanese JAFFE and CMU CK databases. Further, we compare this sparse method with classic SVM and RVM classifiers and analyze recognition performance and time efficiency. The simulation results show that the coefficients produced by the MSRR method contain classifying information, which improves computing speed and achieves satisfying recognition results.
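
    The pipeline above is elaborate (LC-K-SVD dictionary learning, StOMP, dynamic regularization); the minimal sketch below shows only the core idea shared with SRC-style sparse classifiers, using scikit-learn's standard OMP over a dictionary of raw training columns. The solver choice and all names are assumptions for illustration, not the paper's method.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      def src_classify(D, labels, y, n_nonzero=10):
          """D: (d, n) column-stacked training features; labels: (n,); y: (d,) probe."""
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
          omp.fit(D, y)                      # sparse code of the probe over the dictionary
          coef = omp.coef_
          # Assign the class whose atoms best reconstruct the probe.
          residuals = {c: np.linalg.norm(y - D[:, labels == c] @ coef[labels == c])
                       for c in np.unique(labels)}
          return min(residuals, key=residuals.get)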

  6. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To verify the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
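
    A minimal sketch of the NNLS sparse-coding classifier idea, assuming features (e.g., LBP histograms) have already been extracted: the probe is coded as a non-negative combination of training columns with SciPy's nnls, and the class with the smallest class-wise reconstruction residual wins. Names are illustrative, not the paper's.

      import numpy as np
      from scipy.optimize import nnls

      def nnls_classify(D, labels, y):
          """D: (d, n) matrix of training feature columns; labels: (n,); y: (d,) probe."""
          coef, _ = nnls(D, y)               # non-negative least-squares code
          best_class, best_res = None, np.inf
          for c in np.unique(labels):
              mask = labels == c
              res = np.linalg.norm(y - D[:, mask] @ coef[mask])
              if res < best_res:
                  best_class, best_res = c, res
          return best_class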

  7. The Differential Effects of Thalamus and Basal Ganglia on Facial Emotion Recognition

    Science.gov (United States)

    Cheung, Crystal C. Y.; Lee, Tatia M. C.; Yip, James T. H.; King, Kristin E.; Li, Leonard S. W.

    2006-01-01

    This study examined if subcortical stroke was associated with impaired facial emotion recognition. Furthermore, the lateralization of the impairment and the differential profiles of facial emotion recognition deficits with localized thalamic or basal ganglia damage were also studied. Thirty-eight patients with subcortical strokes and 19 matched…

  8. Fearful faces in schizophrenia - The relationship between patient characteristics and facial affect recognition

    NARCIS (Netherlands)

    van't Wout, Mascha; van Dijke, Annemiek; Aleman, Andre; Kessels, Roy P. C.; Pijpers, Wietske; Kahn, Rene S.

    2007-01-01

    Although schizophrenia has often been associated with deficits in facial affect recognition, it is debated whether the recognition of specific emotions is affected and if these facial affect-processing deficits are related to symptomatology or other patient characteristics. The purpose of the present…

  10. Recognition of static and dynamic facial expressions: Influences of sex, type and intensity of emotion

    OpenAIRE

    2013-01-01

    Ecological validity of static and intense facial expressions in emotional recognition has been questioned. Recent studies have recommended the use of facial stimuli more compatible to the natural conditions of social interaction, which involves motion and variations in emotional intensity. In this study, we compared the recognition of static and dynamic facial expressions of happiness, fear, anger and sadness, presented in four emotional intensities (25 %, 50 %, 75 % and 100 %). Twenty volunt...

  11. Impaired recognition of musical emotions and facial expressions following anteromedial temporal lobe excision.

    Science.gov (United States)

    Gosselin, Nathalie; Peretz, Isabelle; Hasboun, Dominique; Baulac, Michel; Samson, Séverine

    2011-10-01

    In a prior study, we showed that an anteromedial temporal lobe resection can impair the recognition of scary music (Gosselin et al., 2005). In other studies (Adolphs et al., 2001; Anderson et al., 2000), similar results have been obtained with fearful facial expressions. These findings suggest that scary music and fearful faces may be processed by common cerebral structures. To assess this possibility, we tested patients with unilateral anteromedial temporal excision and normal controls on two emotional tasks. In the musical emotion task, stimuli evoked either fear, peacefulness, happiness or sadness, and participants were asked to rate to what extent each stimulus expressed these four emotions on 10-point scales. The facial emotion task included morphed stimuli whose expressions varied from faint to more pronounced and evoked fear, happiness, sadness, surprise, anger or disgust; participants were requested to select the appropriate label. Most patients were found to be impaired in the recognition of both scary music and fearful faces. Furthermore, the results in both tasks were correlated, suggesting a multimodal representation of fear within the amygdala. However, inspection of individual results showed that recognition of fearful faces can be preserved whereas recognition of scary music can be impaired. Such a dissociation, found in two cases, suggests that fear recognition in faces and in music does not necessarily involve exactly the same cerebral networks; this hypothesis is discussed in light of the current literature.

  12. Neuroanatomical correlates of impaired decision-making and facial emotion recognition in early Parkinson's disease.

    Science.gov (United States)

    Ibarretxe-Bilbao, Naroa; Junque, Carme; Tolosa, Eduardo; Marti, Maria-Jose; Valldeoriola, Francesc; Bargallo, Nuria; Zarei, Mojtaba

    2009-09-01

    Decision-making and recognition of emotions are often impaired in patients with Parkinson's disease (PD). The orbitofrontal cortex (OFC) and the amygdala are critical structures subserving these functions. This study was designed to test whether there are any structural changes in these areas that might explain the impairment of decision-making and recognition of facial emotions in early PD. We used the Iowa Gambling Task (IGT) and the Ekman 60 faces test, which are sensitive to OFC and amygdala dysfunction, in 24 early PD patients and 24 controls. High-resolution structural magnetic resonance images (MRI) were also obtained. Group analysis using voxel-based morphometry (VBM) showed significant, corrected grey matter (GM) volume reductions in these regions in patients, and these reductions were related to IGT and Ekman test performance in PD patients. We conclude that: (i) impairment in decision-making and recognition of facial emotions occurs at the early stages of PD, (ii) these neuropsychological deficits are accompanied by degeneration of the OFC and amygdala, and (iii) bilateral OFC reductions are associated with impaired recognition of emotions, and GM volume loss in left lateral OFC is related to decision-making impairment in PD.

  13. The effects of acute alcohol intoxication on the cognitive mechanisms underlying false facial recognition.

    Science.gov (United States)

    Colloff, Melissa F; Flowe, Heather D

    2016-06-01

    False face recognition rates are sometimes higher when faces are learned while under the influence of alcohol. Alcohol myopia theory (AMT) proposes that acute alcohol intoxication during face learning causes people to attend to only the most salient features of a face, impairing the encoding of less salient facial features. Yet, there is currently no direct evidence to support this claim. Our objective was to test whether acute alcohol intoxication impairs face learning by causing subjects to attend to a salient (i.e., distinctive) facial feature over other facial features, as per AMT. We employed a balanced placebo design (N = 100). Subjects in the alcohol group were dosed to achieve a blood alcohol concentration (BAC) of 0.06 %, whereas the no alcohol group consumed tonic water. Alcohol expectancy was controlled. Subjects studied faces with or without a distinctive feature (e.g., scar, piercing). An old-new recognition test followed. Some of the test faces were "old" (i.e., previously studied), and some were "new" (i.e., not previously studied). We varied whether the new test faces had a previously studied distinctive feature versus other familiar characteristics. Intoxicated and sober recognition accuracy was comparable, but subjects in the alcohol group made more positive identifications overall compared to the no alcohol group. The results are not in keeping with AMT. Rather, a more general cognitive mechanism appears to underlie false face recognition in intoxicated subjects. Specifically, acute alcohol intoxication during face learning results in more liberal choosing, perhaps because of an increased reliance on familiarity.

  14. Neurobiological mechanisms associated with facial affect recognition deficits after traumatic brain injury.

    Science.gov (United States)

    Neumann, Dawn; McDonald, Brenna C; West, John; Keiski, Michelle A; Wang, Yang

    2016-06-01

    The neurobiological mechanisms that underlie facial affect recognition deficits after traumatic brain injury (TBI) have not yet been identified. Using functional magnetic resonance imaging (fMRI), study aims were to 1) determine if there are differences in brain activation during facial affect processing in people with TBI who have facial affect recognition impairments (TBI-I) relative to people with TBI and healthy controls who do not have facial affect recognition impairments (TBI-N and HC, respectively); and 2) identify relationships between neural activity and facial affect recognition performance. A facial affect recognition screening task performed outside the scanner was used to determine group classification; TBI patients who performed greater than one standard deviation below normal performance scores were classified as TBI-I, while TBI patients with normal scores were classified as TBI-N. An fMRI facial recognition paradigm was then performed within the 3T environment. Results from 35 participants are reported (TBI-I = 11, TBI-N = 12, and HC = 12). For the fMRI task, TBI-I and TBI-N groups scored significantly lower than the HC group. Blood oxygenation level-dependent (BOLD) signals for facial affect recognition compared to a baseline condition of viewing a scrambled face, revealed lower neural activation in the right fusiform gyrus (FG) in the TBI-I group than the HC group. Right fusiform gyrus activity correlated with accuracy on the facial affect recognition tasks (both within and outside the scanner). Decreased FG activity suggests facial affect recognition deficits after TBI may be the result of impaired holistic face processing. Future directions and clinical implications are discussed.

  15. Recognition of emotion in facial expression by people with Prader-Willi syndrome.

    Science.gov (United States)

    Whittington, J; Holland, T

    2011-01-01

    People with Prader-Willi syndrome (PWS) may have mild intellectual impairments but less is known about their social cognition. Most parents/carers report that people with PWS do not have normal peer relationships, although some have older or younger friends. Two specific aspects of social cognition are being able to recognise other people's emotion and to then respond appropriately. In a previous study, mothers/carers thought that 26% of children and 23% of adults with PWS would not respond to others' feelings. They also thought that 64% could recognise happiness, sadness, anger and fear and a further 30% could recognise happiness and sadness. However, reports of emotion recognition and response to emotion were partially dissociated. It was therefore decided to test facial emotion recognition directly. The participants were 58 people of all ages with PWS. They were shown a total of 20 faces, each depicting one of the six basic emotions and asked to say what they thought that person was feeling. The faces were shown one at a time in random order and each was accompanied by a reminder of the six basic emotions. This cohort of people with PWS correctly identified 55% of the different facial emotions. These included 90% of happy faces, 55% each of sad and surprised faces, 43% of disgusted faces, 40% of angry faces and 37% of fearful faces. Genetic subtype differences were found only in the predictors of recognition scores, not in the scores themselves. Selective impairment was found in fear recognition for those with PWS who had had a depressive illness and in anger recognition for those with PWS who had had a psychotic illness. The inability to read facial expressions of emotion is a deficit in social cognition apparent in people with PWS. This may be a contributing factor in their difficulties with peer relationships. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.

  16. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    Science.gov (United States)

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

    Reliable detection of ordinary facial expressions (e.g. smile), despite variability among individuals as well as in face appearance, is an important step toward the realization of a perceptual user interface with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expressions. The results show reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.
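
    The original system is a multi-stage, rule-based detector; the toy PyTorch model below only illustrates the convolutional ingredient on a hypothetical 64x64 grayscale face crop, with a binary smile/non-smile output. The architecture and sizes are assumptions, not the paper's network.

      import torch
      import torch.nn as nn

      class SmileNet(nn.Module):
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 8, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(8, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Linear(16 * 16 * 16, 2)  # 64x64 input -> 16x16 maps

          def forward(self, x):
              return self.classifier(self.features(x).flatten(1))

      logits = SmileNet()(torch.randn(4, 1, 64, 64))  # 4 face crops -> (4, 2) scores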

  17. Facial expression recognition and histograms of oriented gradients: a comprehensive study.

    Science.gov (United States)

    Carcagnì, Pierluigi; Del Coco, Marco; Leo, Marco; Distante, Cosimo

    2015-01-01

    Automatic facial expression recognition (FER) is a topic of growing interest, mainly due to the rapid spread of assistive technology applications, such as human-robot interaction, where robust emotional awareness is key to accomplishing the assistive task. This paper proposes a comprehensive study of the application of the histogram of oriented gradients (HOG) descriptor to the FER problem, highlighting how this powerful technique can be effectively exploited for this purpose. In particular, the paper shows that a proper setting of the HOG parameters can make this descriptor one of the most suitable for characterizing facial expression peculiarities. A large experimental session, which can be divided into three phases, was carried out using a consolidated algorithmic pipeline. The first experimental phase was aimed at proving the suitability of the HOG descriptor for characterizing facial expression traits; to do this, a successful comparison with the most commonly used FER frameworks was carried out. In the second experimental phase, different publicly available facial datasets were used to test the system on images acquired in different conditions (e.g. image resolution, lighting conditions, etc.). As a final phase, a test on continuous data streams was carried out on-line in order to validate the system in real-world operating conditions that simulated real-time human-machine interaction.
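
    A sketch of the kind of pipeline the study examines: HOG descriptors from aligned face crops feeding a linear classifier. The parameter values below are common defaults, not the tuned settings the paper reports.

      import numpy as np
      from skimage.feature import hog
      from sklearn.svm import LinearSVC

      def hog_features(face):                # face: 64x64 grayscale array
          return hog(face, orientations=9,
                     pixels_per_cell=(8, 8), cells_per_block=(2, 2))

      def train_fer(faces, y):               # y: expression labels
          X = np.array([hog_features(f) for f in faces])
          return LinearSVC().fit(X, y)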

  18. INTEGRATED EXPRESSIONAL AND COLOR INVARIANT FACIAL RECOGNITION SCHEME FOR HUMAN BIOMETRIC SYSTEM

    Directory of Open Access Journals (Sweden)

    M.Punithavalli

    2013-09-01

    Full Text Available In many practical applications, such as biometrics, video surveillance and human-computer interaction, face recognition plays a major role. Previous works focused on recognizing and enhancing biometric systems based on the facial components of the system. In this work, we build an Integrated Expressional and Color Invariant Facial Recognition scheme for human biometric recognition, suited to public-participation areas with different security provisioning requirements. First, the features of the face are identified and processed using a Bayes classifier with RGB and HSV color bands. Second, psychological emotional variances are identified and linked with the respective human facial expressions based on the facial action coding system. Finally, an integrated expressional and color invariant facial recognition scheme is proposed for varied conditions of illumination, pose, transformation, etc. These conditions of the color invariant model suit easy and more efficient biometric recognition systems in the public domain and in highly confidential security zones. The integration is derived by genetic operations on the color and expression components of the facial feature system. Experimental evaluation is planned with public face databases (DBs) such as CMU-PIE, Color FERET, XM2VTSDB, SCface, and FRGC 2.0 to estimate the performance of the proposed integrated expressional facial and color invariant recognition scheme [IEFCIRS]. Performance evaluation is done based on criteria such as recognition rate, security and evaluation time.

  19. The telltale face: possible mechanisms behind defector and cooperator recognition revealed by emotional facial expression metrics.

    Science.gov (United States)

    Kovács-Bálint, Zsófia; Bereczkei, Tamás; Hernádi, István

    2013-11-01

    In this study, we investigated the role of facial cues in cooperator and defector recognition. First, a face image database was constructed from pairs of full face portraits of target subjects taken at the moment of decision-making in a prisoner's dilemma game (PDG) and in a preceding neutral task. Image pairs with no deficiencies (n = 67) were standardized for orientation and luminance. Then, confidence in defector and cooperator recognition was tested with image rating in a different group of lay judges (n = 62). Results indicate that (1) defectors were better recognized (58% vs. 47%), (2) they looked significantly different from cooperators, and (3) in the facial microexpression analysis, defection was strongly linked with depressed lower lips and less opened eyes. Significant correlation was found between the intensity of micromimics and the rating of images in the cooperator-defector dimension. In summary, facial expressions can be considered reliable indicators of momentary social dispositions in the PDG. Females may exhibit an evolutionarily based overestimation bias toward detecting social visual cues of the defector face.

  20. Recognition of facial emotion and perceived parental bonding styles in healthy volunteers and personality disorder patients.

    Science.gov (United States)

    Zheng, Leilei; Chai, Hao; Chen, Wanzhen; Yu, Rongrong; He, Wei; Jiang, Zhengyan; Yu, Shaohua; Li, Huichun; Wang, Wei

    2011-12-01

    Early parental bonding experiences play a role in emotion recognition and expression in later adulthood, and patients with personality disorder frequently experience inappropriate parental bonding styles; therefore, the aim of the present study was to explore whether parental bonding style is correlated with recognition of facial emotion in personality disorder patients. The Parental Bonding Instrument (PBI) and the Matsumoto and Ekman Japanese and Caucasian Facial Expressions of Emotion (JACFEE) photo set tests were administered to 289 participants. Patients scored lower on parental Care but higher on the parental Freedom Control and Autonomy Denial subscales, and they displayed less accuracy when recognizing contempt, disgust and happiness than the healthy volunteers. In healthy volunteers, maternal Autonomy Denial significantly predicted accuracy when recognizing fear, and maternal Care predicted the accuracy of recognizing sadness. In patients, paternal Care negatively predicted the accuracy of recognizing anger, paternal Freedom Control predicted the perceived intensity of contempt, and maternal Care predicted the accuracy of recognizing sadness and the intensity of disgust. Parental bonding styles have an impact on the decoding process and sensitivity when recognizing facial emotions, especially in personality disorder patients. © 2011 The Authors. Psychiatry and Clinical Neurosciences © 2011 Japanese Society of Psychiatry and Neurology.

  1. Facial Emotion Recognition by Persons with Mental Retardation: A Review of the Experimental Literature.

    Science.gov (United States)

    Rojahn, Johannes; And Others

    1995-01-01

    This literature review discusses 21 studies on facial emotion recognition by persons with mental retardation in terms of methodological characteristics, stimulus material, salient variables and their relation to recognition tasks, and emotion recognition deficits in mental retardation. A table provides comparative data on all 21 studies. (DB)

  2. Emotional facial expressions differentially influence predictions and performance for face recognition.

    Science.gov (United States)

    Nomi, Jason S; Rhodes, Matthew G; Cleary, Anne M

    2013-01-01

    This study examined how participants' predictions of future memory performance are influenced by emotional facial expressions. Participants made judgements of learning (JOLs) predicting the likelihood that they would correctly identify a face displaying a happy, angry, or neutral emotional expression in a future two-alternative forced-choice recognition test of identity (i.e., recognition that a person's face was seen before). JOLs were higher for studied faces with happy and angry emotional expressions than for neutral faces. However, neutral test faces with studied neutral expressions had significantly higher identity recognition rates than neutral test faces studied with happy or angry expressions. Thus, these data are the first to demonstrate that people believe happy and angry emotional expressions will lead to better identity recognition in the future relative to neutral expressions. This occurred despite the fact that neutral expressions elicited better identity recognition than happy and angry expressions. These findings contribute to the growing literature examining the interaction of cognition and emotion.

  3. Facial emotion recognition impairments are associated with brain volume abnormalities in individuals with HIV.

    Science.gov (United States)

    Clark, Uraina S; Walker, Keenan A; Cohen, Ronald A; Devlin, Kathryn N; Folkers, Anna M; Pina, Matthew J; Tashima, Karen T

    2015-04-01

    Impaired facial emotion recognition abilities in HIV+ patients are well documented, but little is known about the neural etiology of these difficulties. We examined the relation of facial emotion recognition abilities to regional brain volumes in 44 HIV-positive (HIV+) and 44 HIV-negative control (HC) adults. Volumes of structures implicated in HIV-associated neuropathology and emotion recognition were measured on MRI using an automated segmentation tool. Relative to HC, HIV+ patients demonstrated emotion recognition impairments for fearful expressions, reduced anterior cingulate cortex (ACC) volumes, and increased amygdala volumes. In the HIV+ group, fear recognition impairments correlated significantly with ACC, but not amygdala volumes. ACC reductions were also associated with lower nadir CD4 levels (i.e., greater HIV-disease severity). These findings extend our understanding of the neurobiological substrates underlying an essential social function, facial emotion recognition, in HIV+ individuals and implicate HIV-related ACC atrophy in the impairment of these abilities. PMID:25744868

  5. On Assisting a Visual-Facial Affect Recognition System with Keyboard-Stroke Pattern Information

    Science.gov (United States)

    Stathopoulou, I.-O.; Alepis, E.; Tsihrintzis, G. A.; Virvou, M.

    Towards realizing a multimodal affect recognition system, we are considering the advantages of assisting a visual-facial expression recognition system with keyboard-stroke pattern information. Our work is based on the assumption that the visual-facial and keyboard modalities are complementary to each other and that their combination can significantly improve the accuracy in affective user models. Specifically, we present and discuss the development and evaluation process of two corresponding affect recognition subsystems, with emphasis on the recognition of 6 basic emotional states, namely happiness, sadness, surprise, anger and disgust as well as the emotion-less state which we refer to as neutral. We find that emotion recognition by the visual-facial modality can be aided greatly by keyboard-stroke pattern information and the combination of the two modalities can lead to better results towards building a multimodal affect recognition system.
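
    One simple way to realize the combination described above is late fusion: each modality produces per-class probabilities over the six states, and a weighted average decides. The weighting below is an assumption for illustration, not the authors' scheme.

      import numpy as np

      STATES = ["happiness", "sadness", "surprise", "anger", "disgust", "neutral"]

      def fuse(p_face, p_keys, w_face=0.6):
          """p_face, p_keys: per-class probabilities over STATES."""
          p = w_face * np.asarray(p_face) + (1.0 - w_face) * np.asarray(p_keys)
          return STATES[int(np.argmax(p))]

      # Face model leans "neutral", keystroke model leans "sadness":
      print(fuse([.1, .2, .05, .05, .1, .5], [.05, .6, .05, .1, .1, .1]))  # sadness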

  6. Fully Automatic Recognition of the Temporal Phases of Facial Actions

    NARCIS (Netherlands)

    Valstar, M.F.; Pantic, Maja

    Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)]…

  7. A small-world network model of facial emotion recognition.

    Science.gov (United States)

    Takehara, Takuma; Ochiai, Fumio; Suzuki, Naoto

    2016-01-01

    Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the frameworks of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also describe why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks: one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is apparently different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
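
    The small-world claim can be checked mechanically: threshold the similarity ratings into a graph, then compare clustering and path length against a random graph of the same density. The NetworkX sketch below follows that recipe; the threshold and names are placeholders, not the study's procedure.

      import numpy as np
      import networkx as nx

      def small_world_stats(sim, threshold=0.5):
          """sim: (n, n) symmetric similarity matrix from paired comparisons."""
          n = sim.shape[0]
          G = nx.Graph((i, j) for i in range(n) for j in range(i + 1, n)
                       if sim[i, j] > threshold)
          L = nx.average_shortest_path_length(G)   # assumes G is connected
          C = nx.average_clustering(G)
          R = nx.gnm_random_graph(n, G.number_of_edges(), seed=0)
          # Small world: C much larger than the random graph's, with short L.
          return L, C, nx.average_clustering(R)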

  8. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    Science.gov (United States)

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  9. Assessing the Utility of a Virtual Environment for Enhancing Facial Affect Recognition in Adolescents with Autism

    Science.gov (United States)

    Bekele, Esubalew; Crittendon, Julie; Zheng, Zhi; Swanson, Amy; Weitlauf, Amy; Warren, Zachary; Sarkar, Nilanjan

    2014-01-01

    Teenagers with autism spectrum disorder (ASD) and age-matched controls participated in a dynamic facial affect recognition task within a virtual reality (VR) environment. Participants identified the emotion of a facial expression displayed at varied levels of intensity by a computer generated avatar. The system assessed performance (i.e.,…

  10. The Relation of Facial Affect Recognition and Empathy to Delinquency in Youth Offenders

    Science.gov (United States)

    Carr, Mary B.; Lutjemeier, John A.

    2005-01-01

    Associations among facial affect recognition, empathy, and self-reported delinquency were studied in a sample of 29 male youth offenders at a probation placement facility. Youth offenders were asked to recognize facial expressions of emotions from adult faces, child faces, and cartoon faces. Youth offenders also responded to a series of statements…

  11. Facial Action Unit Recognition using Temporal Templates and Particle Filtering with Factorized Likelihoods

    NARCIS (Netherlands)

    Valstar, Michel; Pantic, Maja; Patras, Ioannis

    2005-01-01

    Automatic recognition of human facial expressions is a challenging problem with many applications in human-computer interaction. Most of the existing facial expression analyzers succeed only in recognizing a few basic emotions, such as anger or happiness. In contrast, the system we wish to demonstrate…

  12. Shared Gaussian Process Latent Variable Model for Multi-view Facial Expression Recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2013-01-01

    Facial-expression data often appear in multiple views either due to head movements or the camera position. Existing methods for multi-view facial expression recognition perform classification of the target expressions either by using classifiers learned separately for each view or by using a single…

  13. Multi-output Laplacian Dynamic Ordinal Regression for Facial Expression Recognition and Intensity Estimation

    NARCIS (Netherlands)

    Rudovic, Ognjen; Pavlovic, Vladimir; Pantic, Maja

    2012-01-01

    Automated facial expression recognition has received increased attention over the past two decades. Existing works in the field usually do not encode either the temporal evolution or the intensity of the observed facial displays. They also fail to jointly model multidimensional (multi-class) continuous…

  14. Spatiotemporal Analysis of RGB-D-T Facial Images for Multimodal Pain Level Recognition

    DEFF Research Database (Denmark)

    Irani, Ramin; Nasrollahi, Kamal; Oliu Simon, Marc

    2015-01-01

    …facial images for pain detection and pain intensity level recognition. For this purpose, we extract energies released by facial pixels using a spatiotemporal filter. Experiments on a group of 12 elderly people applying the multimodal approach show that the proposed method successfully detects pain...

  15. Predicting the Accuracy of Facial Affect Recognition: The Interaction of Child Maltreatment and Intellectual Functioning

    Science.gov (United States)

    Shenk, Chad E.; Putnam, Frank W.; Noll, Jennie G.

    2013-01-01

    Previous research demonstrates that both child maltreatment and intellectual performance contribute uniquely to the accurate identification of facial affect by children and adolescents. The purpose of this study was to extend this research by examining whether child maltreatment affects the accuracy of facial recognition differently at varying…

  18. Discriminative shared Gaussian processes for multi-view and view-invariant facial expression recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2015-01-01

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers…

  20. The Change in Facial Emotion Recognition Ability in Inpatients with Treatment Resistant Schizophrenia After Electroconvulsive Therapy.

    Science.gov (United States)

    Dalkıran, Mihriban; Tasdemir, Akif; Salihoglu, Tamer; Emul, Murat; Duran, Alaattin; Ugur, Mufit; Yavuz, Ruhi

    2017-09-01

    People with schizophrenia have impairments in emotion recognition along with other social cognitive deficits. In the current study, we aimed to investigate the immediate benefits of ECT on facial emotion recognition ability. Thirty-two treatment-resistant patients with schizophrenia who had been indicated for ECT were enrolled in the study. The facial emotion stimuli were a set of 56 photographs that depicted seven basic emotions: sadness, anger, happiness, disgust, surprise, fear, and neutral faces. The average age of the participants was 33.4 ± 10.5 years. The rate of recognizing the disgusted facial expression increased significantly after ECT (p < 0.05), while no significant changes were found for the rest of the facial expressions (p > 0.05). After the ECT, response times for the fearful and happy facial expressions were significantly shorter (p < 0.05). Facial emotion recognition is an important social cognitive skill for social harmony, proper relationships and independent living. At the least, the ECT sessions do not seem to affect facial emotion recognition ability negatively, and they seem to improve identification of the disgusted facial emotion, which is related to dopamine-enriched regions of the brain.

  4. More Pronounced Deficits in Facial Emotion Recognition for Schizophrenia than Bipolar Disorder

    Science.gov (United States)

    Goghari, Vina M; Sponheim, Scott R

    2012-01-01

    Schizophrenia and bipolar disorder are typically separated in diagnostic systems. Behavioural, cognitive, and brain abnormalities associated with each disorder nonetheless overlap. We evaluated the diagnostic specificity of facial emotion recognition deficits in schizophrenia and bipolar disorder to determine whether select aspects of emotion recognition differed for the two disorders. The investigation used an experimental task that included the same facial images in an emotion recognition condition and an age recognition condition (to control for processes associated with general face recognition) in 27 schizophrenia patients, 16 bipolar I patients, and 30 controls. Schizophrenia and bipolar patients exhibited both shared and distinct aspects of facial emotion recognition deficits. Schizophrenia patients had deficits in recognizing angry facial expressions compared to healthy controls and bipolar patients. Compared to control participants, both schizophrenia and bipolar patients were more likely to mislabel facial expressions of anger as fear. Given that schizophrenia patients exhibited a deficit in emotion recognition for angry faces, which did not appear due to generalized perceptual and cognitive dysfunction, improving recognition of threat-related expression may be an important intervention target to improve social functioning in schizophrenia. PMID:23218816

  5. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variance of facial feature points exists irrespective of similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, an SVM, which is trained to recognize same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions: neutral, a smile, anger, and a scream. By assigning the input facial image to the expression whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous research and other fusion methods.
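
    A compact rendering of the score-level fusion idea: one SVM scores shape (feature-point ratio) vectors, another scores appearance vectors, and a third SVM classifies the pair of decision scores. This scikit-learn sketch compresses the paper's staged design; landmark detection via the active appearance model is assumed to happen upstream, and all names are illustrative.

      import numpy as np
      from sklearn.svm import SVC

      def train_fusion(X_shape, X_app, y):
          svm_shape = SVC().fit(X_shape, y)            # shape-based matcher
          svm_app = SVC().fit(X_app, y)                # appearance-based matcher
          scores = np.column_stack([svm_shape.decision_function(X_shape),
                                    svm_app.decision_function(X_app)])
          return svm_shape, svm_app, SVC().fit(scores, y)   # fusion stage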

  6. Facial Affect Recognition Training Through Telepractice: Two Case Studies of Individuals with Chronic Traumatic Brain Injury.

    Science.gov (United States)

    Williamson, John; Isaki, Emi

    2015-01-01

    The use of a modified Facial Affect Recognition (FAR) training to identify emotions was investigated with two case studies of adults with moderate to severe chronic (> five years) traumatic brain injury (TBI). The modified FAR training was administered via telepractice to target social communication skills. Therapy consisted of identifying emotions through static facial expressions, personally reflecting on those emotions, and identifying sarcasm and emotions within social stories and role-play. Pre- and post-therapy measures included static facial photos to identify emotion and the Prutting and Kirchner Pragmatic Protocol for social communication. Both participants with chronic TBI showed gains on identifying facial emotions on the static photos.

  7. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    Science.gov (United States)

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions.

  8. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    Science.gov (United States)

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding.

  9. Structural correlates of facial emotion recognition deficits in Parkinson's disease patients.

    Science.gov (United States)

    Baggio, H C; Segura, B; Ibarretxe-Bilbao, N; Valldeoriola, F; Marti, M J; Compta, Y; Tolosa, E; Junqué, C

    2012-07-01

    The ability to recognize facial emotion expressions, especially negative ones, is described to be impaired in Parkinson's disease (PD) patients. Previous neuroimaging work evaluating the neural substrate of facial emotion recognition (FER) in healthy and pathological subjects has mostly focused on functional changes. This study was designed to evaluate gray matter (GM) and white matter (WM) correlates of FER in a large sample of PD. Thirty-nine PD patients and 23 healthy controls (HC) were tested with the Ekman 60 test for FER and with magnetic resonance imaging. Effects of associated depressive symptoms were taken into account. In accordance with previous studies, PD patients performed significantly worse in recognizing sadness, anger and disgust. In PD patients, voxel-based morphometry analysis revealed areas of positive correlation between individual emotion recognition and GM volume: in the right orbitofrontal cortex, amygdala and postcentral gyrus and sadness identification; in the right occipital fusiform gyrus, ventral striatum and subgenual cortex and anger identification, and in the anterior cingulate cortex (ACC) and disgust identification. WM analysis through diffusion tensor imaging revealed significant positive correlations between fractional anisotropy levels in the frontal portion of the right inferior fronto-occipital fasciculus and the performance in the identification of sadness. These findings shed light on the structural neural bases of the deficits presented by PD patients in this skill. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Using Kinect for real-time emotion recognition via facial expressions

    Institute of Scientific and Technical Information of China (English)

    Qi-rong MAO; Xin-yu PAN; Yong-zhao ZHAN; Xiang-jun SHEN

    2015-01-01

    Emotion recognition via facial expressions (ERFE) has attracted a great deal of interest with recent advances in artificial intelligence and pattern recognition. Most studies are based on 2D images, and the resulting methods are usually computationally expensive. In this paper, we propose a real-time emotion recognition approach based on both 2D and 3D facial expression features captured by Kinect sensors. To capture the deformation of the 3D mesh during facial expression, we combine the features of animation units (AUs) and feature point positions (FPPs) tracked by Kinect. A fusion algorithm based on improved emotional profiles (IEPs) and maximum confidence is proposed to recognize emotions from these real-time facial expression features. Experiments on both an emotion dataset and real-time video show the superior performance of our method.
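
    At the feature level, the combination described above amounts to concatenating, per frame, the animation-unit activations and the flattened feature-point coordinates into one vector before classification. The sketch below assumes the Kinect capture happens upstream, and the k-NN classifier merely stands in for the paper's profile-based fusion rule.

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      def frame_vector(aus, fpps):
          """aus: (n_au,) activations; fpps: (n_points, 3) tracked coordinates."""
          return np.concatenate([aus, fpps.ravel()])

      def train(frames, labels):             # frames: list of (aus, fpps) pairs
          X = np.array([frame_vector(a, f) for a, f in frames])
          return KNeighborsClassifier(n_neighbors=3).fit(X, labels)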

  11. Forensic facial approximation assessment: can application of different average facial tissue depth data facilitate recognition and establish acceptable level of resemblance?

    Science.gov (United States)

    Herrera, Lara Maria; Strapasson, Raíssa Ananda Paim; da Silva, Jorge Vicente Lopes; Melani, Rodolfo Francisco Haltenhoff

    2016-09-01

    Facial soft tissue thicknesses (FSTT) are important guidelines for modeling faces from skulls. Amid so many FSTT datasets, forensic artists have to make a subjective choice of the dataset that best meets their needs. This study investigated the performance of four FSTT datasets in supporting recognition of, and resemblance to, living Brazilian individuals, as well as the performance of assessors in recognizing people according to sex and knowledge of Human Anatomy and Forensic Dentistry. Sixteen manual facial approximations (FAs) were constructed using three-dimensional (3D) prototypes of skulls (targets). The American method was chosen for the construction of the faces. One hundred and twenty participants evaluated all FAs by means of recognition and resemblance tests. This study showed higher proportions of recognition for FAs produced with FSTT data from cadavers than for those produced with medical imaging data. Targets were also considered more similar to FAs produced with FSTT data from cadavers. Nose and face shape, respectively, were considered the regions most similar to the targets. The sex of the assessors and their knowledge of Human Anatomy and Forensic Dentistry did not play a determinant role in achieving greater recognition rates. It was concluded that FSTT data obtained from imaging may not facilitate recognition or establish an acceptable level of resemblance. Grouping FSTT data by regions of the face, as proposed in this paper, may contribute to more accurate FAs.

  12. Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition.

    Science.gov (United States)

    Willis, Megan L; Murphy, Jillian M; Ridley, Nicole J; Vercammen, Ans

    2015-12-01

    The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS to responses on the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction.

  13. Facial Expression Recognition Based on Features Derived From the Distinct LBP and GLCM

    Directory of Open Access Journals (Sweden)

    Gorti Satyanarayana Murty

    2014-01-01

    Full Text Available Automatic recognition of facial expressions can be an important component of natural human-machine interfaces; it may also be used in behavioural science and in clinical practice. Although humans recognise facial expressions virtually without effort or delay, reliable expression recognition by machine is still a challenge. This paper presents recognition of facial expressions by integrating features derived from the Grey Level Co-occurrence Matrix (GLCM) with a new structural approach derived from distinct LBPs (DLBPs) on a 3 x 3 first-order compressed image (FCI). The proposed method precisely recognizes the 7 categories of expressions, i.e. neutral, happiness, sadness, surprise, anger, disgust and fear. The proposed method contains three phases. In the first phase, each 5 x 5 sub-image is compressed into a 3 x 3 sub-image. The second phase derives two distinct LBPs (DLBPs) using the triangular patterns between the upper and lower parts of the 3 x 3 sub-image. In the third phase, the GLCM is constructed based on the DLBPs, and feature parameters are evaluated for precise facial expression recognition. The derived DLBP is effective because it is integrated with the GLCM and provides better classification performance. The proposed method overcomes the disadvantages of statistical and formal LBP methods in estimating facial expressions. The experimental results demonstrate the effectiveness of the proposed method on facial expression recognition.
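
    For orientation, the stock skimage versions of the two feature families combined above look roughly as follows: a uniform-LBP histogram concatenated with Haralick-style GLCM statistics. The paper's distinct-LBP and compressed-image variants are replaced here by standard operators, so this is an approximation of the idea, not the method itself.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

      def lbp_glcm_features(gray):           # gray: 2-D uint8 face image
          lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
          hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
          glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                              levels=256, symmetric=True, normed=True)
          stats = [graycoprops(glcm, p).mean()
                   for p in ("contrast", "homogeneity", "energy", "correlation")]
          return np.concatenate([hist, stats])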

  14. Mapping correspondence between facial mimicry and emotion recognition in healthy subjects.

    Science.gov (United States)

    Ponari, Marta; Conson, Massimiliano; D'Amico, Nunzia Pina; Grossi, Dario; Trojano, Luigi

    2012-12-01

    We aimed at verifying the hypothesis that facial mimicry is causally and selectively involved in emotion recognition. For this purpose, in Experiment 1, we explored the effect of tonic contraction of muscles in the upper or lower half of participants' faces on their ability to recognize emotional facial expressions. We found that the "lower" manipulation specifically impaired recognition of happiness and disgust, the "upper" manipulation impaired recognition of anger, while both manipulations affected recognition of fear; recognition of surprise and sadness was not affected by either blocking manipulation. In Experiment 2, we verified whether emotion recognition is hampered by stimuli in which an upper or lower half-face showing an emotional expression is combined with a neutral half-face. We found that the neutral lower half-face interfered with recognition of happiness and disgust, whereas the neutral upper half impaired recognition of anger; recognition of fear and sadness was impaired by both manipulations, whereas recognition of surprise was not affected by either manipulation. Taken together, the present findings support simulation models of emotion recognition and provide insight into the role of mimicry in the comprehension of others' emotional facial expressions.

  15. The Moving Window Technique: A Window into Developmental Changes in Attention during Facial Emotion Recognition

    Science.gov (United States)

    Birmingham, Elina; Meixner, Tamara; Iarocci, Grace; Kanan, Christopher; Smilek, Daniel; Tanaka, James W.

    2013-01-01

    The strategies children employ to selectively attend to different parts of the face may reflect important developmental changes in facial emotion recognition. Using the Moving Window Technique (MWT), children aged 5-12 years and adults ("N" = 129) explored faces with a mouse-controlled window in an emotion recognition task. An…

  16. Facial expressions of emotions: recognition accuracy and affective reactions during late childhood.

    Science.gov (United States)

    Mancini, Giacomo; Agnoli, Sergio; Baldaro, Bruno; Bitti, Pio E Ricci; Surcinelli, Paola

    2013-01-01

    The present study examined the development of recognition ability and affective reactions to emotional facial expressions in a large sample of school-aged children (n = 504, ages 8-11 years). Specifically, the study aimed to investigate whether changes in emotion recognition ability and in the affective reactions associated with the viewing of facial expressions occur during late childhood. Moreover, because small but robust gender differences during late childhood have been proposed, the effects of gender on the development of emotion recognition and affective responses were examined. The results showed an overall increase in emotional face recognition ability from 8 to 11 years of age, particularly for neutral and sad expressions. However, the increase in sadness recognition was primarily due to the development of this recognition in boys. Moreover, our results indicate different developmental trends in males and females regarding the recognition of disgust. Last, developmental changes in affective reactions to emotional facial expressions were found. Whereas recognition ability increased over the developmental time period studied, affective reactions elicited by facial expressions were characterized by a decrease in arousal over the course of late childhood.

  17. Overview of impaired facial affect recognition in persons with traumatic brain injury.

    Science.gov (United States)

    Radice-Neumann, Dawn; Zupan, Barbra; Babbage, Duncan R; Willer, Barry

    2007-07-01

    To review the literature of affect recognition for persons with traumatic brain injury (TBI). It is suggested that impairment of affect recognition could be a significant problem for the TBI population and treatment strategies are recommended based on research for persons with autism. Research demonstrates that persons with TBI often have difficulty determining emotion from facial expressions. Studies show that poor interpersonal skills, which are associated with impaired affect recognition, are linked to a variety of negative outcomes. Theories suggest that facial affect recognition is achieved by interpreting important facial features and processing one's own emotions. These skills are often affected by TBI, depending on the areas damaged. Affect recognition impairments have also been identified in persons with autism. Successful interventions have already been developed for the autism population. Comparable neuroanatomical and behavioural findings between TBI and autism suggest that treatment approaches for autism may also benefit those with TBI. Impaired facial affect recognition appears to be a significant problem for persons with TBI. Theories of affect recognition, strategies used in autism and teaching techniques commonly used in TBI need to be considered when developing treatments to improve affect recognition in persons with brain injury.

  19. The development of emotion recognition from facial expressions and non-linguistic vocalizations during childhood.

    Science.gov (United States)

    Chronaki, Georgia; Hadwin, Julie A; Garner, Matthew; Maurage, Pierre; Sonuga-Barke, Edmund J S

    2015-06-01

    Sensitivity to facial and vocal emotion is fundamental to children's social competence. Previous research has focused on children's facial emotion recognition, and few studies have investigated non-linguistic vocal emotion processing in childhood. We compared facial and vocal emotion recognition and processing biases in 4- to 11-year-olds and adults. Eighty-eight 4- to 11-year-olds and 21 adults participated. Participants viewed/listened to faces and voices (angry, happy, and sad) at three intensity levels (50%, 75%, and 100%). Non-linguistic tones were used. For each modality, participants completed an emotion identification task. Accuracy and bias for each emotion and modality were compared across 4- to 5-, 6- to 9- and 10- to 11-year-olds and adults. The results showed that children's emotion recognition improved with age; preschoolers were less accurate than other groups. Facial emotion recognition reached adult levels by 11 years, whereas vocal emotion recognition continued to develop in late childhood. Response bias decreased with age. For both modalities, sadness recognition was delayed across development relative to anger and happiness. The results demonstrate that developmental trajectories of emotion processing differ as a function of emotion type and stimulus modality. In addition, vocal emotion processing showed a more protracted developmental trajectory, compared to facial emotion processing. The results have important implications for programmes aiming to improve children's socio-emotional competence.

  20. Facial recognition and laser surface scan: a pilot study

    DEFF Research Database (Denmark)

    Lynnerup, Niels; Clausen, Maja-Lisa; Kristoffersen, Agnethe May

    2009-01-01

    Surface scanning of the face of a suspect is presented as a way to better match the facial features with those of a perpetrator from CCTV footage. We performed a simple pilot study where we obtained facial surface scans of volunteers and then in blind trials tried to match these scans with 2D...

  1. Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal

    Directory of Open Access Journals (Sweden)

    Han Zhiyan

    2016-01-01

    Full Text Available In order to overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm that takes the speech signal and the facial expression signal as its research subjects. First, the speech signal features and facial expression signal features are fused, sample sets are obtained by sampling with replacement, and classifiers are trained with BP neural networks (BPNN). Second, the difference between two classifiers is measured by a double error difference selection strategy. Finally, the final recognition result is obtained by the majority voting rule. Experiments show that the method improves the accuracy of emotion recognition by giving full play to the advantages of decision-level fusion and feature-level fusion, and brings the whole fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
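
    A hedged sketch of the ensemble stage follows, using scikit-learn's MLPClassifier as a stand-in for the BP neural networks and random placeholder features; the double error difference selection step is omitted.

        # Illustrative only: feature-level fusion by concatenation, BP-style
        # networks trained on bootstrap samples, and majority voting.
        import numpy as np
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        X_speech = rng.normal(size=(200, 20))     # placeholder speech features
        X_face = rng.normal(size=(200, 30))       # placeholder facial features
        y = rng.integers(0, 6, size=200)          # six emotion classes

        X = np.hstack([X_speech, X_face])         # feature-level fusion

        classifiers = []
        for _ in range(5):                        # sampling with replacement
            idx = rng.integers(0, len(X), size=len(X))
            clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)
            classifiers.append(clf.fit(X[idx], y[idx]))

        votes = np.stack([clf.predict(X) for clf in classifiers])
        majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)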

  2. Developmental changes in facial expression recognition in Japanese school-age children.

    Science.gov (United States)

    Naruse, Susumu; Hashimoto, Toshiaki; Mori, Kenji; Tsuda, Yoshimi; Takahara, Mitsue; Kagami, Shoji

    2013-01-01

    Facial expressions hold abundant information and play a central part in communication. In daily life, we must construct amicable interpersonal relationships by communicating through verbal and nonverbal behaviors. While school age is a period of rapid social growth, few studies have examined developmental changes in facial expression recognition during this period. This study investigated developmental changes in facial expression recognition by examining observers' gaze on others' expressions. Participants were 87 school-age children from first to sixth grade (41 boys, 46 girls). The Tobii T60 Eye-tracker (Tobii Technologies, Sweden) was used to gauge eye movement during a task of matching pre-instructed emotion words and facial expression images (neutral, angry, happy, surprised, sad, disgusted) presented on a monitor fixed at a distance of 50 cm. In the task of matching the six facial expression images and emotion words, the mid- and higher-grade children answered more accurately than the lower-grade children in matching four expressions, excluding neutral and happy. For fixation time and fixation count, the lower-grade children scored lower than the other grades, gazing at all facial expressions significantly fewer times and for shorter periods. These findings suggest that the transition from the lower to the middle grades is a turning point in facial expression recognition.

  3. Facial Expression Recognition Based on Local Binary Patterns and Kernel Discriminant Isomap

    Directory of Open Access Journals (Sweden)

    Xiaoming Zhao

    2011-10-01

    Full Text Available Facial expression recognition is an interesting and challenging subject. Considering the nonlinear manifold structure of facial images, a new kernel-based manifold learning method, called kernel discriminant isometric mapping (KDIsomap), is proposed. KDIsomap aims to nonlinearly extract the discriminant information by maximizing the interclass scatter while minimizing the intraclass scatter in a reproducing kernel Hilbert space. KDIsomap is used to perform nonlinear dimensionality reduction on the extracted local binary pattern (LBP) facial features, and produces low-dimensional discriminant embedded data representations with striking performance improvement on facial expression recognition tasks. The nearest neighbor classifier with the Euclidean metric is used for facial expression classification. Facial expression recognition experiments are performed on two popular facial expression databases, i.e., the JAFFE database and the Cohn-Kanade database. Experimental results indicate that KDIsomap obtains the best accuracy of 81.59% on the JAFFE database, and 94.88% on the Cohn-Kanade database. KDIsomap outperforms the other methods used, such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA), kernel linear discriminant analysis (KLDA), as well as kernel isometric mapping (KIsomap).
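
    The shape of this pipeline can be approximated with off-the-shelf components. Note that scikit-learn only provides plain unsupervised Isomap, not the kernel discriminant KDIsomap proposed in the paper, so the following sketch on placeholder data is an approximation of the pipeline, not of the method itself.

        # Illustrative only: LBP histograms reduced with plain Isomap,
        # then a Euclidean 1-NN classifier.
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.manifold import Isomap
        from sklearn.neighbors import KNeighborsClassifier

        def lbp_histogram(img, P=8, R=1):
            codes = local_binary_pattern(img, P, R, method="uniform")
            hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
            return hist

        rng = np.random.default_rng(1)
        faces = rng.integers(0, 256, size=(100, 48, 48)).astype(np.uint8)
        labels = rng.integers(0, 7, size=100)     # seven expression classes

        X = np.array([lbp_histogram(f) for f in faces])
        X_low = Isomap(n_neighbors=10, n_components=5).fit_transform(X)
        knn = KNeighborsClassifier(n_neighbors=1).fit(X_low, labels)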

  4. Facial-affect recognition and visual scanning behaviour in the course of schizophrenia.

    Science.gov (United States)

    Streit, M; Wölwer, W; Gaebel, W

    1997-04-11

    The performance of schizophrenic in-patients in facial expression identification was assessed in an acute phase and in a partly remitted phase of the illness. During visual exploration of the face stimuli, the patients' eye movements were recorded using an infrared corneal-reflection technique. Compared to healthy controls, patients demonstrated a significant deficit in facial-affect recognition. In addition, schizophrenics differed from controls in several eye movement parameters, such as length of mean scan path and mean duration of fixation. Both the facial-affect recognition deficit and the eye movement abnormalities remained stable over time. However, performance in facial-affect recognition and eye movement abnormalities were not correlated. Patients with flattened affect showed relatively selective scan pattern characteristics. In contrast, affective flattening was not correlated with performance in facial-affect recognition. Dosage of neuroleptic medication did not affect the results. The main findings of the study suggest that schizophrenia is associated with disturbances in primarily unrelated neurocognitive operations mediating visuomotor processing and facial expression analysis. Given their stability over time, these disturbances might have a trait-like character.

  5. Pose-variant facial expression recognition using an embedded image system

    Science.gov (United States)

    Song, Kai-Tai; Han, Meng-Ju; Chang, Shuo-Hung

    2008-12-01

    In recent years, one of the most attractive research areas in human-robot interaction is automated facial expression recognition. Through recognizing the facial expression, a pet robot can interact with humans in a more natural manner. In this study, we focus on the facial pose-variant problem. A novel method is proposed in this paper to recognize pose-variant facial expressions. After locating the face position in an image frame, the active appearance model (AAM) is applied to track facial features. Fourteen feature points are extracted to represent the variation of facial expressions. The distances between feature points are defined as the feature values. These feature values are sent to a support vector machine (SVM) for facial expression determination. The pose-variant facial expression is classified into happiness, neutral, sadness, surprise or anger. Furthermore, in order to evaluate the performance for practical applications, this study also built a low-resolution database (160x120 pixels) using a CMOS image sensor. Experimental results show that the recognition rate is 84% with the self-built database.
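
    The classification stage can be sketched as follows, with random arrays standing in for the AAM tracker's output; the pairwise-distance feature definition here is one illustrative reading of "distances between feature points", not the paper's exact setup.

        # Illustrative only: pairwise distances between 14 tracked feature
        # points as inputs to a multi-class SVM.
        import numpy as np
        from itertools import combinations
        from sklearn.svm import SVC

        def distance_features(points):
            # points: (14, 2) array of (x, y) landmark positions
            return np.array([np.linalg.norm(points[i] - points[j])
                             for i, j in combinations(range(len(points)), 2)])

        rng = np.random.default_rng(2)
        landmarks = rng.uniform(0, 160, size=(300, 14, 2))   # 300 training frames
        labels = rng.integers(0, 5, size=300)   # happy/neutral/sad/surprise/anger

        X = np.array([distance_features(p) for p in landmarks])
        svm = SVC(kernel="rbf").fit(X, labels)
        prediction = svm.predict(X[:1])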

  6. Psychopathy and facial emotion recognition ability in patients with bipolar affective disorder with or without delinquent behaviors.

    Science.gov (United States)

    Demirel, Husrev; Yesilbas, Dilek; Ozver, Ismail; Yuksek, Erhan; Sahin, Feyzi; Aliustaoglu, Suheyla; Emul, Murat

    2014-04-01

    It is well known that patients with bipolar disorder are more prone to violence and commit more criminal acts than the general population. A strong relationship between criminal behavior and the inability to empathize with and perceive other people's feelings and facial expressions increases the risk of delinquent behaviors. In this study, we aimed to investigate deficits in facial emotion recognition ability in euthymic bipolar patients who had committed an offense, compared with non-delinquent euthymic patients with bipolar disorder. Fifty-five euthymic patients with delinquent behaviors and 54 non-delinquent euthymic bipolar patients as a control group were included in the study. Ekman's Facial Emotion Recognition Test, sociodemographic data, the Hare Psychopathy Checklist, the Hamilton Depression Rating Scale and the Young Mania Rating Scale were applied to both groups. There were no significant differences between the case and control groups in mean age, gender, level of education, mean age at onset of disease or suicide attempts (p>0.05). The three most frequent delinquent behaviors in patients with euthymic bipolar disorder were injury (30.8%), threat or insult (20%) and homicide (12.7%). The most accurately identified facial emotion was "happy" (>99% for both groups), while the most frequently misidentified facial emotion was "fear" in both groups; recognition of fear expressions was significantly worse in the case group than in the control group, and response times for fearful, disgusted and angry expressions were significantly longer in the case group than in the control group. Thus, delinquent euthymic bipolar patients appear to have impaired recognition of fearful and, to a modest degree, angry facial emotions, and to need more time to respond to facial emotions even in remission. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Contributions of feature shapes and surface cues to the recognition and neural representation of facial identity.

    Science.gov (United States)

    Andrews, Timothy J; Baseler, Heidi; Jenkins, Rob; Burton, A Mike; Young, Andrew W

    2016-10-01

    A full understanding of face recognition will involve identifying the visual information that is used to discriminate different identities and how this is represented in the brain. The aim of this study was to explore the importance of shape and surface properties in the recognition and neural representation of familiar faces. We used image morphing techniques to generate hybrid faces that mixed shape properties (more specifically, second order spatial configural information as defined by feature positions in the 2D-image) from one identity and surface properties from a different identity. Behavioural responses showed that recognition and matching of these hybrid faces was primarily based on their surface properties. These behavioural findings contrasted with neural responses recorded using a block design fMRI adaptation paradigm to test the sensitivity of Haxby et al.'s (2000) core face-selective regions in the human brain to the shape or surface properties of the face. The fusiform face area (FFA) and occipital face area (OFA) showed a lower response (adaptation) to repeated images of the same face (same shape, same surface) compared to different faces (different shapes, different surfaces). From the behavioural data indicating the critical contribution of surface properties to the recognition of identity, we predicted that brain regions responsible for familiar face recognition should continue to adapt to faces that vary in shape but not surface properties, but show a release from adaptation to faces that vary in surface properties but not shape. However, we found that the FFA and OFA showed an equivalent release from adaptation to changes in both shape and surface properties. The dissociation between the neural and perceptual responses suggests that, although they may play a role in the process, these core face regions are not solely responsible for the recognition of facial identity.

  8. Effects of Spatial Frequencies on Recognition of Facial Identity and Facial Expression

    Institute of Scientific and Technical Information of China (English)

    汪亚珉; 王志贤; 黄雅梅; 蒋静; 丁锦红

    2011-01-01

    By changing configural or featural/category information, White (2002) revealed that configural changes mainly interfered with facial identity processing, while featural alterations largely reduced facial expression processing. With this technique, Goffaux, Hault, Michel, Vuong, and Rossion (2005) showed that detection of configural changes relies on low spatial frequency, whereas detection of featural changes depends on high spatial frequency. Together, these studies suggest that low spatial frequency information may be selectively involved in facial identity recognition, while high spatial frequency information may be selectively involved in facial expression recognition. To test this hypothesis, three Garner experiments manipulating spatial frequency were conducted on 96 participants in the current study; if the hypothesis holds, a dissociation should be found between recognition of facial identity and facial expression. Experiment 1 measured the Garner effect between identity and expression recognition under full-frequency conditions; the results showed significant mutual interference. Experiment 2 measured the interference under high-frequency conditions and found that the Garner effect on expression recognition was no longer significant while the effect on identity recognition was essentially unchanged, revealing a dissociation. Experiment 3 measured the Garner effect under low-frequency conditions; the Garner effects on both expression and identity recognition remained significant, unaffected by the removal of high frequencies. Based on the Garner paradigm, a method that simultaneously considers two indices, separability and difficulty, is proposed for analysing face recognition, and the results are interpreted accordingly. The conclusion is that high spatial frequency information is an effective scale for separating facial identity from expression information.
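
    The low- and high-spatial-frequency stimulus manipulation underlying such experiments can be sketched with a Fourier-domain Gaussian filter; the cutoff value below is an assumption for illustration, not a parameter taken from the study.

        # Illustrative only: split an image into low- and high-spatial-
        # frequency versions in the Fourier domain.
        import numpy as np

        def sf_split(img, cutoff=8.0):
            f = np.fft.fftshift(np.fft.fft2(img))
            h, w = img.shape
            yy, xx = np.indices((h, w))
            r = np.hypot(yy - h / 2, xx - w / 2)    # radial frequency
            lowpass = np.exp(-(r / cutoff) ** 2)
            low = np.fft.ifft2(np.fft.ifftshift(f * lowpass)).real
            high = np.fft.ifft2(np.fft.ifftshift(f * (1 - lowpass))).real
            return low, high

        face = np.random.rand(128, 128)             # stand-in for a face image
        low_sf, high_sf = sf_split(face)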

  9. Facial emotion recognition in euthymic patients with bipolar disorder and their unaffected first-degree relatives.

    Science.gov (United States)

    de Brito Ferreira Fernandes, Francy; Gigante, Alexandre Duarte; Berutti, Mariangeles; Amaral, José Antônio; de Almeida, Karla Mathias; de Almeida Rocca, Cristiana Castanho; Lafer, Beny; Nery, Fabiano Gonçalves

    2016-07-01

    Facial emotion recognition (FER) is an important task associated with social cognition because facial expression is a significant source of non-verbal information that guides interpersonal relationships. Increasing evidence suggests that bipolar disorder (BD) patients present deficits in FER and that these deficits may be present in individuals at high genetic risk for BD. The aim of this study was to evaluate the occurrence of FER deficits in euthymic BD patients, their first-degree relatives, and healthy controls (HC), and to consider whether these deficits might be regarded as an endophenotype candidate for BD. We studied 23 patients with DSM-IV BD type I, 22 first-degree relatives of these patients, and 27 HC. We used the Penn Emotion Recognition Tests to evaluate tasks of FER, emotion discrimination, and emotional acuity. Patients were recruited from outpatient facilities at the Institute of Psychiatry of the University of Sao Paulo Medical School, or from the community through media advertisements; they had to be euthymic, aged over 18 years, and to have a diagnosis of DSM-IV BD type I. Euthymic BD patients presented significantly fewer correct responses for fear, and significantly increased time to response to recognize happy faces, when compared with HC, but not when compared with first-degree relatives. First-degree relatives did not significantly differ from HC on any of the emotion recognition tasks. Our results suggest that deficits in FER are present in euthymic patients, but not in subjects at high genetic risk for BD. Thus, we have not found evidence to consider FER as an endophenotype candidate for BD. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Facial Affect Recognition Training Through Telepractice: Two Case Studies of Individuals with Chronic Traumatic Brain Injury

    OpenAIRE

    John Williamson; Emi Isaki

    2015-01-01

    The use of a modified Facial Affect Recognition (FAR) training to identify emotions was investigated with two case studies of adults with moderate to severe chronic (> five years) traumatic brain injury (TBI).  The modified FAR training was administered via telepractice to target social communication skills.  Therapy consisted of identifying emotions through static facial expressions, personally reflecting on those emotions, and identifying sarcasm and emotions within social stories and ro...

  12. Is facial emotion recognition impairment in schizophrenia identical for different emotions? A signal detection analysis.

    Science.gov (United States)

    Tsoi, Daniel T; Lee, Kwang-Hyuk; Khokhar, Waqqas A; Mir, Nusrat U; Swalli, Jaspal S; Gee, Kate A; Pluck, Graham; Woodruff, Peter W R

    2008-02-01

    Patients with schizophrenia have difficulty recognising the emotion that corresponds to a given facial expression. According to signal detection theory, two separate processes are involved in facial emotion perception: a sensory process (measured by sensitivity, the ability to distinguish one facial emotion from another) and a cognitive decision process (measured by response criterion, the tendency to judge a facial emotion as a particular emotion). It is uncertain whether facial emotion recognition deficits in schizophrenia are primarily due to impaired sensitivity or to response bias. In this study, we hypothesised that individuals with schizophrenia would have both diminished sensitivity and different response criteria in facial emotion recognition across different emotions compared with healthy controls. Twenty-five individuals with a DSM-IV diagnosis of schizophrenia were compared with age- and IQ-matched healthy controls. Participants performed a "yes-no" task by indicating whether the 88 Ekman faces shown briefly expressed one of the target emotions in three randomly ordered runs (happy, sad and fear). Sensitivity and response criterion for facial emotion recognition were calculated as d-prime and ln(beta), respectively, using signal detection theory. Patients with schizophrenia showed diminished sensitivity (d-prime) in recognising happy faces, but not faces that expressed fear or sadness. By contrast, patients exhibited a significantly less strict response criterion (ln(beta)) in recognising fearful and sad faces. Our results suggest that patients with schizophrenia have a specific deficit in recognising happy faces, whereas they are more inclined to label any facial emotion as fearful or sad.
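
    The two signal detection indices can be computed directly from hit and false-alarm counts; the following is the standard equal-variance formulation, and the example counts are hypothetical, not the study's data.

        # Standard equal-variance signal detection computations:
        # d' = z(H) - z(FA), ln(beta) = (z(FA)^2 - z(H)^2) / 2.
        from scipy.stats import norm

        def sdt_indices(hits, misses, false_alarms, correct_rejections):
            h = hits / (hits + misses)                           # hit rate
            fa = false_alarms / (false_alarms + correct_rejections)
            z_h, z_fa = norm.ppf(h), norm.ppf(fa)
            return z_h - z_fa, (z_fa ** 2 - z_h ** 2) / 2.0      # d', ln(beta)

        # Hypothetical run: 20 targets and 68 non-targets among 88 faces
        print(sdt_indices(hits=15, misses=5, false_alarms=10,
                          correct_rejections=58))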

  13. Empathy, but not mimicry restriction, influences the recognition of change in emotional facial expressions.

    Science.gov (United States)

    Kosonogov, Vladimir; Titova, Alisa; Vorobyeva, Elena

    2015-01-01

    The current study addressed the hypothesis that empathy and the restriction of facial muscles of observers can influence recognition of emotional facial expressions. A sample of 74 participants recognized the subjective onset of emotional facial expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral) in a series of morphed face photographs showing a gradual change (frame by frame) from one expression to another. The high-empathy (as measured by the Empathy Quotient) participants recognized emotional facial expressions at earlier photographs from the series than did low-empathy ones, but there was no difference in the exploration time. Restriction of facial muscles of observers (with plasters and a stick in mouth) did not influence the responses. We discuss these findings in the context of the embodied simulation theory and previous data on empathy.

  14. Facial emotion recognition in Williams syndrome and Down syndrome: A matching and developmental study.

    Science.gov (United States)

    Martínez-Castilla, Pastora; Burt, Michael; Borgatti, Renato; Gagliardi, Chiara

    2015-01-01

    In this study both the matching and developmental trajectories approaches were used to clarify questions that remain open in the literature on facial emotion recognition in Williams syndrome (WS) and Down syndrome (DS). The matching approach showed that individuals with WS or DS exhibit neither proficiency for the expression of happiness nor specific impairments for negative emotions. Instead, they present the same pattern of emotion recognition as typically developing (TD) individuals. Thus, the better performance on the recognition of positive compared to negative emotions usually reported in WS and DS is not specific to these populations but seems to represent a typical pattern. Prior studies based on the matching approach suggested that the development of facial emotion recognition is delayed in WS and atypical in DS. Nevertheless, and even though performance levels were lower in DS than in WS, the developmental trajectories approach used in this study evidenced that not only individuals with DS but also those with WS present atypical development in facial emotion recognition. Unlike in the TD participants, where developmental changes were observed along with age, in the WS and DS groups, the development of facial emotion recognition was static. Both individuals with WS and those with DS reached an early maximum developmental level due to cognitive constraints.

  15. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    Science.gov (United States)

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Discriminative shared Gaussian processes for multiview and view-invariant facial expression recognition.

    Science.gov (United States)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2015-01-01

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers learned separately for each view or a single classifier learned for all views. However, these approaches ignore the fact that different views of a facial expression are just different manifestations of the same facial expression. By accounting for this redundancy, we can design more effective classifiers for the target task. To this end, we propose a discriminative shared Gaussian process latent variable model (DS-GPLVM) for multiview and view-invariant classification of facial expressions from multiple views. In this model, we first learn a discriminative manifold shared by multiple views of a facial expression. Subsequently, we perform facial expression classification in the expression manifold. Finally, classification of an observed facial expression is carried out either in the view-invariant manner (using only a single view of the expression) or in the multiview manner (using multiple views of the expression). The proposed model can also be used to perform fusion of different facial features in a principled manner. We validate the proposed DS-GPLVM on both posed and spontaneously displayed facial expressions from three publicly available datasets (MultiPIE, labeled face parts in the wild, and static facial expressions in the wild). We show that this model outperforms the state-of-the-art methods for multiview and view-invariant facial expression classification, and several state-of-the-art methods for multiview learning and feature fusion.

  17. Event-related theta synchronization predicts deficit in facial affect recognition in schizophrenia.

    Science.gov (United States)

    Csukly, Gábor; Stefanics, Gábor; Komlósi, Sarolta; Czigler, István; Czobor, Pál

    2014-02-01

    Growing evidence suggests that abnormalities in the synchronized oscillatory activity of neurons in schizophrenia may lead to impaired neural activation and temporal coding and thus lead to neurocognitive dysfunctions, such as deficits in facial affect recognition. To gain an insight into the neurobiological processes linked to facial affect recognition, we investigated both induced and evoked oscillatory activity by calculating the Event Related Spectral Perturbation (ERSP) and the Inter Trial Coherence (ITC) during facial affect recognition. Fearful and neutral faces as well as nonface patches were presented to 24 patients with schizophrenia and 24 matched healthy controls while EEG was recorded. The participants' task was to recognize facial expressions. Because previous findings with healthy controls showed that facial feature decoding was associated primarily with oscillatory activity in the theta band, we analyzed ERSP and ITC in this frequency band in the time interval of 140-200 ms, which corresponds to the N170 component. Event-related theta activity and phase-locking to facial expressions, but not to nonface patches, predicted emotion recognition performance in both controls and patients. Event-related changes in theta amplitude and phase-locking were found to be significantly weaker in patients compared with healthy controls, which is in line with previous investigations showing decreased neural synchronization in the low frequency bands in patients with schizophrenia. Neural synchrony is thought to underlie distributed information processing. Our results indicate a less effective functioning in the recognition process of facial features, which may contribute to a less effective social cognition in schizophrenia.
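
    Inter-trial coherence and a basic ERSP-style measure can be sketched as follows for the theta band; the sampling rate, trial structure and filter settings are assumptions for illustration, not the study's parameters.

        # Illustrative only: theta-band inter-trial coherence (ITC) as the
        # magnitude of the mean unit phase vector across trials, plus a
        # basic ERSP-style power measure.
        import numpy as np
        from scipy.signal import butter, filtfilt, hilbert

        fs = 500                                    # assumed sampling rate, Hz
        rng = np.random.default_rng(3)
        trials = rng.normal(size=(60, fs))          # 60 trials x 1 s of EEG

        b, a = butter(4, [4, 8], btype="bandpass", fs=fs)   # theta band, 4-8 Hz
        theta = filtfilt(b, a, trials, axis=1)
        analytic = hilbert(theta, axis=1)

        itc = np.abs(np.exp(1j * np.angle(analytic)).mean(axis=0))  # in [0, 1]
        power = np.abs(analytic) ** 2
        ersp = 10 * np.log10(power.mean(axis=0) / power.mean())     # dB change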

  18. Recognition of Facial Expressions of Different Emotional Intensities in Patients with Frontotemporal Lobar Degeneration

    Directory of Open Access Journals (Sweden)

    Roy P. C. Kessels

    2007-01-01

    Full Text Available Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD). Also, FTLD patients show impairments in emotion processing. Specifically, the perception of negative emotional facial expressions is affected. Generally, however, negative emotional expressions are regarded as more difficult to recognize than positive ones, which thus may have been a confounding factor in previous studies. Also, ceiling effects are often present on emotion recognition tasks using full-blown emotional facial expressions. In the present study with FTLD patients, we examined the perception of sadness, anger, fear, happiness, surprise and disgust at different emotional intensities on morphed facial expressions to take task difficulty into account. Results showed that our FTLD patients were specifically impaired at the recognition of the emotion anger. Also, the patients performed worse than the controls on recognition of surprise, but performed at control levels on disgust, happiness, sadness and fear. These findings corroborate and extend previous results showing deficits in emotion perception in FTLD.

  19. Feature Extraction for Facial Expression Recognition based on Hybrid Face Regions

    Directory of Open Access Journals (Sweden)

    LAJEVARDI, S.M.

    2009-10-01

    Full Text Available Facial expression recognition has numerous applications, including psychological research, improved human-computer interaction, and sign language translation. A novel facial expression recognition system based on hybrid face regions (HFR) is investigated. The expression recognition system is fully automatic, and consists of the following modules: face detection, facial feature extraction, optimal feature selection, and classification. The features are extracted from both the whole face image and face regions (eyes and mouth) using log-Gabor filters. Then, the most discriminative features are selected based on a mutual information criterion. The system can automatically recognize six expressions: anger, disgust, fear, happiness, sadness and surprise. The selected features are classified using the Naive Bayesian (NB) classifier. The proposed method has been extensively assessed using the Cohn-Kanade database and the JAFFE database. The experiments have highlighted the efficiency of the proposed HFR method in enhancing the classification rate.
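
    The selection and classification stages can be sketched with scikit-learn; the log-Gabor filtering itself is assumed to have produced the feature matrix, which is replaced by random placeholders below.

        # Illustrative only: rank features by mutual information with the
        # class label, keep the top ones, classify with Gaussian Naive Bayes.
        import numpy as np
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(4)
        X = rng.normal(size=(200, 500))     # placeholder log-Gabor responses
        y = rng.integers(0, 6, size=200)    # six expressions

        mi = mutual_info_classif(X, y, random_state=0)
        top = np.argsort(mi)[-50:]          # keep the 50 most informative
        clf = GaussianNB().fit(X[:, top], y)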

  20. Robust Facial Expression Recognition via Sparse Representation and Multiple Gabor filters

    Directory of Open Access Journals (Sweden)

    Rania Salah El-Sayed

    2013-04-01

    Full Text Available Facial expression recognition plays an important role in human communication. It has become one of the most challenging tasks in the pattern recognition field, with many applications such as human-computer interaction, video surveillance, forensic applications, and criminal investigations. In this paper we propose a method for facial expression recognition (FER). This method provides new insights into two issues in FER: feature extraction and robustness. For feature extraction we use a sparse representation approach after applying multiple Gabor filters, and then use a support vector machine (SVM) as the classifier. We conduct extensive experiments on a standard facial expression database to verify the performance of the proposed method, and we compare the results with other approaches.
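
    A minimal sketch of a sparse-representation-plus-SVM pipeline of this general shape follows; the dictionary, data, and coding parameters are placeholders, not the paper's construction.

        # Illustrative only: sparse-code placeholder Gabor feature vectors
        # against a dictionary with orthogonal matching pursuit, then train
        # an SVM on the sparse codes.
        import numpy as np
        from sklearn.linear_model import orthogonal_mp
        from sklearn.svm import SVC

        rng = np.random.default_rng(5)
        D = rng.normal(size=(120, 80))      # dictionary: 120-dim signals, 80 atoms
        X = rng.normal(size=(120, 200))     # 200 samples of Gabor features
        y = rng.integers(0, 6, size=200)

        codes = orthogonal_mp(D, X, n_nonzero_coefs=10)   # (80, 200) codes
        svm = SVC(kernel="linear").fit(codes.T, y)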

  1. Discriminability effect on Garner interference: evidence from recognition of facial identity and expression

    Directory of Open Access Journals (Sweden)

    Yamin eWang

    2013-12-01

    Full Text Available Using Garner's speeded classification task, existing studies have demonstrated an asymmetric interference in the recognition of facial identity and facial expression: expression seems hardly able to interfere with identity recognition. However, the discriminability of identity and expression, a potential confounding variable, had not been carefully examined in these studies. In the current work, we manipulated the discriminability of identity and expression by matching facial shape (long or round) in identity and matching the mouth (open or closed) in facial expression. Garner interference was found either from identity to expression (Experiment 1) or from expression to identity (Experiment 2). Interference was also found in both directions (Experiment 3) or in neither direction (Experiment 4). The results support the view that Garner interference tends to occur under conditions of low discriminability of the relevant dimension, regardless of the facial property. Our findings indicate that Garner interference is not necessarily related to interdependent processing in the recognition of facial identity and expression. They also suggest that discriminability, as a mediating factor, should be carefully controlled in future research.

  2. Algorithms for Facial Expression Action Tracking and Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    李於俊; 汪增福

    2011-01-01

    For each frame in a facial video sequence, an algorithm for static facial expression recognition is first proposed: facial expression is recognized after the facial action parameters have been retrieved, according to physiological knowledge of expressions. To cope with a lack of such knowledge, an algorithm combining static and dynamic facial expression recognition is then proposed, in which facial actions and facial expressions are retrieved simultaneously using a stochastic framework based on multi-class expressional Markov chains, particle filtering, and facial expression knowledge. Experimental results confirm the effectiveness of these algorithms.

  3. Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns

    Directory of Open Access Journals (Sweden)

    Faisal Ahmed

    2013-01-01

    Full Text Available Recognition of human expression from facial images is an interesting research area, which has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that can perform consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (GLTP), a discriminative local texture feature for representing facial expression. The proposed GLTP operator encodes the local texture of an image by computing the gradient magnitudes of the local neighborhood and quantizing those values into three discrimination levels. The location and occurrence information of the resulting micropatterns is then used as the face feature descriptor. The performance of the proposed method has been evaluated for the person-independent facial expression recognition task. Experiments with prototypic expression images from the Cohn-Kanade (CK) facial expression database validate that the GLTP feature descriptor can effectively encode the facial texture and thus achieves better recognition performance than some well-known appearance-based facial features.
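
    The idea of ternary-coding gradient magnitudes can be sketched as follows; the exact thresholding and micropattern histogramming details of the paper are simplified here.

        # Illustrative only: code each pixel's 8-neighbourhood of gradient
        # magnitudes against a threshold t around the centre value.
        import numpy as np
        from scipy.ndimage import sobel

        def gltp(img, t=10.0):
            g = np.hypot(sobel(img, axis=1), sobel(img, axis=0))  # gradients
            c = g[1:-1, 1:-1]
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            upper = np.zeros_like(c, dtype=np.int64)   # codes the +1 level
            lower = np.zeros_like(c, dtype=np.int64)   # codes the -1 level
            for bit, (dy, dx) in enumerate(offsets):
                n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
                upper += (n > c + t).astype(np.int64) << bit
                lower += (n < c - t).astype(np.int64) << bit
            return upper, lower    # histogram these per region for a descriptor

        up, lo = gltp(np.random.rand(64, 64) * 255)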

  4. Brain Network Involved in the Recognition of Facial Expressions of Emotion in the Early Blind

    Directory of Open Access Journals (Sweden)

    Ryo Kitada

    2011-10-01

    Full Text Available Previous studies suggest that the brain network responsible for the recognition of facial expressions of emotion (FEEs) begins to emerge early in life. However, it has been unclear whether visual experience of faces is necessary for the development of this network. Here, we conducted both psychophysical and functional magnetic resonance imaging (fMRI) experiments to test the hypothesis that the brain network underlying the recognition of FEEs is not dependent on visual experience of faces. Early-blind, late-blind and sighted subjects participated in the psychophysical experiment. Regardless of group, subjects haptically identified basic FEEs at above-chance levels, without any feedback training. In the subsequent fMRI experiment, the early-blind and sighted subjects haptically identified facemasks portraying three different FEEs and casts of three different shoe types. The sighted subjects also completed a visual task comparing the same stimuli. Within the brain regions activated by the visually identified FEEs (relative to shoes), haptic identification of FEEs (relative to shoes) by the early-blind and sighted individuals activated the posterior middle temporal gyrus adjacent to the superior temporal sulcus, the inferior frontal gyrus, and the fusiform gyrus. Collectively, these results suggest that the brain network responsible for FEE recognition can develop without any visual experience of faces.

  5. Binary pattern flavored feature extractors for Facial Expression Recognition: An overview

    DEFF Research Database (Denmark)

    Kristensen, Rasmus Lyngby; Tan, Zheng-Hua; Ma, Zhanyu

    2015-01-01

    This paper conducts a survey of modern binary pattern flavored feature extractors applied to the Facial Expression Recognition (FER) problem. In total, 26 different feature extractors are included, of which six are selected for in-depth description. In addition, the paper unifies important FER terminology, describes open challenges, and provides recommendations for scientific evaluation of FER systems. Lastly, it studies the facial expression recognition accuracy and blur invariance of the Local Frequency Descriptor. The paper seeks to bring together disjointed studies, and the main contribution...

  6. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Vidakis, Nikolaos; Kavallakis, George; Triantafyllidis, Georgios

    2017-01-01

    This paper presents a scheme for creating an emotion index of cover song music video clips by recognizing and classifying facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms which are employed for expression recognition, along with the use of a neural network system using the features extracted by the SIFT algorithm. We also support the need for this fusion of different expression recognition algorithms, because of the way that emotions are linked to facial expressions in music video clips...
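
    SIFT feature extraction of the kind mentioned can be sketched with OpenCV; the mean-pooling step below is an assumed, illustrative choice, not the paper's network input format.

        # Illustrative only: extract SIFT descriptors and pool them into
        # one fixed-length vector for a downstream classifier.
        import cv2
        import numpy as np

        frame = np.random.randint(0, 256, (120, 120), dtype=np.uint8)  # placeholder
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(frame, None)

        if descriptors is not None:
            feature_vector = descriptors.mean(axis=0)   # 128-dim pooled vector
        else:
            feature_vector = np.zeros(128)              # no keypoints found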

  7. Facial Analysis: Looking at Biometric Recognition and Genome-Wide Association

    DEFF Research Database (Denmark)

    Fagertun, Jens

    The goal of this Ph.D. project is to present selected challenges regarding facial analysis within the fields of Human Biometrics and Human Genetics. In the course of the Ph.D. nine papers have been produced, eight of which have been included in this thesis. Three of the papers focus on face and gender recognition, where in the gender recognition papers the process of human perception of gender is analyzed and used to improve machine learning algorithms. One paper addresses the issues of variability in human annotation of facial landmarks, which most papers regard as a static “gold standard...

  8. Oxytocin promotes facial emotion recognition and amygdala reactivity in adults with asperger syndrome.

    Science.gov (United States)

    Domes, Gregor; Kumbier, Ekkehardt; Heinrichs, Markus; Herpertz, Sabine C

    2014-02-01

    The neuropeptide oxytocin has recently been shown to enhance eye gaze and emotion recognition in healthy men. Here, we report a randomized double-blind, placebo-controlled trial that examined the neural and behavioral effects of a single dose of intranasal oxytocin on emotion recognition in individuals with Asperger syndrome (AS), a clinical condition characterized by impaired eye gaze and facial emotion recognition. Using functional magnetic resonance imaging, we examined whether oxytocin would enhance emotion recognition from facial sections of the eye vs the mouth region and modulate regional activity in brain areas associated with face perception in both adults with AS, and a neurotypical control group. Intranasal administration of the neuropeptide oxytocin improved performance in a facial emotion recognition task in individuals with AS. This was linked to increased left amygdala reactivity in response to facial stimuli and increased activity in the neural network involved in social cognition. Our data suggest that the amygdala, together with functionally associated cortical areas mediate the positive effect of oxytocin on social cognitive functioning in AS.

  10. EMOTION RECOGNITION OF VIRTUAL AGENTS FACIAL EXPRESSIONS: THE EFFECTS OF AGE AND EMOTION INTENSITY

    Science.gov (United States)

    Beer, Jenay M.; Fisk, Arthur D.; Rogers, Wendy A.

    2014-01-01

    People make determinations about the social characteristics of an agent (e.g., a robot or virtual agent) by interpreting social cues displayed by the agent, such as facial expressions. Although a considerable amount of research has been conducted investigating age-related differences in emotion recognition of human faces (e.g., Sullivan & Ruffman, 2004), the effect of age on emotion identification of virtual agent facial expressions has been largely unexplored. Age-related differences in emotion recognition of facial expressions are an important factor to consider in the design of agents that may assist older adults in a recreational or healthcare setting. The purpose of the current research was to investigate whether age-related differences in facial emotion recognition can extend to emotion-expressive virtual agents. Younger and older adults performed a recognition task with a virtual agent expressing six basic emotions. Larger age-related differences were expected for virtual agents displaying negative emotions, such as anger, sadness, and fear. In fact, the results indicated that older adults showed a decrease in emotion recognition accuracy for a virtual agent's expressions of anger, fear, and happiness. PMID:25552896

  11. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  12. Sex Differences in Facial, Prosodic, and Social Context Emotional Recognition in Early-Onset Schizophrenia

    Science.gov (United States)

    Ramos-Loyo, Julieta; Mora-Reynoso, Leonor; Sánchez-Loyo, Luis Miguel; Medina-Hernández, Virginia

    2012-01-01

    The purpose of the present study was to determine sex differences in facial, prosodic, and social context emotional recognition in schizophrenia (SCH). Thirty-eight patients (SCH, 20 females) and 38 healthy controls (CON, 20 females) participated in the study. Clinical scales (BPRS and PANSS) and an Affective States Scale were applied, as well as tasks to evaluate facial, prosodic, and within a social context emotional recognition. SCH showed lower accuracy and longer response times than CON, but no significant sex differences were observed in either facial or prosody recognition. In social context emotions, however, females showed higher empathy than males with respect to happiness in both groups. SCH reported being more identified with sad films than CON and females more with fear than males. The results of this study confirm the deficits of emotional recognition in male and female patients with schizophrenia compared to healthy subjects. Sex differences were detected in relation to social context emotions and facial and prosodic recognition depending on age. PMID:22970365

  13. Recognition of Facial Expressions in Individuals with Elevated Levels of Depressive Symptoms: An Eye-Movement Study

    OpenAIRE

    2012-01-01

    Previous studies consistently reported abnormal recognition of facial expressions in depression. However, it is still not clear whether this abnormality is due to an enhanced or impaired ability to recognize facial expressions, and what underlying cognitive systems are involved. The present study aimed to examine how individuals with elevated levels of depressive symptoms differ from controls on facial expression recognition and to assess attention and information processing using eye trackin...

  14. Non-suicidal self-injury and emotion regulation: a review on facial emotion recognition and facial mimicry

    Science.gov (United States)

    2013-01-01

    Non-suicidal self-injury (NSSI) is an increasingly prevalent, clinically significant behavior in adolescents and can be associated with serious consequences for the afflicted person. Emotion regulation is considered its most frequent function. Because the symptoms of NSSI are common and cause impairment, it will be included in Section 3 disorders as a new disorder in the revised Diagnostic and Statistical Manual of Mental Disorders (DSM-5). So far, research has been conducted mostly with patients with borderline personality disorder (BPD) showing self-injurious behavior. Therefore, for this review the current state of research regarding emotion regulation, NSSI, and BPD in adolescents is presented. In particular, the authors focus on studies on facial emotion recognition and facial mimicry, as social interaction difficulties might be a result of not recognizing emotions in facial expressions and inadequate facial mimicry. Although clinical trials investigating the efficacy of psychological treatments for NSSI among adolescents are lacking, especially those targeting the capacity to cope with emotions, clinical implications of the improvement in implicit and explicit emotion regulation in the treatment of NSSI is discussed. Given the impact of emotion regulation skills on the effectiveness of psychotherapy, neurobiological and psychophysiological outcome variables should be included in clinical trials. PMID:23421964

  15. Childhood Facial Recognition Predicts Adolescent Symptom Severity in Autism Spectrum Disorder.

    Science.gov (United States)

    Eussen, Mart L J M; Louwerse, Anneke; Herba, Catherine M; Van Gool, Arthur R; Verheij, Fop; Verhulst, Frank C; Greaves-Lord, Kirstin

    2015-06-01

    Limited accuracy and speed in facial recognition (FR) and in the identification of facial emotions (IFE) have been shown in autism spectrum disorders (ASD). This study aimed at evaluating the predictive value of atypicalities in FR and IFE for future symptom severity in children with ASD. Therefore we performed a seven-year follow-up study in 87 children with ASD. FR and IFE were assessed in childhood (T1: age 6-12) using the Amsterdam Neuropsychological Tasks (ANT). Symptom severity was assessed using the Autism Diagnostic Observation Schedule (ADOS) in childhood and again seven years later during adolescence (T2: age 12-19). Multiple regression analyses were performed to investigate whether FR and IFE in childhood predicted ASD symptom severity in adolescence, while controlling for ASD symptom severity in childhood. We found that more accurate FR significantly predicted lower adolescent ASD symptom severity scores (ΔR² = .09), even when controlling for childhood ASD symptom severity. IFE was not a significant predictor of ASD symptom severity in adolescence. From these results it can be concluded that, in children with ASD, the accuracy of FR in childhood is a relevant predictor of ASD symptom severity in adolescence. Test results on FR in children with ASD may have prognostic value regarding later symptom severity.

  16. Poor Facial Affect Recognition among Boys with Duchenne Muscular Dystrophy

    Science.gov (United States)

    Hinton, V. J.; Fee, R. J.; De Vivo, D. C.; Goldstein, E.

    2007-01-01

    Children with Duchenne or Becker muscular dystrophy (MD) have delayed language and poor social skills and some meet criteria for Pervasive Developmental Disorder, yet they are identified by molecular, rather than behavioral, characteristics. To determine whether comprehension of facial affect is compromised in boys with MD, children were given a…

  17. I feel your fear: shared touch between faces facilitates recognition of fearful facial expressions.

    Science.gov (United States)

    Maister, Lara; Tsiakkas, Eleni; Tsakiris, Manos

    2013-02-01

    Embodied simulation accounts of emotion recognition claim that we vicariously activate somatosensory representations to simulate, and eventually understand, how others feel. Interestingly, mirror-touch synesthetes, who experience touch when observing others being touched, show both enhanced somatosensory simulation and superior recognition of emotional facial expressions. We employed synchronous visuotactile stimulation to experimentally induce a similar experience of "mirror touch" in nonsynesthetic participants. Seeing someone else's face being touched at the same time as one's own face results in the "enfacement illusion," which has been previously shown to blur self-other boundaries. We demonstrate that the enfacement illusion also facilitates emotion recognition, and, importantly, this facilitatory effect is specific to fearful facial expressions. Shared synchronous multisensory experiences may experimentally facilitate somatosensory simulation mechanisms involved in the recognition of fearful emotional expressions.

  18. Visual Scanning in the Recognition of Facial Affect in Traumatic Brain Injury

    Directory of Open Access Journals (Sweden)

    Suzane Vassallo

    2011-05-01

    Full Text Available We investigated the visual scanning strategy employed by a group of individuals with a severe traumatic brain injury (TBI) during a facial affect recognition task. Four males with a severe TBI were matched for age and gender with four healthy controls. Eye movements were recorded while pictures of static emotional faces were viewed (i.e., sad, happy, angry, disgusted, anxious, surprised). Groups were compared with respect to accuracy in labelling the emotional facial expression, reaction time, and the number and duration of fixations to internal (i.e., eyes + nose + mouth) and external (i.e., all remaining) regions of the stimulus. TBI participants demonstrated significantly reduced accuracy and increased latency in facial affect recognition. Further, they demonstrated no significant difference in the number or duration of fixations to internal versus external facial regions. Control participants, however, fixated more frequently and for longer periods of time upon internal facial features. Impaired visual scanning can contribute to inaccurate interpretation of facial expression, and this can disrupt interpersonal communication. The scanning strategy demonstrated by our TBI group appears more ‘widespread’ than that employed by their normal counterparts. Further work is required to elucidate the nature of the scanning strategy used and its potential variance in TBI.

  19. Modulation of α power and functional connectivity during facial affect recognition.

    Science.gov (United States)

    Popov, Tzvetan; Miller, Gregory A; Rockstroh, Brigitte; Weisz, Nathan

    2013-04-03

    Research has linked oscillatory activity in the α frequency range, particularly in sensorimotor cortex, to processing of social actions. Results further suggest involvement of sensorimotor α in the processing of facial expressions, including affect. The sensorimotor face area may be critical for perception of emotional face expression, but the role it plays is unclear. The present study sought to clarify how oscillatory brain activity contributes to or reflects processing of facial affect during changes in facial expression. Neuromagnetic oscillatory brain activity was monitored while 30 volunteers viewed videos of human faces that changed their expression from neutral to fearful, neutral, or happy expressions. Induced changes in α power during the different morphs, source analysis, and graph-theoretic metrics served to identify the role of α power modulation and cross-regional coupling by means of phase synchrony during facial affect recognition. Changes from neutral to emotional faces were associated with a 10-15 Hz power increase localized in bilateral sensorimotor areas, together with occipital power decrease, preceding reported emotional expression recognition. Graph-theoretic analysis revealed that, in the course of a trial, the balance between sensorimotor power increase and decrease was associated with decreased and increased transregional connectedness as measured by node degree. Results suggest that modulations in α power facilitate early registration, with sensorimotor cortex including the sensorimotor face area largely functionally decoupled and thereby protected from additional, disruptive input and that subsequent α power decrease together with increased connectedness of sensorimotor areas facilitates successful facial affect recognition.
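
    Two analysis ingredients named in this abstract, phase synchrony between regions and node degree from graph theory, can be sketched compactly. The phase-locking value below is one common synchrony estimate (the paper does not specify its exact metric), and the binarization threshold is arbitrary:

```python
import numpy as np
from scipy.signal import hilbert

def plv_matrix(signals):
    """Phase-locking value between every pair of band-limited signals."""
    phases = np.angle(hilbert(signals, axis=1))
    n = len(signals)
    plv = np.ones((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            plv[i, j] = plv[j, i] = abs(np.mean(np.exp(1j * (phases[i] - phases[j]))))
    return plv

regions = np.random.randn(10, 1000)   # stand-in source-space alpha-band activity
adj = plv_matrix(regions) > 0.3       # binarize connectivity at an arbitrary threshold
degree = adj.sum(axis=1) - 1          # node degree, excluding the self-connection
print(degree)
```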

  1. The mysterious noh mask: contribution of multiple facial parts to the recognition of emotional expressions.

    Directory of Open Access Journals (Sweden)

    Hiromitsu Miyata

    Full Text Available BACKGROUND: A Noh mask, worn by expert actors when performing in traditional Japanese Noh drama, is suggested to convey countless different facial expressions according to different angles of head/body orientation. The present study addressed the question of how different facial parts of a Noh mask, including the eyebrows, the eyes, and the mouth, may contribute to different emotional expressions. Both experimental situations of active creation and passive recognition of emotional facial expressions were introduced. METHODOLOGY/PRINCIPAL FINDINGS: In Experiment 1, participants either created happy or sad facial expressions, or imitated a face that looked up or down, by actively changing each facial part of a Noh mask image presented on a computer screen. For an upward-tilted mask, the eyebrows and the mouth shared common features with sad expressions, whereas the eyes shared features with happy expressions. This contingency tended to be reversed for a downward-tilted mask. Experiment 2 further examined which facial parts of a Noh mask are crucial in determining emotional expressions. Participants were exposed to synthesized Noh mask images with different facial parts expressing different emotions. Results clearly revealed that participants primarily used the shape of the mouth in judging emotions. Facial images having the mouth of an upward/downward-tilted Noh mask strongly tended to be evaluated as sad/happy, respectively. CONCLUSIONS/SIGNIFICANCE: The results suggest that Noh masks express chimeric emotional patterns, with different facial parts conveying different emotions. This appears consistent with the principles of Noh, which highly appreciate subtle and composite emotional expressions, as well as with the mysterious facial expressions observed in Western art. It was further demonstrated that the mouth serves as a diagnostic feature in characterizing the emotional expressions. This indicates the superiority of biologically-driven factors over the…

  2. Maximized Posteriori Attributes Selection from Facial Salient Landmarks for Face Recognition

    CERN Document Server

    Gupta, Phalguni; Sing, Jamuna Kanta; Tistarelli, Massimo

    2010-01-01

    This paper presents a robust and dynamic face recognition technique based on the extraction and matching of probabilistic graphs drawn on SIFT features from independent face areas. The face matching strategy matches individual salient facial graphs, characterized by SIFT features and connected to facial landmarks such as the eyes and the mouth. To reduce face matching errors, the Dempster-Shafer decision theory is applied to fuse the individual matching scores obtained from each pair of salient facial features. The proposed algorithm is evaluated on the ORL and IITK face databases. The experimental results demonstrate the effectiveness and potential of the proposed face recognition technique, including the case of partially occluded faces.
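
    The fusion step above combines per-landmark evidence with Dempster's rule of combination. A minimal sketch (not the authors' code) over the frame {match, non-match}, where each salient feature contributes a mass function and 'theta' denotes the uncertainty mass assigned to the whole frame:

```python
def dempster_combine(m1, m2):
    """Dempster's rule over the frame {match, nonmatch}; 'theta' is the whole frame."""
    combined = {}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            if a == 'theta':
                key = b                      # theta intersected with X is X
            elif b == 'theta' or a == b:
                key = a
            else:                            # {match} vs {nonmatch}: empty intersection
                conflict += pa * pb
                continue
            combined[key] = combined.get(key, 0.0) + pa * pb
    # Normalize by 1 - K, the total conflicting mass
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Mass functions from two hypothetical landmark matchers (eyes, mouth)
eye_evidence = {'match': 0.7, 'nonmatch': 0.1, 'theta': 0.2}
mouth_evidence = {'match': 0.5, 'nonmatch': 0.2, 'theta': 0.3}
print(dempster_combine(eye_evidence, mouth_evidence))
```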

  3. Psychometric Testing of the Gordon Facial Muscle Weakness Assessment Tool

    Science.gov (United States)

    Gordon, Shirley C.; Blum, Cynthia Ann; Parcells, Dax Andrew

    2010-01-01

    School nurses may be the first health professionals to assess the onset of facial paralysis/muscle weakness in school-age children. The purpose of this study was to test the psychometric properties of the Gordon Facial Muscle Weakness Assessment Tool (GFMWT) developed by Gordon. Data were collected in two phases. In Phase 1, 4 content experts…

  4. 3D facial expression recognition based on histograms of surface differential quantities

    KAUST Repository

    Li, Huibin

    2011-01-01

    3D face models accurately capture facial surfaces, making a precise description of facial activities possible. In this paper, we present a novel mesh-based method for 3D facial expression recognition using two local shape descriptors. To characterize the shape information of the local neighborhood of facial landmarks, we calculate weighted statistical distributions of surface differential quantities, including a histogram of mesh gradient (HoG) and a histogram of shape index (HoS). A curvature estimation method based on normal cycle theory is employed on the 3D face models, alongside the common cubic-fitting curvature estimation method for comparison. Building on the basic fact that different expressions involve different local shape deformations, an SVM classifier with both linear and RBF kernels outperforms state-of-the-art results on the subset of the BU-3DFE database with the same experimental setting. © 2011 Springer-Verlag.
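
    A minimal sketch of the HoS descriptor named above: compute the Koenderink shape index from principal curvatures around a landmark, histogram it, and feed the histograms to an SVM. Curvature estimation from the mesh is assumed done elsewhere; synthetic curvatures and class labels stand in for real data:

```python
import numpy as np
from sklearn.svm import SVC

def shape_index(k1, k2, eps=1e-9):
    # Koenderink shape index in [-1, 1]; k1 >= k2 are principal curvatures
    denom = np.where(k2 - k1 > -eps, -eps, k2 - k1)   # guard umbilic points
    return (2.0 / np.pi) * np.arctan((k2 + k1) / denom)

def hos_descriptor(k1, k2, bins=16):
    hist, _ = np.histogram(shape_index(k1, k2), bins=bins, range=(-1, 1), density=True)
    return hist

rng = np.random.default_rng(1)
X, y = [], []
for label in range(6):                                 # six prototypic expressions
    for _ in range(20):                                # 20 synthetic "scans" per class
        k2 = rng.normal(label * 0.1, 1.0, 500)
        k1 = k2 + np.abs(rng.normal(1.0, 0.5, 500))    # enforce k1 >= k2
        X.append(hos_descriptor(k1, k2))
        y.append(label)

clf = SVC(kernel='rbf').fit(X[::2], y[::2])            # train on half the data
print("held-out accuracy:", clf.score(X[1::2], y[1::2]))
```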

  5. Psychopathic traits in adolescents and recognition of emotion in facial expressions

    Directory of Open Access Journals (Sweden)

    Silvio José Lemos Vasconcellos

    2014-12-01

    Full Text Available Recent studies have investigated the ability of adult psychopaths and children with psychopathy traits to identify specific facial expressions of emotion. Conclusive results have not yet been found regarding whether psychopathic traits are associated with a specific deficit in the ability to identify negative emotions such as fear and sadness. This study compared 20 adolescents with psychopathic traits and 21 adolescents without these traits in terms of their ability to recognize facial expressions of emotion, using facial stimuli presented for 200 ms, 500 ms, and 1 s exposures. Analyses indicated significant differences between the two groups' performances only for fear, and only at the 200 ms exposure. This finding is consistent with findings from other studies in the field and suggests that controlling the duration of exposure to affective stimuli in future studies may help to clarify the mechanisms underlying the facial affect recognition deficits of individuals with psychopathic traits.

  6. Facial Affect Recognition Training Through Telepractice: Two Case Studies of Individuals with Chronic Traumatic Brain Injury

    Directory of Open Access Journals (Sweden)

    John Williamson

    2015-07-01

    Full Text Available The use of a modified Facial Affect Recognition (FAR) training to identify emotions was investigated with two case studies of adults with moderate to severe chronic (> five years) traumatic brain injury (TBI). The modified FAR training was administered via telepractice to target social communication skills. Therapy consisted of identifying emotions through static facial expressions, personally reflecting on those emotions, and identifying sarcasm and emotions within social stories and role-play. Pre- and post-therapy measures included static facial photos to identify emotion and the Prutting and Kirchner Pragmatic Protocol for social communication. Both participants with chronic TBI showed gains in identifying facial emotions on the static photos.

  7. Reduced Reliance on Optimal Facial Information for Identity Recognition in Autism Spectrum Disorder

    Science.gov (United States)

    Leonard, Hayley C.; Annaz, Dagmara; Karmiloff-Smith, Annette; Johnson, Mark H.

    2013-01-01

    Previous research into face processing in autism spectrum disorder (ASD) has revealed atypical biases toward particular facial information during identity recognition. Specifically, a focus on features (or high spatial frequencies [HSFs]) has been reported for both face and nonface processing in ASD. The current study investigated the development…

  8. Does Facial Expression Recognition Provide a Toehold for the Development of Emotion Understanding?

    Science.gov (United States)

    Strand, Paul S.; Downs, Andrew; Barbosa-Leiker, Celestina

    2016-01-01

    The authors explored predictions from basic emotion theory (BET) that facial emotion expression recognition skills are insular with respect to their own development, and yet foundational to the development of emotional perspective-taking skills. Participants included 417 preschool children for whom estimates of these 2 emotion understanding…

  9. Externalizing and Internalizing Symptoms Moderate Longitudinal Patterns of Facial Emotion Recognition in Autism Spectrum Disorder

    Science.gov (United States)

    Rosen, Tamara E.; Lerner, Matthew D.

    2016-01-01

    Facial emotion recognition (FER) is thought to be a key deficit domain in autism spectrum disorder (ASD). However, the extant literature is based solely on cross-sectional studies; thus, little is known about even short-term intra-individual dynamics of FER in ASD over time. The present study sought to examine trajectories of FER in ASD youth over…

  10. Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?

    Science.gov (United States)

    Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia

    2011-01-01

    Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…

  11. Face Processing and Facial Emotion Recognition in Adults with Down Syndrome

    Science.gov (United States)

    Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial

    2008-01-01

    Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…

  12. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    Science.gov (United States)

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…

  13. Static and dynamic 3D facial expression recognition: A comprehensive survey

    NARCIS (Netherlands)

    Sandbach, G.; Zafeiriou, S.; Pantic, Maja; Yin, Lijun

    2012-01-01

    Automatic facial expression recognition constitutes an active research field due to the latest advances in computing technology that make the user's experience a clear priority. The majority of work conducted in this area involves 2D imagery, despite the problems this presents due to inherent pose…

  14. Facial Emotion Recognition in Children with High Functioning Autism and Children with Social Phobia

    Science.gov (United States)

    Wong, Nina; Beidel, Deborah C.; Sarver, Dustin E.; Sims, Valerie

    2012-01-01

    Recognizing facial affect is essential for effective social functioning. This study examines emotion recognition abilities in children aged 7-13 years with High Functioning Autism (HFA = 19), Social Phobia (SP = 17), or typical development (TD = 21). Findings indicate that all children identified certain emotions more quickly (e.g., happy [less…

  1. A novel dataset for real-life evaluation of facial expression recognition methodologies

    NARCIS (Netherlands)

    Siddiqi, Muhammad Hameed; Ali, Maqbool; Idris, Muhammad; Banos, Oresti; Lee, Sungyoung; Choo, Hyunseung

    2016-01-01

    One limitation seen among most of the previous methods is that they were evaluated under settings that are far from real-life scenarios. The reason is that the existing facial expression recognition (FER) datasets are mostly pose-based and assume a predefined setup. The expressions in these datasets…

  2. The Effect of Gender and Age Differences on the Recognition of Emotions from Facial Expressions

    DEFF Research Database (Denmark)

    Schneevogt, Daniela; Paggio, Patrizia

    2016-01-01

    …subjects. We conduct an emotion recognition task followed by two stereotype questionnaires with different genders and age groups. While recent findings (Krems et al., 2015) suggest that women are biased to see anger in neutral facial expressions posed by females, in our sample both genders assign higher…

  3. Role of temporal processing stages by inferior temporal neurons in facial recognition

    Directory of Open Access Journals (Sweden)

    Yasuko eSugase-Miyamoto

    2011-06-01

    Full Text Available In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of…

  4. Enhanced facial recognition for thermal imagery using polarimetric imaging.

    Science.gov (United States)

    Gurton, Kristan P; Yuffa, Alex J; Videen, Gorden W

    2014-07-01

    We present a series of long-wave-infrared (LWIR) polarimetric-based thermal images of facial profiles in which polarization-state information of the image-forming radiance is retained and displayed. The resultant polarimetric images show enhanced facial features, additional texture, and details that are not present in corresponding conventional thermal imagery. It has been generally thought that conventional thermal imagery (MidIR or LWIR) could not produce the detailed spatial information required for reliable human identification due to the so-called "ghosting" effect often seen in thermal imagery of human subjects. By using polarimetric information, we are able to extract subtle surface features of the human face, thus improving subject identification. Polarimetric image sets considered include the conventional thermal intensity image, S0, the two Stokes images, S1 and S2, and a Stokes image product called the degree-of-linear-polarization image.
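
    The Stokes products listed above reduce to simple per-pixel arithmetic once the polarimetric frames are registered. A small numpy sketch, assuming the common four-analyzer measurement scheme (0/45/90/135 degrees), which the abstract itself does not specify:

```python
import numpy as np

def stokes_images(i0, i45, i90, i135):
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (conventional thermal image)
    s1 = i0 - i90                        # horizontal vs. vertical polarization
    s2 = i45 - i135                      # +45 vs. -45 degree polarization
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)   # degree of linear polarization
    return s0, s1, s2, dolp

frames = [np.random.rand(128, 128) for _ in range(4)]   # stand-in LWIR analyzer frames
s0, s1, s2, dolp = stokes_images(*frames)
print(dolp.min(), dolp.max())
```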

  5. Composite Artistry Meets Facial Recognition Technology: Exploring the Use of Facial Recognition Technology to Identify Composite Images

    Science.gov (United States)

    2011-09-01

    …composite drawings containing suspects depicted with hats had to be modified to remove the headwear. This headwear caused problems with the...program's ability to distinguish a facial feature from the headwear. While this information was beneficial for the consumption of composite images for the…

  6. Recognition of Facial Expressions in Individuals with Elevated Levels of Depressive Symptoms: An Eye-Movement Study

    Directory of Open Access Journals (Sweden)

    Lingdan Wu

    2012-01-01

    Full Text Available Previous studies have consistently reported abnormal recognition of facial expressions in depression. However, it is still not clear whether this abnormality is due to an enhanced or impaired ability to recognize facial expressions, and which underlying cognitive systems are involved. The present study aimed to examine how individuals with elevated levels of depressive symptoms differ from controls on facial expression recognition and to assess attention and information processing using eye tracking. Forty participants (18 with elevated depressive symptoms) were instructed to label facial expressions depicting one of seven emotions. Results showed that the high-depression group, in comparison with the low-depression group, recognized facial expressions faster and with comparable accuracy. Furthermore, the high-depression group demonstrated a greater leftwards attention bias, which has been argued to be an indicator of hyperactivation of the right hemisphere during facial expression recognition.

  7. Characteristics of Chinese static facial expression recognition and emotion attribution in children with autism

    Institute of Scientific and Technical Information of China (English)

    顾莉萍; 静进; 金宇; 陈强; 范方; 徐桂凤; 黄赛君

    2012-01-01

    [Objective] To evaluate the ability and characteristics of facial expression recognition and emotion attribution in children with autism. [Methods] Photos from the Chinese Static Facial Expression photo set were chosen for the test. Nineteen children with autism [age (9.0±1.8) years, 17 boys and 2 girls]…

  8. Local Illumination Normalization and Facial Feature Point Selection for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Song HAN

    2013-03-01

    Full Text Available Face recognition systems must be robust to variation in factors such as facial expression, illumination, head pose, and aging. In particular, robustness to illumination variation is one of the most important problems to be solved for the practical use of face recognition systems. The Gabor wavelet is widely used in face detection and recognition because it makes it possible to simulate the function of the human visual system. In this paper, we propose a method for extracting Gabor wavelet features that is stable under variation in local illumination, and we show experimental results demonstrating its effectiveness.
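
    For reference, a bare-bones Gabor filter-bank feature extractor in the spirit of the approach above; the paper's illumination-normalization and feature-point-selection steps are omitted, and all filter parameters are illustrative rather than taken from the text:

```python
import cv2
import numpy as np

def gabor_features(gray, scales=(4, 8, 16), orientations=8):
    feats = []
    for lambd in scales:                         # wavelength of the sinusoidal carrier
        for k in range(orientations):
            theta = k * np.pi / orientations     # filter orientation
            kern = cv2.getGaborKernel((31, 31), sigma=0.56 * lambd,
                                      theta=theta, lambd=lambd,
                                      gamma=0.5, psi=0)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            feats.append(np.abs(resp).mean())    # pooled response magnitude per filter
    return np.array(feats)

face = np.random.rand(64, 64).astype(np.float32)  # stand-in face crop
print(gabor_features(face).shape)                 # (24,) = 3 scales x 8 orientations
```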

  9. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    Science.gov (United States)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally, approaches to FER consist of three main steps: face detection, feature extraction, and expression recognition. The recognition accuracy of FER hinges on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy (mRMR) geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
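
    The mRMR criterion greedily adds the feature with the highest relevance to the class label and the lowest redundancy with the features already chosen. A hedged sketch: relevance is estimated with scikit-learn's mutual information, redundancy is approximated here by feature correlation (a simplification, not the paper's exact estimator), and random data stand in for the geometric features:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import SVC

def mrmr(X, y, k):
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]           # start with the most relevant
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            # redundancy proxy: mean absolute correlation with chosen features
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = relevance[j] - red               # relevance minus redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))            # 30 candidate geometric features (stand-in)
y = rng.integers(0, 7, size=200)          # seven expression classes
cols = mrmr(X, y, k=8)
clf = SVC(decision_function_shape='ovo').fit(X[:, cols], y)   # one-against-one SVM
print("selected features:", cols)
```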

  10. Quality of life differences in patients with right- versus left-sided facial paralysis: Universal preference of right-sided human face recognition.

    Science.gov (United States)

    Ryu, Nam Gyu; Lim, Byung Woo; Cho, Jae Keun; Kim, Jin

    2016-09-01

    We investigated whether experiencing right- or left-sided facial paralysis would affect an individual's ability to recognize one side of the human face, using hybrid hemi-facial photos in a preliminary study. Further investigation looked at the relationship between facial recognition ability, stress, and quality of life. To investigate the predominance of one side of the human face in face recognition, 100 normal participants (right-handed: n = 97, left-handed: n = 3; right brain dominance: n = 56, left brain dominance: n = 44) answered a questionnaire that included hybrid hemi-facial photos developed to determine the superiority of one side for human face recognition. To determine differences in stress level and quality of life between individuals experiencing right- and left-sided facial paralysis, 100 patients (right side: 50, left side: 50; traumatic facial nerve paralysis excluded) answered a questionnaire comprising the facial disability index test and a quality of life measure (SF-36, Korean version). Regardless of handedness or hemispheric dominance, the proportion showing right-side predominance in human face recognition was larger than that showing left-side predominance (71% versus 12%; neutral: 17%). The facial disability index of patients with right-sided facial paralysis was lower than that of left-sided patients (68.8 ± 9.42 versus 76.4 ± 8.28), and the SF-36 scores of right-sided patients were lower than those of left-sided patients (119.07 ± 15.24 versus 123.25 ± 16.48; total score: 166). Given the universal preference for the right side in human face recognition, patients with right-sided facial paralysis showed worse psychological mood and social interaction than those with left-sided paralysis. This information is helpful to clinicians in that psychological and social factors should be considered when treating patients with facial paralysis. Copyright © 2016 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  11. Hybrid Feature Extraction-based Approach for Facial Parts Representation and Recognition

    Science.gov (United States)

    Rouabhia, C.; Tebbikh, H.

    2008-06-01

    Face recognition is a specialized form of image processing that has attracted considerable attention in computer vision. In this article, we develop a new facial recognition system, working on images from video sequences, dedicated to identifying persons whose faces are partly occluded. The system is based on a hybrid image feature extraction technique called ACPDL2D (Rouabhia et al. 2007), which combines two-dimensional principal component analysis and two-dimensional linear discriminant analysis with a neural network. We performed the feature extraction task on the eyes and the nose images separately, and then a Multi-Layer Perceptron classifier was used. Compared to the whole face, the simulation results are in favor of the facial parts in terms of memory capacity and recognition rate (99.41% for the eyes part, 98.16% for the nose part, and 97.25% for the whole face).
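
    As a rough illustration of the 2D-PCA half of the hybrid ACPDL2D idea (the 2D-LDA stage and the MLP classifier are omitted), eigenvectors of the image covariance matrix give a projection that is applied directly to each 2D crop, e.g., an eye or nose region:

```python
import numpy as np

def two_d_pca(images, n_components=5):
    mean = images.mean(axis=0)
    # Image covariance matrix: average of (A - mean)^T (A - mean)
    G = sum((a - mean).T @ (a - mean) for a in images) / len(images)
    _, vecs = np.linalg.eigh(G)                  # eigenvalues in ascending order
    proj = vecs[:, -n_components:]               # keep the top eigenvectors
    return [a @ proj for a in images]            # (height, n_components) features

crops = np.random.rand(30, 24, 32)               # 30 stand-in eye crops
print(two_d_pca(crops)[0].shape)                 # (24, 5)
```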

  12. Effects of Orientation on Recognition of Facial Affect

    Science.gov (United States)

    Cohen, M. M.; Mealey, J. B.; Hargens, Alan R. (Technical Monitor)

    1997-01-01

    The ability to discriminate facial features is often degraded when the orientation of the face and/or the observer is altered. Previous studies have shown that gross distortions of facial features can go unrecognized when the image of the face is inverted, as exemplified by the 'Margaret Thatcher' effect. This study examines how quickly erect and supine observers can distinguish between smiling and frowning faces that are presented at various orientations. The effects of orientation are of particular interest in space, where astronauts frequently view one another in orientations other than the upright. Sixteen observers viewed individual facial images of six people on a computer screen; on a given trial, the image was either smiling or frowning. Each image was viewed when it was erect and when it was rotated (rolled) by 45, 90, 135, 180, 225, and 270 degrees about the line of sight. The observers were required to respond as rapidly and accurately as possible to identify whether the face presented was smiling or frowning. Measures of reaction time were obtained when the observers were both upright and supine. Analyses of variance revealed that mean reaction time, which increased with stimulus rotation (F = 18.54, df = 7/15, p < 0.001), was 22% longer when the faces were inverted than when they were erect, but that the orientation of the observer had no significant effect on reaction time (F = 1.07, df = 1/15, p > .30). These data strongly suggest that the orientation of the image of a face on the observer's retina, but not its orientation with respect to gravity, is important in identifying the expression on the face.

  13. Recognition of the Cornelia de Lange syndrome phenotype with facial dysmorphology novel analysis.

    Science.gov (United States)

    Basel-Vanagaite, L; Wolf, L; Orin, M; Larizza, L; Gervasini, C; Krantz, I D; Deardoff, M A

    2016-05-01

    Facial analysis systems are becoming available to healthcare providers to aid in the recognition of dysmorphic phenotypes associated with a multitude of genetic syndromes. These technologies automatically detect facial points and extract various measurements from images to recognize dysmorphic features and evaluate similarities to known facial patterns (gestalts). To evaluate such systems' usefulness for supporting the clinical practice of healthcare professionals, the recognition accuracy of the Cornelia de Lange syndrome (CdLS) phenotype was examined with FDNA's automated facial dysmorphology novel analysis (FDNA) technology. In the first experiment, 2D facial images of CdLS patients with either an NIPBL or SMC1A gene mutation as well as non-CdLS patients which were assessed by dysmorphologists in a previous study were evaluated by the FDNA technology; the average detection rate of experts was 77% while the system's detection rate was 87%. In the second study, when a new set of NIPBL, SMC1A and non-CdLS patient photos was evaluated, the detection rate increased to 94%. The results from both studies indicated that the system's detection rate was comparable to that of dysmorphology experts. Therefore, utilizing such technologies may be a useful tool in a clinical setting. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  14. A Micro-GA Embedded PSO Feature Selection Approach to Intelligent Facial Emotion Recognition.

    Science.gov (United States)

    Mistry, Kamlesh; Zhang, Li; Neoh, Siew Chin; Lim, Chee Peng; Fielding, Ben

    2017-06-01

    This paper proposes a facial expression recognition system using evolutionary particle swarm optimization (PSO)-based feature optimization. The system first employs modified local binary patterns, which conduct horizontal and vertical neighborhood pixel comparison, to generate a discriminative initial facial representation. Then, a PSO variant embedded with the concept of a micro genetic algorithm (mGA), called mGA-embedded PSO, is proposed to perform feature optimization. It incorporates a nonreplaceable memory, a small-population secondary swarm, a new velocity updating strategy, a subdimension-based in-depth local facial feature search, and a cooperation of local exploitation and global exploration search mechanism to mitigate the premature convergence problem of conventional PSO. Multiple classifiers are used for recognizing seven facial expressions. Based on a comprehensive study using within- and cross-domain images from the extended Cohn Kanade and MMI benchmark databases, respectively, the empirical results indicate that our proposed system outperforms other state-of-the-art PSO variants, conventional PSO, classical GA, and other related facial expression recognition models reported in the literature by a significant margin.
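
    A toy numpy rendering of the "horizontal and vertical neighborhood pixel comparison" idea behind the modified LBP descriptor above; the paper's exact encoding, and the mGA-embedded PSO optimization stage, are not reproduced here:

```python
import numpy as np

def hv_lbp_histogram(img, bins=16):
    c = img[1:-1, 1:-1]                      # each interior pixel as the center
    # Compare with horizontal (left, right) and vertical (up, down) neighbours
    code = ((img[1:-1, :-2] >= c) * 1 +      # left
            (img[1:-1, 2:]  >= c) * 2 +      # right
            (img[:-2, 1:-1] >= c) * 4 +      # up
            (img[2:,  1:-1] >= c) * 8)       # down
    hist, _ = np.histogram(code, bins=bins, range=(0, 16), density=True)
    return hist                              # 16-bin pattern histogram

face = np.random.rand(64, 64)                # stand-in face crop
print(hv_lbp_histogram(face))
```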

  15. Surface Electromyography-Based Facial Expression Recognition in Bi-Polar Configuration

    Directory of Open Access Journals (Sweden)

    Mahyar Hamedi

    2011-01-01

    Full Text Available Problem statement: Facial expression recognition has improved recently and has become a significant issue in diagnostic and medical fields, particularly in the areas of assistive technology and rehabilitation. Apart from their usefulness, there are some problems in these applications, such as peripheral conditions, lighting, contrast, and the quality of video and images. Approach: The Facial Action Coding System (FACS) and some other image- or video-based methods have been applied previously. This study proposed two methods for recognizing 8 different facial expressions, namely natural (rest), happiness in three conditions, anger, rage, gesturing 'a' as in the word 'apple', and gesturing 'no' by pulling up the eyebrows, based on three SEMG channels in bipolar configuration. Raw signals were processed in three main steps (filtration, feature extraction, and active feature selection) sequentially. The processed data were fed into Support Vector Machine and Fuzzy C-Means classifiers to be classified into the 8 facial expression groups. Results: Recognition ratios of 91.8% and 80.4% were achieved for FCM and SVM, respectively. Conclusion: The results confirmed adequate accuracy and power in this field of study, and FCM showed better ability and performance in comparison with SVM. It is expected that, in the near future, new approaches exploiting the frequency bandwidth of each facial gesture will provide better results.
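
    The first two processing steps named above (filtration and RMS feature extraction) map onto a few lines of scipy/numpy. The cut-off frequencies, window length, and sampling rate below are plausible conventions for surface EMG, not values from the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, fs, lo=20.0, hi=450.0, order=4):
    # Butterworth band-pass with zero-phase filtering
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype='band')
    return filtfilt(b, a, x)

def rms_features(x, win=200):
    # One root-mean-square value per non-overlapping window
    n = len(x) // win
    segs = x[:n * win].reshape(n, win)
    return np.sqrt((segs ** 2).mean(axis=1))

fs = 2000.0                          # assumed SEMG sampling rate (Hz)
raw = np.random.randn(int(5 * fs))   # 5 s of stand-in bipolar SEMG
features = rms_features(bandpass(raw, fs))
print(features.shape)                # (50,) one RMS value per 100 ms window
```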

  16. The recognition of facial expressions of emotion in Alzheimer's disease: a review of findings.

    Science.gov (United States)

    McLellan, Tracey; Johnston, Lucy; Dalrymple-Alford, John; Porter, Richard

    2008-10-01

    To provide a selective review of the literature on the recognition of facial expressions of emotion in Alzheimer's disease (AD), to evaluate whether these patients show variation in their ability to recognise different emotions and whether any such impairments are instead because of a general decline in cognition. A narrative review based on relevant articles identified from PubMed and PsycInfo searches from 1987 to 2007 using keywords 'Alzheimer's', 'facial expression recognition', 'dementia' and 'emotion processing'. Although the literature is as yet limited, with several methodological inconsistencies, AD patients show poorer recognition of facial expressions, with particular difficulty with sad expressions. It is unclear whether poorer performance reflects the general cognitive decline and/or verbal or spatial deficits associated with AD or whether the deficits reflect specific neuropathology. This under-represented field of study may help to extend our understanding of social functioning in AD. Future work requires more detailed analyses of ancillary cognitive measures, more ecologically valid facial displays of emotion and a reference situation that more closely approximates an actual social interaction.

  17. Impaired recognition of prosody and subtle emotional facial expressions in Parkinson's disease.

    Science.gov (United States)

    Buxton, Sharon L; MacDonald, Lorraine; Tippett, Lynette J

    2013-04-01

    Accurately recognizing the emotional states of others is crucial for successful social interactions and social relationships. Individuals with Parkinson's disease (PD) have shown deficits in emotional recognition abilities although findings have been inconsistent. This study examined recognition of emotions from prosody and from facial emotional expressions with three levels of subtlety, in 30 individuals with PD (without dementia) and 30 control participants. The PD group were impaired on the prosody task, with no differential impairments in specific emotions. PD participants were also impaired at recognizing facial expressions of emotion, with a significant association between how well they could recognize emotions in the two modalities, even after controlling for disease severity. When recognizing facial expressions, the PD group had no difficulty identifying prototypical Ekman and Friesen (1976) emotional faces, but were poorer than controls at recognizing the moderate and difficult levels of subtle expressions. They were differentially impaired at recognizing moderately subtle expressions of disgust and sad expressions at the difficult level. Notably, however, they were impaired at recognizing happy expressions at both levels of subtlety. Furthermore how well PD participants identified happy expressions conveyed by either face or voice was strongly related to accuracy in the other modality. This suggests dysfunction of overlapping components of the circuitry processing happy expressions in PD. This study demonstrates the usefulness of including subtle expressions of emotion, likely to be encountered in everyday life, when assessing recognition of facial expressions.

  18. LWIR polarimetry for enhanced facial recognition in thermal imagery

    Science.gov (United States)

    Gurton, Kristan P.; Yuffa, Alex J.; Videen, Gorden

    2014-05-01

    We present a series of long-wave-infrared (LWIR) polarimetric-based thermal images of facial profiles in which polarization-state information of the image-forming radiance is retained and displayed. The resultant polarimetric images show enhanced facial features, additional texture, and details that are not present in the corresponding conventional thermal imagery. It has been generally thought that conventional thermal imagery (MidIR or LWIR) could not produce the detailed spatial information required for reliable human identification due to the so-called "ghosting" effect often seen in thermal imagery of human subjects. By using polarimetric information, we are able to extract subtle surface features of the human face, thus improving subject identification. The considered polarimetric image sets include the conventional thermal intensity image, S0, the two Stokes images, S1 and S2, and a Stokes image product called the degree-of-linear-polarization (DoLP) image. Finally, Stokes imagery is combined with Fresnel relations to extract additional 3D surface information.

  1. Human facial neural activities and gesture recognition for machine-interfacing applications.

    Science.gov (United States)

    Hamedi, M; Salleh, Sh-Hussain; Tan, T S; Ismail, K; Ali, J; Dee-Uam, C; Pavaganun, C; Yupapin, P P

    2011-01-01

    The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human-machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2-11 control commands applicable to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial-gesture EMGs were recorded from ten volunteers. The detected EMGs were passed through a band-pass filter, and root mean square features were extracted. Various combinations of gestures, with a different number of gestures in each group, were formed from the existing facial gestures. Finally, all combinations were trained and classified by a fuzzy c-means classifier, and the combinations with the highest recognition accuracy in each group were chosen. An average accuracy above 90% for the chosen combinations demonstrated their suitability as command controllers.
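
    Fuzzy c-means, the classifier family used above, assigns each feature vector a graded membership in every cluster rather than a hard label. A minimal numpy implementation of the generic algorithm (not the authors' trained gesture classifier; m = 2 and the iteration count are conventional defaults):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                            # memberships sum to 1 per sample
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-9
        # Standard membership update: u_ij proportional to d_ij^(-2/(m-1))
        U = 1.0 / (d ** (2 / (m - 1)) * np.sum(d ** (-2 / (m - 1)), axis=0))
    return centers, U

X = np.random.rand(110, 4)                        # stand-in RMS feature vectors
centers, U = fuzzy_c_means(X, c=11)               # eleven gesture classes
print(U.argmax(axis=0)[:10])                      # hard labels for the first samples
```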

  2. Newborns' Face Recognition: Role of Inner and Outer Facial Features

    Science.gov (United States)

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…

  3. Recognition of facial expressions by alcoholic patients: a systematic literature review.

    Science.gov (United States)

    Donadon, Mariana Fortunata; Osório, Flávia de Lima

    2014-01-01

    Alcohol abuse and dependence can cause a wide variety of cognitive, psychomotor, and visual-spatial deficits. It is questionable whether this condition is associated with impairments in the recognition of affective and/or emotional information. Such impairments may promote deficits in social cognition and, consequently, in the adaptation and interaction of alcohol abusers with their social environment. The aim of this systematic review was to systematize the literature on alcoholics' recognition of basic facial expressions in terms of the following outcome variables: accuracy, emotional intensity, and latency time. A systematic literature search in the PsycINFO, PubMed, and SciELO electronic databases, with no restrictions regarding publication year, was employed as the study methodology. The findings of some studies indicate that alcoholics have greater impairment in facial expression recognition tasks, while others could not differentiate the clinical group from controls. However, there was a trend toward greater deficits in alcoholics. Alcoholics displayed less accuracy in recognition of sadness and disgust and required greater emotional intensity to judge facial expressions corresponding to fear and anger. The current study was only able to identify trends in the chosen outcome variables. Future studies that aim to provide more precise evidence for the potential influence of alcohol on social cognition are needed.

  4. Influence of gender in the recognition of basic facial expressions: A critical literature review.

    Science.gov (United States)

    Forni-Santos, Larissa; Osório, Flávia L

    2015-09-22

    To conduct a systematic literature review of the influence of gender on the recognition of facial expressions of six basic emotions. We made a systematic search with the search terms (face OR facial) AND (processing OR recognition OR perception) AND (emotional OR emotion) AND (gender or sex) in the PubMed, PsycINFO, LILACS, and SciELO electronic databases for articles assessing outcomes related to response accuracy, latency, and emotional intensity. Article selection was performed according to parameters set by COCHRANE. The reference lists of the articles found through the database search were checked for additional references of interest. With respect to accuracy, women tend to perform better than men when all emotions are considered as a set. Regarding specific emotions, there seem to be no gender-related differences in the recognition of happiness, whereas results are quite heterogeneous for the remaining emotions, especially sadness, anger, and disgust. Fewer articles dealt with the parameters of response latency and emotional intensity, which hinders the generalization of their findings, especially in the face of their methodological differences. The analysis of the studies conducted to date does not allow definite conclusions concerning the role of the observer's gender in the recognition of facial emotion, mostly because of the absence of standardized methods of investigation.

  5. Neurocognition and symptoms identify links between facial recognition and emotion processing in schizophrenia: meta-analytic findings.

    Science.gov (United States)

    Ventura, Joseph; Wood, Rachel C; Jimenez, Amy M; Hellemann, Gerhard S

    2013-12-01

    In schizophrenia patients, one of the most commonly studied deficits of social cognition is emotion processing (EP), which has documented links to facial recognition (FR). But, how are deficits in facial recognition linked to emotion processing deficits? Can neurocognitive and symptom correlates of FR and EP help differentiate the unique contribution of FR to the domain of social cognition? A meta-analysis of 102 studies (combined n=4826) in schizophrenia patients was conducted to determine the magnitude and pattern of relationships between facial recognition, emotion processing, neurocognition, and type of symptom. Meta-analytic results indicated that facial recognition and emotion processing are strongly interrelated (r=.51). In addition, the relationship between FR and EP through voice prosody (r=.58) is as strong as the relationship between FR and EP based on facial stimuli (r=.53). Further, the relationship between emotion recognition, neurocognition, and symptoms is independent of the emotion processing modality - facial stimuli and voice prosody. The association between FR and EP that occurs through voice prosody suggests that FR is a fundamental cognitive process. The observed links between FR and EP might be due to bottom-up associations between neurocognition and EP, and not simply because most emotion recognition tasks use visual facial stimuli. In addition, links with symptoms, especially negative symptoms and disorganization, suggest possible symptom mechanisms that contribute to FR and EP deficits. © 2013 Elsevier B.V. All rights reserved.

  6. Recognition of facial expressions by alcoholic patients: a systematic literature review

    Directory of Open Access Journals (Sweden)

    Donadon MF

    2014-09-01

    Full Text Available Mariana Fortunata Donadon,1,2 Flávia de Lima Osório1,3,4 1Department of Neurosciences and Behavior, Medical School of Ribeirão Preto, University of São Paulo, 2Coordination for the Improvement of Higher Level Personnel-CAPS, 3Technology Institute for Translational Medicine, Ribeirão Preto, São Paulo, Brazil; 4Agency of São Paulo Research Foundation, São Paulo, Brazil. Background: Alcohol abuse and dependence can cause a wide variety of cognitive, psychomotor, and visual-spatial deficits. It is questionable whether this condition is associated with impairments in the recognition of affective and/or emotional information. Such impairments may promote deficits in social cognition and, consequently, in the adaptation and interaction of alcohol abusers with their social environment. The aim of this systematic review was to systematize the literature on alcoholics’ recognition of basic facial expressions in terms of the following outcome variables: accuracy, emotional intensity, and latency time. Methods: A systematic literature search in the PsycINFO, PubMed, and SciELO electronic databases, with no restrictions regarding publication year, was employed as the study methodology. Results: The findings of some studies indicate that alcoholics have greater impairment in facial expression recognition tasks, while others could not differentiate the clinical group from controls. However, there was a trend toward greater deficits in alcoholics. Alcoholics displayed less accuracy in recognition of sadness and disgust and required greater emotional intensity to judge facial expressions corresponding to fear and anger. Conclusion: The current study was only able to identify trends in the chosen outcome variables. Future studies that aim to provide more precise evidence for the potential influence of alcohol on social cognition are needed. Keywords: alcoholism, face, emotional recognition, facial expression, systematic review

  7. Recognition of Facial Expression Using Eigenvector Based Distributed Features and Euclidean Distance Based Decision Making Technique

    Directory of Open Access Journals (Sweden)

    Jeemoni Kalita

    2013-03-01

    Full Text Available In this paper, an eigenvector-based system is presented that recognizes facial expressions from digital facial images. In the approach, the images were first acquired, and five significant portions were cropped from each image to extract and store the eigenvectors specific to the expressions. The eigenvectors for the test images were also computed, and the input facial image was finally recognized by finding the minimum Euclidean distance between the test image and the different expressions.
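
    A compact eigenvector-plus-Euclidean-distance recognizer in the spirit of the abstract above, with random arrays standing in for the cropped facial regions (the five-region cropping itself is not shown):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
train = rng.random((42, 32 * 32))        # 42 flattened expression images (stand-in)
labels = np.repeat(np.arange(6), 7)      # six expressions, 7 samples each

pca = PCA(n_components=20).fit(train)    # eigenvectors of the training set
train_proj = pca.transform(train)

def recognize(img_vec):
    p = pca.transform(img_vec.reshape(1, -1))
    dists = np.linalg.norm(train_proj - p, axis=1)   # Euclidean distances in eigenspace
    return labels[int(np.argmin(dists))]             # nearest training sample wins

print(recognize(rng.random(32 * 32)))
```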

  8. Close Range Photogrammetry and Neural Network for Facial Recognition

    Directory of Open Access Journals (Sweden)

    Rami Al-Ruzouq

    2012-01-01

    Full Text Available Recently, there has been increasing interest in utilizing imagery in different fields such as archaeology, architecture, mechanical inspection, and biometric identification, where face recognition is considered one of the most important physiological characteristics; it is related to the shape and geometry of the face and is used for identification and verification of a person's identity. In this study, close range photogrammetry with overlapping photographs was used to create a three-dimensional model of the human face, from which the coordinates of selected object points were extracted and used to calculate five different geometric quantities serving as biometric identifiers for uniquely recognizing humans. Probabilistic neural networks, with their remarkable ability to derive meaning from complicated or imprecise data, then utilize the extracted geometric quantities to find patterns and detect trends that are too complex to be noticed by either humans or other computer techniques. Quantifiable dimensions based on geometric attributes, rather than radiometric characteristics, have been successfully extracted using close range photogrammetry. The Probabilistic Neural Network (PNN), a kind of radial basis network, was used to classify the geometric parameters for face recognition, where the designed recognition method is not affected by facial gesture or color and has lower cost compared with other techniques. The method is reliable and flexible with respect to the level of detail that describes the human surface. Experimental results using real data proved the feasibility and quality of the suggested approach.

  9. A Classifier Model based on the Features Quantitative Analysis for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Amir Jamshidnezhad

    2011-01-01

    Full Text Available In recent decades, computer technology has developed considerably in the use of intelligent systems for classification. The development of HCI systems depends heavily on accurate understanding of emotions. However, facial expressions are difficult to classify with mathematical models because of their natural variability. In this paper, quantitative analysis is used to find the most effective feature movements between the selected facial feature points, so features are extracted not only on the basis of psychological studies but also by quantitative methods that raise recognition accuracy. In this model, fuzzy logic and a genetic algorithm are used to classify facial expressions; the genetic algorithm, an exclusive attribute of the proposed model, tunes the membership functions and increases accuracy.

  10. The role of spatial frequency information in the recognition of facial expressions of pain.

    Science.gov (United States)

    Wang, Shan; Eccleston, Christopher; Keogh, Edmund

    2015-09-01

    Being able to detect pain from facial expressions is critical for pain communication. Alongside identifying the specific facial codes used in pain recognition, there are other types of more basic perceptual features, such as spatial frequency (SF), which refers to the amount of detail in a visual display. Low SF carries coarse information, which can be seen from a distance, and high SF carries fine-detailed information that can only be perceived when viewed close up. As this type of basic information has not been considered in the recognition of pain, we therefore investigated the role of low-SF and high-SF information in the decoding of facial expressions of pain. Sixty-four pain-free adults completed 2 independent tasks: a multiple expression identification task of pain and core emotional expressions and a dual expression "either-or" task (pain vs fear, pain vs happiness). Although both low-SF and high-SF information make the recognition of pain expressions possible, low-SF information seemed to play a more prominent role. This general low-SF bias would seem an advantageous way of potential threat detection, as facial displays will be degraded if viewed from a distance or in peripheral vision. One exception was found, however, in the "pain-fear" task, where responses were not affected by SF type. Together, this not only indicates a flexible role for SF information that depends on task parameters (goal context) but also suggests that in challenging visual conditions, we perceive an overall affective quality of pain expressions rather than detailed facial features.

  11. Emotional Processing, Recognition, Empathy and Evoked Facial Expression in Eating Disorders: An Experimental Study to Map Deficits in Social Cognition

    National Research Council Canada - National Science Library

    Cardi, Valentina; Corfield, Freya; Leppanen, Jenni; Rhind, Charlotte; Deriziotis, Stephanie; Hadjimichalis, Alexandra; Hibbs, Rebecca; Micali, Nadia; Treasure, Janet

    2015-01-01

    .... The aim of this study is to examine distinct processes of social-cognition in this patient group, including attentional processing and recognition, empathic reaction and evoked facial expression...

  12. The Effect of Gender and Age Differences on the Recognition of Emotions from Facial Expressions

    DEFF Research Database (Denmark)

    Schneevogt, Daniela; Paggio, Patrizia

    2016-01-01

    Recent studies have demonstrated gender and cultural differences in the recognition of emotions in facial expressions. However, most studies were conducted on American subjects. In this paper, we explore the generalizability of several findings to a non-American culture in the form of Danish subjects. We conduct an emotion recognition task followed by two stereotype questionnaires with different genders and age groups. While recent findings (Krems et al., 2015) suggest that women are biased to see anger in neutral facial expressions posed by females, in our sample both genders assign higher ratings of anger to all emotions expressed by females. Furthermore, we demonstrate an effect of gender on the fear-surprise confusion observed by Tomkins and McCarter (1964); females overpredict fear, while males overpredict surprise.

  13. Batch metadata assignment to archival photograph collections using facial recognition software

    Directory of Open Access Journals (Sweden)

    Kyle Banerjee

    2013-07-01

    Full Text Available Useful metadata is essential to giving individual images meaning and value within the context of a greater collection, as well as making them more discoverable. However, often little information is available about the photos themselves, so adding consistent metadata to large collections of digital and digitized photographs is a time-consuming process requiring highly experienced staff. By using facial recognition software, staff can identify individuals more quickly and reliably. Knowledge of the individuals in photos helps staff determine when and where photos were taken and also improves understanding of the subject matter. This article demonstrates simple techniques for using facial recognition software and command line tools to assign, modify, and read metadata for large archival photograph collections.
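
    One plausible way to script the workflow the article describes is sketched below with the open-source face_recognition library and ExifTool; the choice of the IPTC Keywords tag, the matching tolerance, and the file layout are our assumptions rather than the article's exact commands.

        # Sketch: tag archival photos with the names of recognized individuals.
        # Uses the face_recognition library for matching and calls ExifTool to
        # write IPTC keywords; tag choice and tolerance are illustrative.
        import subprocess
        import face_recognition

        def load_known(people):
            """people: dict of name -> path to a reference portrait."""
            encodings, names = [], []
            for name, path in people.items():
                image = face_recognition.load_image_file(path)
                faces = face_recognition.face_encodings(image)
                if faces:
                    encodings.append(faces[0])
                    names.append(name)
            return encodings, names

        def tag_photo(photo_path, known_encodings, known_names, tolerance=0.6):
            image = face_recognition.load_image_file(photo_path)
            for encoding in face_recognition.face_encodings(image):
                matches = face_recognition.compare_faces(known_encodings, encoding,
                                                         tolerance=tolerance)
                for name, hit in zip(known_names, matches):
                    if hit:  # append the person's name as an IPTC keyword
                        subprocess.run(["exiftool", "-overwrite_original",
                                        f"-Keywords+={name}", photo_path], check=True)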

  14. Fusion-based approach for long-range night-time facial recognition

    Science.gov (United States)

    Martin, Robert B.; Sluch, Mikhail; Kafka, Kristopher M.; Dolby, Andrew; Ice, Robert V.; Lemoff, Brian E.

    2014-06-01

    Long range identification using facial recognition is being pursued as a valuable surveillance tool. The capability to perform this task covertly and in total darkness greatly enhances the operators' ability to maintain a large distance between themselves and a possible hostile target. An active-SWIR video imaging system has been developed to produce high-quality long-range night/day facial imagery for this purpose. Most facial recognition techniques match a single input probe image against a gallery of possible match candidates. When resolution, wavelength, and uncontrolled conditions reduce the accuracy of single-image matching, multiple probe images of the same subject can be matched to the watch-list and the results fused to increase accuracy. If multiple probe images are acquired from video over a short period of time, the high correlation between the images tends to produce similar matching results, which should reduce the benefit of the fusion. In contrast, fusing matching results from multiple images acquired over a longer period of time, where the images show more variability, should produce a more accurate result. In general, image variables could include pose angle, field-of-view, lighting condition, facial expression, target to sensor distance, contrast, and image background. Long-range short wave infrared (SWIR) video was used to generate probe image datasets containing different levels of variability. Face matching results for each image in each dataset were fused, and the results compared.
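
    As a concrete illustration of score-level fusion over a multi-image probe set, the sketch below applies a simple mean rule per gallery identity before ranking; the mean rule and all names are illustrative choices, since the abstract does not specify the fusion operator.

        # Sketch of score-level fusion: match scores from several probe images of
        # the same subject are combined per gallery identity before ranking.
        # The mean rule is one common choice; max or product rules also appear.
        import numpy as np

        def fuse_and_rank(score_matrix, gallery_ids):
            """score_matrix: shape (n_probes, n_gallery); higher = better match."""
            fused = np.mean(score_matrix, axis=0)   # one fused score per identity
            order = np.argsort(fused)[::-1]         # best match first
            return [(gallery_ids[i], float(fused[i])) for i in order]

        # Example: three probe frames matched against a four-person watch list.
        scores = np.array([[0.62, 0.40, 0.55, 0.30],
                           [0.58, 0.45, 0.70, 0.28],
                           [0.65, 0.38, 0.60, 0.33]])
        print(fuse_and_rank(scores, ["A", "B", "C", "D"])[0])  # top candidate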

  15. Facial, vocal and musical emotion recognition is altered in paranoid schizophrenic patients.

    Science.gov (United States)

    Weisgerber, Anne; Vermeulen, Nicolas; Peretz, Isabelle; Samson, Séverine; Philippot, Pierre; Maurage, Pierre; De Graeuwe D'Aoust, Catherine; De Jaegere, Aline; Delatte, Benoît; Gillain, Benoît; De Longueville, Xavier; Constant, Eric

    2015-09-30

    Disturbed processing of emotional faces and voices is typically observed in schizophrenia. This deficit leads to impaired social cognition and interactions. In this study, we investigated whether impaired processing of emotions also affects musical stimuli, which are widely present in daily life and known for their emotional impact. Thirty schizophrenic patients and 30 matched healthy controls evaluated the emotional content of musical, vocal and facial stimuli. Schizophrenic patients are less accurate than healthy controls in recognizing emotion in music, voices and faces. Our results confirm impaired recognition of emotion in voice and face stimuli in schizophrenic patients and extend this observation to the recognition of emotion in musical stimuli.

  16. 人脸表情识别综述%Summary of facial expression recognition

    Institute of Scientific and Technical Information of China (English)

    王大伟; 周军; 梅红岩; 张素娥

    2014-01-01

    人脸表情识别作为情感计算的一个研究方向,构成了情感理解的基础,是实现人机交互智能的前提。人脸表情的极度细腻化消耗了大量的计算时间,影响了人机交互的时效性和体验感,所以人脸表情特征提取成为人脸表情识别的重要研究课题。总结了国内外近五年的人脸表情识别的稳固框架和新进展,主要针对人脸表情特征提取和表情分类方法进行了归纳,详细介绍了这两方面的主要算法及改进,并分析比较了各种算法的优势与不足。通过对国内外人脸表情识别应用中实际问题进行研究,给出了人脸表情识别方面仍然存在的挑战及不足。%As a research direction of affective computing, facial expression recognition constitutes the basis of emotion understanding and is a prerequisite for intelligent human-computer interaction. Because facial expressions are extremely subtle, their analysis consumes a large amount of computation time and degrades the timeliness and user experience of human-computer interaction; consequently, facial feature extraction has become an important research topic in facial expression recognition. This paper summarizes the stable frameworks and new progress in facial expression recognition at home and abroad over the past five years, focusing on methods for facial expression feature extraction and expression classification. The main algorithms and their improvements are described in detail, and the advantages and disadvantages of the various algorithms are analyzed and compared. Based on a study of practical problems in facial expression recognition applications, the remaining challenges and shortcomings in the field are pointed out.

  17. Reconhecimento de expressões faciais de emoções: padronização de imagens do Teste de Conhecimento Emocional = Recognition of facial expressions of emotions: standardization of pictures for Emotion Matching Tasks

    Directory of Open Access Journals (Sweden)

    Andrade, Nara Côrtes

    2013-01-01

    Full Text Available Emotions play a fundamental role in human socialization, and facial expressions are an important channel for their communication. The aim of this study was to obtain standardization data for the Brazilian population on the 83 photographs of facial expressions of basic emotions that make up the Emotion Matching Task (EMT) and to compare them with data from the US sample, analyzing cultural similarities and differences. Eighty university students from the city of Salvador (Bahia, Brazil) participated. Each photograph, presented sequentially by visual projection, was judged in terms of which emotion best corresponded to the facial expression. The results show a good level of agreement in the judgment of the images: the Brazilian and North American samples judged 95.2% of the images as expressing the same emotion. The present study corroborates the universality hypothesis of basic emotions, provides standardized images for use of the EMT in the Brazilian population, and discusses cultural differences in judgments of the intensity of emotional expressions.

  18. Verbal bias in recognition of facial emotions in children with Asperger syndrome.

    Science.gov (United States)

    Grossman, J B; Klin, A; Carter, A S; Volkmar, F R

    2000-03-01

    Thirteen children and adolescents with diagnoses of Asperger syndrome (AS) were matched with 13 nonautistic control children on chronological age and verbal IQ. They were tested on their ability to recognize simple facial emotions, as well as facial emotions paired with matching, mismatching, or irrelevant verbal labels. There were no differences between the groups at recognizing simple emotions but the Asperger group performed significantly worse than the control group at recognizing emotions when faces were paired with mismatching words (but not with matching or irrelevant words). The results suggest that there are qualitative differences from nonclinical populations in how children with AS process facial expressions. When presented with a more demanding affective processing task, individuals with AS showed a bias towards visual-verbal over visual-affective information (i.e., words over faces). Thus, children with AS may be utilizing compensatory strategies, such as verbal mediation, to process facial expressions of emotion.

  19. Violent video game players and non-players differ on facial emotion recognition.

    Science.gov (United States)

    Diaz, Ruth L; Wong, Ulric; Hodgins, David C; Chiu, Carina G; Goghari, Vina M

    2016-01-01

    Violent video game playing has been associated with both positive and negative effects on cognition. We examined whether playing two or more hours of violent video games a day, compared to not playing video games, was associated with a different pattern of recognition of five facial emotions, while controlling for general perceptual and cognitive differences that might also occur. Undergraduate students were categorized as violent video game players (n = 83) or non-gamers (n = 69) and completed a facial recognition task, consisting of an emotion recognition condition and a control condition of gender recognition. Additionally, participants completed questionnaires assessing their video game and media consumption, aggression, and mood. Violent video game players recognized fearful faces both more accurately and quickly and disgusted faces less accurately than non-gamers. Desensitization to violence, constant exposure to fear and anxiety during game playing, and the habituation to unpleasant stimuli, are possible mechanisms that could explain these results. Future research should evaluate the effects of violent video game playing on emotion processing and social cognition more broadly.

  20. Human facial neural activities and gesture recognition for machine-interfacing applications

    Directory of Open Access Journals (Sweden)

    Hamedi M

    2011-12-01

    Full Text Available M Hamedi, Sh-Hussain Salleh, TS Tan, K Ismail, J Ali, C Dee-Uam, C Pavaganun, PP Yupapin (Faculty of Biomedical and Health Science Engineering, Department of Biomedical Instrumentation and Signal Processing, University of Technology Malaysia, Skudai; Centre for Biomedical Engineering Transportation Research Alliance; Institute of Advanced Photonics Science, Nanotechnology Research Alliance, University of Technology Malaysia (UTM), Johor Bahru, Malaysia; College of Innovative Management, Valaya Alongkorn Rajabhat University, Pathum Thani; Nanoscale Science and Engineering Research Alliance (N'SERA), Advanced Research Center for Photonics, Faculty of Science, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand). Abstract: The authors present a new method of recognizing different human facial gestures through their neural activities and muscle movements, which can be used in machine-interfacing applications. Human–machine interface (HMI) technology utilizes human neural activities as input controllers for the machine. Recently, much work has been done on the specific application of facial electromyography (EMG)-based HMI, which has used limited and fixed numbers of facial gestures. In this work, a multipurpose interface is suggested that can support 2–11 control commands that can be applied to various HMI systems. The significance of this work is finding the most accurate facial gestures for any application with a maximum of eleven control commands. Eleven facial gesture EMGs are recorded from ten volunteers. Detected EMGs are passed through a band-pass filter and root mean square features are extracted. Various combinations of gestures, with a different number of gestures in each group, are made from the existing facial gestures. Finally, all combinations are trained and classified by a fuzzy c-means classifier. In conclusion, the combinations with the highest recognition accuracy in each group are chosen. An average accuracy

  1. Reduced Recognition of Dynamic Facial Emotional Expressions and Emotion-Specific Response Bias in Children with an Autism Spectrum Disorder

    Science.gov (United States)

    Evers, Kris; Steyaert, Jean; Noens, Ilse; Wagemans, Johan

    2015-01-01

    Emotion labelling was evaluated in two matched samples of 6-14-year old children with and without an autism spectrum disorder (ASD; N = 45 and N = 50, resp.), using six dynamic facial expressions. The Emotion Recognition Task proved to be valuable demonstrating subtle emotion recognition difficulties in ASD, as we showed a general poorer emotion…

  2. The Effect of Repeated Ketamine Infusion Over Facial Emotion Recognition in Treatment-Resistant Depression: A Preliminary Report.

    Science.gov (United States)

    Shiroma, Paulo R; Albott, C Sophia; Johns, Brian; Thuras, Paul; Wels, Joseph; Lim, Kelvin O

    2015-01-01

    In contrast to improvement in emotion recognition bias by traditional antidepressants, the authors report preliminary findings that changes in facial emotion recognition are not associated with response of depressive symptoms after repeated ketamine infusions or relapse during follow-up in treatment-resistant depression.

  4. Are there differential deficits in facial emotion recognition between paranoid and non-paranoid schizophrenia? A signal detection analysis.

    Science.gov (United States)

    Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long

    2013-10-30

    This study assessed facial emotion recognition abilities in subjects with paranoid (PS) and non-paranoid (NPS) schizophrenia using signal detection theory. We explored the differential deficits in facial emotion recognition in 44 PS and 30 NPS patients, compared to 80 healthy controls. We used morphed faces with different intensities of emotion and computed the sensitivity index (d') of each emotion. The results showed that performance differed between the schizophrenia and healthy control groups in the recognition of both negative and positive affects. The PS group performed worse than the healthy control group but better than the NPS group in overall performance. Performance differed between the NPS and healthy control groups in the recognition of all basic emotions and neutral faces; between the PS and healthy control groups in the recognition of angry faces; and between the PS and NPS groups in the recognition of happiness, anger, sadness, disgust, and neutral affects. The facial emotion recognition impairment in schizophrenia may reflect a generalized deficit rather than a negative-emotion-specific deficit. The PS group performed worse than the control group, but better than the NPS group, in facial expression recognition, with differential deficits between PS and NPS patients.
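
    For reference, the sensitivity index d' used in this study is conventionally computed from hit and false-alarm rates as d' = Z(hit rate) - Z(false-alarm rate). The sketch below follows that textbook formula; the smoothing applied to extreme rates is a common convention of ours, not a detail reported in the abstract.

        # Sensitivity index from signal detection theory:
        # d' = Z(hits) - Z(false alarms), with Z the inverse normal CDF.
        # The +0.5/+1.0 smoothing keeps rates of 0 or 1 from producing
        # infinities; the study's exact correction is not stated.
        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return norm.ppf(hit_rate) - norm.ppf(fa_rate)

        print(round(d_prime(hits=40, misses=10,
                            false_alarms=5, correct_rejections=45), 2))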

  5. Is the emotion recognition deficit associated with frontotemporal dementia caused by selective inattention to diagnostic facial features?

    Science.gov (United States)

    Oliver, Lindsay D; Virani, Karim; Finger, Elizabeth C; Mitchell, Derek G V

    2014-07-01

    Frontotemporal dementia (FTD) is a debilitating neurodegenerative disorder characterized by severely impaired social and emotional behaviour, including emotion recognition deficits. Though fear recognition impairments seen in particular neurological and developmental disorders can be ameliorated by reallocating attention to critical facial features, the possibility that similar benefits can be conferred to patients with FTD has yet to be explored. In the current study, we examined the impact of presenting distinct regions of the face (whole face, eyes-only, and eyes-removed) on the ability to recognize expressions of anger, fear, disgust, and happiness in 24 patients with FTD and 24 healthy controls. A recognition deficit was demonstrated across emotions by patients with FTD relative to controls. Crucially, removal of diagnostic facial features resulted in an appropriate decline in performance for both groups; furthermore, patients with FTD demonstrated a lack of disproportionate improvement in emotion recognition accuracy as a result of isolating critical facial features relative to controls. Thus, unlike some neurological and developmental disorders featuring amygdala dysfunction, the emotion recognition deficit observed in FTD is not likely driven by selective inattention to critical facial features. Patients with FTD also mislabelled negative facial expressions as happy more often than controls, providing further evidence for abnormalities in the representation of positive affect in FTD. This work suggests that the emotional expression recognition deficit associated with FTD is unlikely to be rectified by adjusting selective attention to diagnostic features, as has proven useful in other select disorders.

  6. Perceived Parenting Mediates Serotonin Transporter Gene (5-HTTLPR) and Neural System Function during Facial Recognition: A Pilot Study.

    Science.gov (United States)

    Nishikawa, Saori; Toshima, Tamotsu; Kobayashi, Masao

    2015-01-01

    This study examined changes in prefrontal oxy-Hb levels measured by NIRS (Near-Infrared Spectroscopy) during a facial-emotion recognition task in healthy adults, testing a mediational/moderational model of these variables. Fifty-three healthy adults (male = 35, female = 18) aged between 22 and 37 years (mean age = 24.05 years) provided saliva samples, completed an EMBU questionnaire (Swedish acronym for Egna Minnen Beträffande Uppfostran [My memories of upbringing]), and participated in a facial-emotion recognition task during NIRS recording. There was a main effect of maternal rejection on RoxH (right frontal activation during an ambiguous task), and a gene × environment (G × E) interaction on RoxH, suggesting that individuals who carry the SL or LL genotype and who endorse greater perceived maternal rejection show less right frontal activation than SL/LL carriers with lower perceived maternal rejection. Finally, perceived parenting style played a mediating role in right frontal activation via the 5-HTTLPR genotype. Early-perceived parenting might influence neural activity in an uncertain situation, i.e., rating ambiguous faces, among individuals with certain genotypes. This preliminary study makes a small contribution to the mapping of the influence of genes and behaviour on the neural system. More such attempts should be made in order to clarify the links.

  9. Deficits in facial emotion recognition indicate behavioral changes and impaired self-awareness after moderate to severe traumatic brain injury.

    Science.gov (United States)

    Spikman, Jacoba M; Milders, Maarten V; Visser-Keizer, Annemarie C; Westerhof-Evers, Herma J; Herben-Dekker, Meike; van der Naalt, Joukje

    2013-01-01

    Traumatic brain injury (TBI) is a leading cause of disability, specifically among younger adults. Behavioral changes are common after moderate to severe TBI and have adverse consequences for social and vocational functioning. It is hypothesized that deficits in social cognition, including facial affect recognition, might underlie these behavioral changes. Measurement of behavioral deficits is complicated, because the rating scales used rely on subjective judgement, often lack specificity and many patients provide unrealistically positive reports of their functioning due to impaired self-awareness. Accordingly, it is important to find performance based tests that allow objective and early identification of these problems. In the present study 51 moderate to severe TBI patients in the sub-acute and chronic stage were assessed with a test for emotion recognition (FEEST) and a questionnaire for behavioral problems (DEX) with a self and proxy rated version. Patients performed worse on the total score and on the negative emotion subscores of the FEEST than a matched group of 31 healthy controls. Patients also exhibited significantly more behavioral problems on both the DEX self and proxy rated version, but proxy ratings revealed more severe problems. No significant correlation was found between FEEST scores and DEX self ratings. However, impaired emotion recognition in the patients, and in particular of Sadness and Anger, was significantly correlated with behavioral problems as rated by proxies and with impaired self-awareness. This is the first study to find these associations, strengthening the proposed recognition of social signals as a condition for adequate social functioning. Hence, deficits in emotion recognition can be conceived as markers for behavioral problems and lack of insight in TBI patients. This finding is also of clinical importance since, unlike behavioral problems, emotion recognition can be objectively measured early after injury, allowing for early

  10. Facial and prosodic emotion recognition deficits associate with specific clusters of psychotic symptoms in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Huai-Hsuan Tseng

    Full Text Available BACKGROUND: Patients with schizophrenia perform significantly worse on emotion recognition tasks than healthy participants across several sensory modalities. Emotion recognition abilities are correlated with the severity of clinical symptoms, particularly negative symptoms. However, the relationships between specific deficits of emotion recognition across sensory modalities and the presentation of psychotic symptoms remain unclear. The current study aims to explore how emotion recognition ability across modalities and neurocognitive function correlate with clusters of psychotic symptoms in patients with schizophrenia. METHODS: 111 participants who met the DSM-IV diagnostic criteria for schizophrenia and 70 healthy participants performed a dual-modality emotion recognition task, the Diagnostic Analysis of Nonverbal Accuracy 2-Taiwan version (DANVA-2-TW), and selected subscales of the WAIS-III. Of these, 92 patients received neurocognitive evaluations, including the CPT and WCST, as well as the PANSS for clinical evaluation of symptomatology. RESULTS: The emotion recognition ability of patients with schizophrenia was significantly worse than that of healthy participants in both facial and vocal modalities, particularly for fearful emotion. An inverse correlation was noted between PANSS total score and recognition accuracy for happy emotion. Difficulty in recognizing happy emotion and an earlier age of onset, together with perseveration errors in the WCST, predicted total PANSS score. Furthermore, accuracy for happy emotion and age of onset were the only two significant predictors of delusion/hallucination. All the associations with happy emotion recognition primarily concerned happy prosody. DISCUSSION: Deficits in emotional processing in specific categories, i.e., happy emotion, together with deficits in executive function, may reflect dysfunction of brain systems underlying the severity of psychotic symptoms, in particular the positive dimension.

  11. Design and Realization of Web for Facial Recognition%Web方式人脸识别的设计与实现

    Institute of Scientific and Technical Information of China (English)

    闾素红; 任艳娜

    2012-01-01

    人脸识别技术涉及模式识别、图像处理、计算机视觉等多种学科知识,在近些年来一直是研究的热点,本文将人脸识别技术与数字视频监控技术相结合,设计了一种基于WEB方式下的远程人脸识别监控系统.%Facial recognition draws on many disciplines, such as pattern recognition, image processing, and computer vision, and has been a hot research topic in recent years. This paper combines facial recognition technology with digital video surveillance technology and designs a Web-based remote facial recognition monitoring system.

  12. Analysis, Interpretation, and Recognition of Facial Action Units and Expressions Using Neuro-Fuzzy Modeling

    CERN Document Server

    Khademi, Mahmoud; Manzuri-Shalmani, Mohammad T; Kiaei, Ali A

    2010-01-01

    In this paper an accurate real-time sequence-based system for representation, recognition, interpretation, and analysis of the facial action units (AUs) and expressions is presented. Our system has the following characteristics: 1) employing adaptive-network-based fuzzy inference systems (ANFIS) and temporal information, we developed a classification scheme based on neuro-fuzzy modeling of the AU intensity, which is robust to intensity variations, 2) using both geometric and appearance-based features, and applying efficient dimension reduction techniques, our system is robust to illumination changes and it can represent the subtle changes as well as temporal information involved in formation of the facial expressions, and 3) by continuous values of intensity and employing top-down hierarchical rule-based classifiers, we can develop accurate human-interpretable AU-to-expression converters. Extensive experiments on Cohn-Kanade database show the superiority of the proposed method, in comparison with support vect...

  13. Recognition of facial emotions and identity in patients with mesial temporal lobe and idiopathic generalized epilepsy: an eye-tracking study.

    Science.gov (United States)

    Gomez-Ibañez, Asier; Urrestarazu, Elena; Viteri, Cesar

    2014-11-01

    To describe the visual scanning pattern for facial identity recognition (FIR) and facial emotion recognition (FER) in patients with idiopathic generalized epilepsy (IGE) and mesial temporal lobe epilepsy (MTLE). The secondary endpoint was to correlate the results with cognitive function. The Benton Facial Recognition Test (BFRT) and the Ekman & Friesen series were used for FIR and FER, respectively, in 23 controls, 20 IGE and 19 MTLE patients. Eye movements were recorded by a Hi-Speed eye-tracker system. Neuropsychological tools explored cognitive function. The correct FIR rate was 78% in controls, 70.7% in IGE and 67.4% (p=0.009) in MTLE patients. FER hits reached 82.7% in controls, 74.3% in IGE (p=0.006) and 73.4% in MTLE (p=0.002) groups. IGE patients failed in disgust (p=0.005) and MTLE ones in fear (p=0.009) and disgust (p=0.03). FER correlated with neuropsychological scores, particularly verbal fluency (r=0.542, p<0.001). Eye-tracking revealed that controls scanned faces more diffusely than IGE and MTLE patients for FIR, who tended toward the top facial areas. Longer scanning of the top facial area was found in all three groups for FER. The gap between top and bottom facial region fixation times decreased in MTLE patients, with more but shorter fixations in the bottom facial region. However, none of these findings were statistically significant. FIR was impaired in MTLE patients, and FER in both IGE and MTLE, particularly for fear and disgust. Although not statistically significant, those with impaired FER tended to perform more diffuse eye-tracking over the faces and to have cognitive dysfunction. Copyright © 2014 British Epilepsy Association. Published by Elsevier Ltd. All rights reserved.

  14. Automatic Human Facial Expression Recognition Based on Integrated Classifier From Monocular Video with Uncalibrated Camera

    Directory of Open Access Journals (Sweden)

    Yu Tao

    2017-01-01

    Full Text Available An automatic recognition framework for human facial expressions from a monocular video with an uncalibrated camera is proposed. The expression characteristics are first acquired from a kind of deformable template, similar to a facial muscle distribution. After associated regularization, the time sequences from the trait changes in space-time under complete expressional production are then arranged line by line in a matrix. Next, the matrix dimensionality is reduced by a method of manifold learning of neighborhood-preserving embedding. Finally, the refined matrix containing the expression trait information is recognized by a classifier that integrates the hidden conditional random field (HCRF) and support vector machine (SVM). In an experiment using the Cohn–Kanade database, the proposed method showed a comparatively higher recognition rate than the individual HCRF or SVM methods in direct recognition from two-dimensional human face traits. Moreover, the proposed method was shown to be more robust than the typical Kotsia method because the former contains more structural characteristics of the data to be classified in space-time.

  15. Misrecognition of facial expressions in delinquents

    Directory of Open Access Journals (Sweden)

    Matsuura Naomi

    2009-09-01

    Full Text Available Background: Previous reports have suggested impairment in facial expression recognition in delinquents, but controversy remains with respect to how such recognition is impaired. To address this issue, we investigated facial expression recognition in delinquents in detail. Methods: We tested 24 male adolescent/young adult delinquents incarcerated in correctional facilities. We compared their performances with those of 24 age- and gender-matched control participants. Using standard photographs of facial expressions illustrating six basic emotions, participants matched each emotional facial expression with an appropriate verbal label. Results: Delinquents were less accurate in the recognition of facial expressions that conveyed disgust than were control participants. The delinquents misrecognized the facial expressions of disgust as anger more frequently than did controls. Conclusion: These results suggest that one of the underpinnings of delinquency might be impaired recognition of emotional facial expressions, with a specific bias toward interpreting disgusted expressions as hostile angry expressions.

  16. Facial Action Unit Recognition under Incomplete Data Based on Multi-label Learning with Missing Labels

    KAUST Repository

    Li, Yongqiang

    2016-07-07

    Facial action unit (AU) recognition has been applied in a wide range of fields and has attracted great attention in the past two decades. Most existing works on AU recognition assume that the complete label assignment for each training image is available, which is often not the case in practice. Labeling AUs is an expensive and time-consuming process. Moreover, due to AU ambiguity and subjective difference, some AUs are difficult to label reliably and confidently. Many AU recognition works train the classifier for each AU independently, which has a high computation cost and ignores the dependency among different AUs. In this work, we formulate AU recognition under incomplete data as a multi-label learning with missing labels (MLML) problem. Most existing MLML methods employ the same features for all classes. However, we find this setting unreasonable in AU recognition, as the occurrence of different AUs produces changes in skin surface displacement or face appearance in different face regions. If shared features are used for all AUs, much noise will be involved due to the occurrence of other AUs, so the changes associated with specific AUs cannot be clearly highlighted, leading to performance degradation. Instead, we propose to extract the most discriminative features for each AU individually, learned by a supervised method. The learned features are further embedded into the instance-level label smoothness term of our model, which also includes label consistency and class-level label smoothness. Both a global solution using st-cut and an approximated solution using conjugate gradient (CG) descent are provided. Experiments on both posed and spontaneous facial expression databases demonstrate the superiority of the proposed method in comparison with several state-of-the-art works.
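
    The authors' full MLML model (per-AU feature learning plus label-consistency and label-smoothness terms solved via st-cut or CG) is beyond a short sketch, but the problem setting it improves on, training per-AU classifiers while simply masking out missing labels, can be illustrated in a few lines. The code below is that simplified baseline, entirely our own construction, not the paper's method.

        # Simplified baseline for AU recognition with missing labels: one linear
        # classifier per AU, trained only on samples where that AU's label is
        # observed (labels: +1 present, -1 absent, 0 missing). This is NOT the
        # paper's MLML model, just the problem setting it improves upon.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def train_per_au(X, Y):
            """X: (n_samples, n_features); Y: (n_samples, n_aus) in {-1, 0, +1}."""
            models = []
            for au in range(Y.shape[1]):
                observed = Y[:, au] != 0        # mask out missing labels
                clf = LogisticRegression(max_iter=1000)
                clf.fit(X[observed], Y[observed, au])
                models.append(clf)
            return models

        def predict_aus(models, X):
            return np.column_stack([m.predict(X) for m in models])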

  17. Enhanced retinal modeling for face recognition and facial feature point detection under complex illumination conditions

    Science.gov (United States)

    Cheng, Yong; Li, Zuoyong; Jiao, Liangbao; Lu, Hong; Cao, Xuehong

    2016-07-01

    We improved classic retinal modeling to alleviate the adverse effect of complex illumination on face recognition and extracted robust image features. Our improvements on classic retinal modeling included three aspects. First, a combined filtering scheme was applied to simulate the functions of horizontal and amacrine cells for accurate local illumination estimation. Second, we developed an optimal threshold method for illumination classification. Finally, we proposed an adaptive factor acquisition model based on the arctangent function. Experimental results on the combined Yale B; the Carnegie Mellon University poses, illumination, and expression; and the Labeled Face Parts in the Wild databases show that the proposed method can effectively alleviate illumination differences between images under complex illumination conditions, which is helpful for improving the accuracy of face recognition and of facial feature point detection.
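
    As a loose illustration of the processing chain described (local illumination estimation followed by an arctangent-based adaptive factor), here is a generic sketch; the Gaussian estimator, the gain formula, and every constant are our own choices, since the abstract does not give the model's equations.

        # Generic sketch of retina-inspired illumination normalization: estimate
        # local illumination with a Gaussian low-pass, then compress with an
        # arctangent nonlinearity whose gain adapts to the local estimate.
        # All constants and the exact formula are illustrative, not the paper's.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def normalize_illumination(image, sigma=8.0, eps=1e-6):
            img = image.astype(float) / 255.0
            illum = gaussian_filter(img, sigma=sigma)     # local illumination estimate
            factor = 1.0 / (illum + eps)                  # adaptive gain: darker -> stronger boost
            out = np.arctan(factor * img) / (np.pi / 2.0) # arctangent compression to [0, 1)
            return (out * 255.0).astype(np.uint8)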

  18. Subjective disturbance of perception is related to facial affect recognition in schizophrenia.

    Science.gov (United States)

    Comparelli, Anna; De Carolis, Antonella; Corigliano, Valentina; Romano, Silvia; Kotzalidis, Giorgio D; Campana, Chiara; Ferracuti, Stefano; Tatarelli, Roberto; Girardi, Paolo

    2011-10-01

    To examine the relationship between facial affect recognition (FAR) and subjective perceptual disturbances (SPDs), we assessed SPDs in 82 patients with DSM-IV schizophrenia (44 with first-episode psychosis [FEP] and 38 with multiple episodes [ME]) using two subscales of the Frankfurt Complaint Questionnaire (FCQ), WAS (simple perception) and WAK (complex perception). Emotional judgment ability was assessed using Ekman and Friesen's FAR task. Impaired recognition of emotion correlated with scores on the WAS but not on the WAK. The association was significant in the entire group and in the ME group. FAR was more impaired in the ME than in the FEP group. Our findings suggest that there is a relationship between SPDs and FAR impairment in schizophrenia, particularly in multiple-episode patients.

  19. Facial Expression Recognition by Supervised Independent Component Analysis Using MAP Estimation

    Science.gov (United States)

    Chen, Fan; Kotani, Kazunori

    Permutation ambiguity of the classical Independent Component Analysis (ICA) may cause problems in feature extraction for pattern classification. Especially when only a small subset of components is derived from data, these components may not be most distinctive for classification, because ICA is an unsupervised method. We include a selective prior for de-mixing coefficients into the classical ICA to alleviate the problem. Since the prior is constructed upon the classification information from the training data, we refer to the proposed ICA model with a selective prior as a supervised ICA (sICA). We formulated the learning rule for sICA by taking a Maximum a Posteriori (MAP) scheme and further derived a fixed point algorithm for learning the de-mixing matrix. We investigate the performance of sICA in facial expression recognition from the aspects of both correct rate of recognition and robustness even with few independent components.

  20. Emotion Recognition following Pediatric Traumatic Brain Injury: Longitudinal Analysis of Emotional Prosody and Facial Emotion Recognition

    Science.gov (United States)

    Schmidt, Adam T.; Hanten, Gerri R.; Li, Xiaoqi; Orsten, Kimberley D.; Levin, Harvey S.

    2010-01-01

    Children with closed head injuries often experience significant and persistent disruptions in their social and behavioral functioning. Studies with adults sustaining a traumatic brain injury (TBI) indicate deficits in emotion recognition and suggest that these difficulties may underlie some of the social deficits. The goal of the current study was…

  1. 人脸面部表情识别%Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    傅栩雨; 叶健东; 王鹏; 曾颖森

    2015-01-01

    In recent years, interaction and intelligence have attracted increasing attention. Facial expression recognition, a significant part of artificial intelligence, makes human-machine interaction friendlier and more intelligent through the recognition of facial emotion. This paper describes the complete emotion recognition pipeline, from real-time camera images to the final recognition result and its display. Rather than focusing on a single component, the paper sketches the whole process, introduces the various aspects involved one by one from theory to application, points out the specific methods used, and selects and compares functionally similar modules in light of the practical application process.%近年来交互,智能成为了大家很关注的问题,人脸表情识别是人工智能中有重大意义的一部分,通过面部情绪的识别,增进人机交往的友好性和智能性。本文讲述了情绪识别的完整过程,从摄像头的实时影像中开始到最后实现情绪识别,显示识别结果。不单一地侧重某一部分内容,而进行整体过程的勾画。同时将涉及到的多方面内容从原理到应用讲逐个讲解,点明使用的具体方法,结合实际应用过程对功能相近的模块进行选择和对比。

  2. Facial emotion recognition system for autistic children: a feasible study based on FPGA implementation.

    Science.gov (United States)

    Smitha, K G; Vinod, A P

    2015-11-01

    Children with autism spectrum disorder have difficulty understanding emotional and mental states from the facial expressions of the people they interact with. The inability to understand other people's emotions hinders their interpersonal communication. Though many facial emotion recognition algorithms have been proposed in the literature, they are mainly intended for processing by a personal computer, which limits their usability in on-the-move applications where portability is desired. A portable system would ensure ease of use and real-time emotion recognition, and would aid immediate feedback while communicating with caretakers. Principal component analysis (PCA) has been identified as the least complex feature extraction algorithm to implement in hardware. In this paper, we present a detailed study of serial and parallel implementations of PCA in order to identify the most feasible method for realizing a portable emotion detector for autistic children. The proposed emotion recognizer architectures are implemented on a Virtex 7 XC7VX330T FFG1761-3 FPGA. We achieved 82.3% detection accuracy for a word length of 8 bits.

  3. Elementary neurocognitive function, facial affect recognition and social-skills in schizophrenia.

    Science.gov (United States)

    Meyer, Melissa B; Kurtz, Matthew M

    2009-05-01

    Social-skill deficits are pervasive in schizophrenia and negatively impact many key aspects of functioning. Prior studies have found that measures of elementary neurocognition and social cognition are related to social-skills. In the present study we selected a range of neurocognitive measures and examined their relationship with identification of happy and sad faces and performance-based social-skills. Fifty-three patients with schizophrenia or schizoaffective disorder participated. Results revealed that: 1) visual vigilance, problem-solving and affect recognition were related to social-skill; 2) links between problem-solving and social-skill, but not visual vigilance and social-skill, remained significant when estimates of verbal intelligence were controlled; 3) affect recognition deficits explained unique variance in social-skill after neurocognitive variables were controlled; and 4) affect recognition deficits partially mediated the relationship of visual vigilance and social-skill. These results support the conclusion that facial affect recognition deficits are a crucial domain of impairment in schizophrenia that both contribute unique variance to social-skill deficits and may also mediate the relationship between some aspects of neurocognition and social-skill. These findings may help guide the development and refinement of cognitive and social-cognitive remediation methods for social-skill impairment.

  4. An Efficient Multimodal 2D + 3D Feature-based Approach to Automatic Facial Expression Recognition

    KAUST Repository

    Li, Huibin

    2015-07-29

    We present a fully automatic multimodal 2D + 3D feature-based facial expression recognition approach and demonstrate its performance on the BU-3DFE database. Our approach combines multi-order gradient-based local texture and shape descriptors in order to achieve efficiency and robustness. First, a large set of fiducial facial landmarks of 2D face images, along with their 3D face scans, are localized using a novel algorithm, namely incremental Parallel Cascade of Linear Regression (iPar-CLR). Then, a novel Histogram of Second Order Gradients (HSOG) based local image descriptor, in conjunction with the widely used first-order gradient-based SIFT descriptor, is used to describe the local texture around each 2D landmark. Similarly, the local geometry around each 3D landmark is described by two novel local shape descriptors constructed using first-order and second-order surface differential geometry quantities, i.e., Histogram of mesh Gradients (meshHOG) and Histogram of mesh Shape index (curvature quantization, meshHOS). Finally, the Support Vector Machine (SVM) based recognition results of all 2D and 3D descriptors are fused at both feature level and score level to further improve the accuracy. Comprehensive experimental results demonstrate that there exist impressive complementary characteristics between the 2D and 3D descriptors. We use the BU-3DFE benchmark to compare our approach to state-of-the-art ones. Our multimodal feature-based approach outperforms the others by achieving an average recognition accuracy of 86.32%. Moreover, a good generalization ability is shown on the Bosphorus database.

  5. Learning weighted sparse representation of encoded facial normal information for expression-robust 3D face recognition

    KAUST Repository

    Li, Huibin

    2011-10-01

    This paper proposes a novel approach for 3D face recognition by learning a weighted sparse representation of encoded facial normal information. To comprehensively describe the 3D facial surface, the three components of the normal vector, in the X, Y, and Z planes respectively, are encoded locally into their corresponding normal pattern histograms. These are finally fed to a sparse representation classifier enhanced by learning-based spatial weights. Experimental results achieved on the FRGC v2.0 database prove that the proposed encoded normal information is much more discriminative than the original normal information. Moreover, the patch-based weights learned using the FRGC v1.0 and Bosphorus datasets also demonstrate the importance of each facial physical component for 3D face recognition. © 2011 IEEE.

  6. Deficits in Facial Expression Recognition in Male Adolescents with Early-Onset or Adolescence-Onset Conduct Disorder

    Science.gov (United States)

    Fairchild, Graeme; Van Goozen, Stephanie H. M.; Calder, Andrew J.; Stollery, Sarah J.; Goodyer, Ian M.

    2009-01-01

    Background: We examined whether conduct disorder (CD) is associated with deficits in facial expression recognition and, if so, whether these deficits are specific to the early-onset form of CD, which emerges in childhood. The findings could potentially inform the developmental taxonomic theory of antisocial behaviour, which suggests that…

  8. A new look at emotion perception: Concepts speed and shape facial emotion recognition.

    Science.gov (United States)

    Nook, Erik C; Lindquist, Kristen A; Zaki, Jamil

    2015-10-01

    Decades ago, the "New Look" movement challenged how scientists thought about vision by suggesting that conceptual processes shape visual perceptions. Currently, affective scientists are likewise debating the role of concepts in emotion perception. Here, we utilized a repetition-priming paradigm in conjunction with signal detection and individual difference analyses to examine how providing emotion labels-which correspond to discrete emotion concepts-affects emotion recognition. In Study 1, pairing emotional faces with emotion labels (e.g., "sad") increased individuals' speed and sensitivity in recognizing emotions. Additionally, individuals with alexithymia-who have difficulty labeling their own emotions-struggled to recognize emotions based on visual cues alone, but not when emotion labels were provided. Study 2 replicated these findings and further demonstrated that emotion concepts can shape perceptions of facial expressions. Together, these results suggest that emotion perception involves conceptual processing. We discuss the implications of these findings for affective, social, and clinical psychology.

  9. 人脸表情识别研究进展%Research Advance of Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    黄建; 李文书; 高玉娟

    2016-01-01

    人脸表情识别(Facial Expression Recognition,FER)是计算机视觉、机器学习、人工智能等领域的重要研究方向,目前已经成为国内外学者的研究热点.介绍了FER系统流程,总结了表情特征提取和表情分类的常用方法以及近年来国内外学者对这些方法的改进,并对这些方法的优缺点进行比较.最后,对目前FER研究的难点问题进行了分析,并对FER未来的发展方向进行展望.%Facial Expression Recognition (FER) is an important research direction in computer vision, machine learning, and artificial intelligence, and has become a research hotspot for scholars at home and abroad. This paper introduces the FER system pipeline, summarizes the common methods for expression feature extraction and expression classification as well as the improvements to these methods made in recent years, and compares their advantages and disadvantages. Finally, the difficult problems in current FER research are analyzed and future directions for FER are discussed.

  10. Learning Expressionlets via Universal Manifold Model for Dynamic Facial Expression Recognition

    Science.gov (United States)

    Liu, Mengyi; Shan, Shiguang; Wang, Ruiping; Chen, Xilin

    2016-12-01

    Facial expression is a temporally dynamic event which can be decomposed into a set of muscle motions occurring in different facial regions over various time intervals. For dynamic expression recognition, two key issues, temporal alignment and semantics-aware dynamic representation, must be taken into account. In this paper, we attempt to solve both problems via manifold modeling of videos based on a novel mid-level representation, i.e., the expressionlet. Specifically, our method contains three key stages: 1) each expression video clip is characterized as a spatial-temporal manifold (STM) formed by dense low-level features; 2) a Universal Manifold Model (UMM) is learned over all low-level features and represented as a set of local modes to statistically unify all the STMs; and 3) the local modes on each STM are instantiated by fitting to the UMM, and the corresponding expressionlet is constructed by modeling the variations in each local mode. With the above strategy, expression videos are naturally aligned both spatially and temporally. To enhance the discriminative power, the expressionlet-based STM representation is further processed with discriminant embedding. Our method is evaluated on four public expression databases: CK+, MMI, Oulu-CASIA, and FERA. In all cases, our method outperforms the known state of the art by a large margin.

  11. Facial expression recognition takes longer in the posterior superior temporal sulcus than in the occipital face area.

    Science.gov (United States)

    Pitcher, David

    2014-07-02

    Neuroimaging studies have identified a face-selective region in the right posterior superior temporal sulcus (rpSTS) that responds more strongly during facial expression recognition tasks than during facial identity recognition tasks, but precisely when the rpSTS begins to causally contribute to expression recognition is unclear. The present study addressed this issue using transcranial magnetic stimulation (TMS). In Experiment 1, repetitive TMS delivered over the rpSTS of human participants, at a frequency of 10 Hz for 500 ms, selectively impaired a facial expression task but had no effect on a matched facial identity task. In Experiment 2, participants performed the expression task only while double-pulse TMS (dTMS) was delivered over the rpSTS or over the right occipital face area (rOFA), a face-selective region in lateral occipital cortex, at different latencies up to 210 ms after stimulus onset. Task performance was selectively impaired when dTMS was delivered over the rpSTS at 60-100 ms and 100-140 ms. dTMS delivered over the rOFA impaired task performance at 60-100 ms only. These results demonstrate that the rpSTS causally contributes to expression recognition and that it does so over a longer time-scale than the rOFA. This difference in the length of the TMS induced impairment between the rpSTS and the rOFA suggests that the neural computations that contribute to facial expression recognition in each region are functionally distinct.

  12. Geometric Feature-Based Facial Expression Recognition in Image Sequences Using Multi-Class AdaBoost and Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Joonwhoan Lee

    2013-06-01

    Full Text Available Facial expressions are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper, we present a novel method for fully automatic facial expression recognition in facial image sequences. As the facial expression evolves over time, facial landmarks are automatically tracked in consecutive video frames using displacements based on elastic bunch graph matching displacement estimation. Feature vectors from individual landmarks, as well as pairs of landmarks, are extracted from the tracking results and normalized with respect to the first frame in the sequence. The prototypical expression sequence for each class of facial expression is formed by taking the median of the landmark tracking results from the training facial expression sequences. Multi-class AdaBoost with a dynamic time warping similarity distance between the feature vector of the input facial expression and the prototypical facial expression is used as a weak classifier to select the subset of discriminative feature vectors. Finally, two methods for facial expression recognition are presented: either using multi-class AdaBoost with dynamic time warping, or using a support vector machine on the boosted feature vectors. The results on the Cohn-Kanade (CK+) facial expression database show a recognition accuracy of 95.17% and 97.35% using multi-class AdaBoost and support vector machines, respectively.
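
    The weak classifiers here rest on a dynamic time warping (DTW) distance between an input feature sequence and a per-class prototype sequence. A minimal sketch of that distance follows, on placeholder 2-D landmark trajectories; the paper's actual features are higher-dimensional landmark-displacement vectors.

      # Minimal dynamic time warping (DTW) distance between two trajectories,
      # the similarity measure underlying the paper's weak classifiers.
      # Sequences are placeholder 2-D landmark tracks of different lengths.
      import numpy as np

      def dtw_distance(a, b):
          n, m = len(a), len(b)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      sequence = np.random.rand(40, 2)    # placeholder input trajectory
      prototype = np.random.rand(55, 2)   # placeholder class prototype (median sequence)
      print(dtw_distance(sequence, prototype))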

  13. The Development of Dynamic Facial Expression Recognition at Different Intensities in 4- to 18-Year-Olds

    Science.gov (United States)

    Montirosso, Rosario; Peverelli, Milena; Frigerio, Elisa; Crespi, Monica; Borgatti, Renato

    2010-01-01

    The primary purpose of this study was to examine the effect of the intensity of emotion expression on children's developing ability to label emotion during a dynamic presentation of five facial expressions (anger, disgust, fear, happiness, and sadness). A computerized task (AFFECT--animated full facial expression comprehension test) was used to…

  15. Assessment of perception of morphed facial expressions using the Emotion Recognition Task: Normative data from healthy participants aged 8-75

    NARCIS (Netherlands)

    Kessels, R.P.C.; Montagne, B.; Hendriks, A.W.C.J.; Perrett, D.I.; Haan, E.H.F. de

    2014-01-01

    The ability to recognize and label emotional facial expressions is an important aspect of social cognition. However, existing paradigms to examine this ability present only static facial expressions, suffer from ceiling effects or have limited or no norms. A computerized test, the Emotion Recognition Task (ERT), was developed to overcome these difficulties.

  16. A voxel-based morphometry study of gray matter correlates of facial emotion recognition in bipolar disorder.

    Science.gov (United States)

    Neves, Maila de Castro L; Albuquerque, Maicon Rodrigues; Malloy-Diniz, Leandro; Nicolato, Rodrigo; Silva Neves, Fernando; de Souza-Duran, Fábio Luis; Busatto, Geraldo; Corrêa, Humberto

    2015-08-30

    Facial emotion recognition (FER) is one of the many cognitive deficits reported in bipolar disorder (BD) patients. The aim of this study was to investigate neuroanatomical correlates of FER impairments in BD type I (BD-I). Participants comprised 21 euthymic BD-I patients without Axis I DSM-IV-TR comorbidities and 21 healthy controls who were assessed using magnetic resonance imaging and the Penn Emotion Recognition Test (ER40). Preprocessing of images used DARTEL (diffeomorphic anatomical registration through exponentiated Lie algebra) for optimized voxel-based morphometry in SPM8. Compared with healthy subjects, BD-I patients performed poorly on the ER40 and had reduced gray matter volume (GMV) in the left orbitofrontal cortex, superior portion of the temporal pole and insula. In the BD-I group, the statistical maps indicated a direct correlation between FER on the ER40 and right middle cingulate gyrus GMV. Our findings are consistent with previous studies regarding the overlap of multiple brain networks of social cognition and BD neurobiology, particularly components of the anterior-limbic neural network. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  17. Socio-demographic and Clinical Correlates of Facial Expression Recognition Disorder in the Euthymic Phase of Bipolar Patients

    Science.gov (United States)

    Moriano, Christian; Farruggio, Lisa; Jover, Frédéric

    2016-01-01

    Objective: Bipolar patients show social cognitive disorders. The objective of this study is to review facial expression recognition (FER) disorders in bipolar patients (BP) and explore clinical heterogeneity factors that could affect them in the euthymic phase: socio-demographic level, clinical and changing characteristics of the disorder, history of suicide attempt, and abuse. Method: Thirty-four euthymic bipolar patients and 29 control subjects completed a computer task of explicit facial expression recognition and were clinically evaluated. Results: Compared with control subjects, BP patients show: a decrease in fear, anger, and disgust recognition; an extended reaction time for disgust, surprise and neutrality recognition; confusion between fear and surprise, anger and disgust, disgust and sadness, sadness and neutrality. In BP patients, age negatively affects anger and neutrality recognition, as opposed to education level which positively affects recognizing these emotions. The history of patient abuse negatively affects surprise and disgust recognition, and the number of suicide attempts negatively affects disgust and anger recognition. Conclusions: Cognitive heterogeneity in euthymic phase BP patients is affected by several factors inherent to bipolar disorder complexity that should be considered in social cognition study. PMID:27310226

  18. Socio-demographic and Clinical Correlates of Facial Expression Recognition Disorder in the Euthymic Phase of Bipolar Patients.

    Science.gov (United States)

    Iakimova, Galina; Moriano, Christian; Farruggio, Lisa; Jover, Frédéric

    2016-10-01

    Bipolar patients show social cognitive disorders. The objective of this study is to review facial expression recognition (FER) disorders in bipolar patients (BP) and explore clinical heterogeneity factors that could affect them in the euthymic phase: socio-demographic level, clinical and changing characteristics of the disorder, history of suicide attempt, and abuse. Thirty-four euthymic bipolar patients and 29 control subjects completed a computer task of explicit facial expression recognition and were clinically evaluated. Compared with control subjects, BP patients show: a decrease in fear, anger, and disgust recognition; an extended reaction time for disgust, surprise and neutrality recognition; confusion between fear and surprise, anger and disgust, disgust and sadness, sadness and neutrality. In BP patients, age negatively affects anger and neutrality recognition, as opposed to education level which positively affects recognizing these emotions. The history of patient abuse negatively affects surprise and disgust recognition, and the number of suicide attempts negatively affects disgust and anger recognition. Cognitive heterogeneity in euthymic phase BP patients is affected by several factors inherent to bipolar disorder complexity that should be considered in social cognition study. © The Author(s) 2016.

  19. Analysis of differences between Western and East-Asian faces based on facial region segmentation and PCA for facial expression recognition

    Science.gov (United States)

    Benitez-Garcia, Gibran; Nakamura, Tomoaki; Kaneko, Masahide

    2017-01-01

    Darwin was the first to assert that facial expressions are innate and universal, recognized across all cultures. However, some recent cross-cultural studies have questioned this assumed universality. Therefore, this paper presents an analysis of the differences between Western and East-Asian faces for the six basic expressions (anger, disgust, fear, happiness, sadness and surprise), focused on three individual facial regions: eyes-eyebrows, nose and mouth. The analysis is conducted by applying PCA to two feature extraction methods: appearance-based, using the pixel intensities of facial parts, and geometric-based, handling 125 feature points from the face. Both methods are evaluated using four standard databases for both racial groups, and the results are compared with a cross-cultural human study applied to 20 participants. Our analysis reveals that differences between Westerners and East-Asians exist mainly in the eyes-eyebrows region for expressions of fear and in the mouth region for expressions of disgust. This work presents important findings for a better design of automatic facial expression recognition systems based on the differences between the two racial groups.
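
    As a loose illustration of the appearance-based variant, the sketch below applies PCA separately to flattened pixel intensities of each facial region; the region crops and their sizes are placeholder assumptions, not the study's data.

      # Sketch of the appearance-based analysis: PCA applied independently to
      # the pixel intensities of each facial region (eyes-eyebrows, nose, mouth).
      # Crops are placeholders; real ones would come from aligned face images.
      import numpy as np
      from sklearn.decomposition import PCA

      regions = {
          "eyes_eyebrows": np.random.rand(100, 24 * 64),  # 100 faces, flattened crops
          "nose":          np.random.rand(100, 24 * 32),
          "mouth":         np.random.rand(100, 24 * 48),
      }

      for name, X in regions.items():
          pca = PCA(n_components=0.95)   # keep components explaining 95% of variance
          Z = pca.fit_transform(X)
          print(name, "->", Z.shape[1], "principal components")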

  20. Facial affect recognition in body dysmorphic disorder versus obsessive-compulsive disorder: An eye-tracking study.

    Science.gov (United States)

    Toh, Wei Lin; Castle, David J; Rossell, Susan L

    2015-10-01

    Body dysmorphic disorder (BDD) is characterised by repetitive behaviours and/or mental acts occurring in response to preoccupations with perceived defects or flaws in physical appearance (American Psychiatric Association, 2013). This study aimed to investigate facial affect recognition in BDD using an integrated eye-tracking paradigm. Participants were 21 BDD patients, 19 obsessive-compulsive disorder (OCD) patients and 21 healthy controls (HC), who were age-, sex-, and IQ-matched. Stimuli were from the Pictures of Facial Affect (Ekman & Friesen, 1975), and outcome measures were affect recognition accuracy as well as spatial and temporal scanpath parameters. Relative to OCD and HC groups, BDD patients demonstrated significantly poorer facial affect perception and an angry recognition bias. An atypical scanning strategy encompassing significantly more blinks, fewer fixations of extended mean durations, higher mean saccade amplitudes, and less visual attention devoted to salient facial features was found. Patients with BDD were substantially impaired in the scanning of faces, and unable to extract affect-related information, likely indicating deficits in basic perceptual operations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Emotion recognition in pictures of facial affect: Is there a difference between forensic and non-forensic patients with schizophrenia?

    Directory of Open Access Journals (Sweden)

    Wiebke Wolfkühler

    Full Text Available Background and Objectives: Abundant research has demonstrated that patients with schizophrenia have difficulties in recognizing the emotional content in facial expressions. However, there is a paucity of studies on emotion recognition in schizophrenia patients with a history of violent behavior compared to patients without a criminal record. Methods: Emotion recognition skills were examined in thirty-three forensic patients with schizophrenia. In addition, executive function and psychopathology were assessed. Results were compared to a group of 38 schizophrenia patients in regular psychiatric care and to a healthy control group. Results: Both patient groups performed more poorly on almost all tasks compared to controls. However, in the forensic group the recognition of the expression of disgust was preserved. When the excitement factor of the Positive and Negative Syndrome Scale was co-varied out, forensic patients outperformed the non-forensic patient group on emotion recognition across modalities. Conclusions: The superior recognition of disgust could be uniquely associated with delinquent behavior.

  2. Development of and psychometric testing for the Brief Pain Inventory-Facial in patients with facial pain syndromes.

    Science.gov (United States)

    Lee, John Y K; Chen, H Isaac; Urban, Christopher; Hojat, Anahita; Church, Ephraim; Xie, Sharon X; Farrar, John T

    2010-09-01

    Outcomes in clinical trials on trigeminal pain therapies require instruments with demonstrated reliability and validity. The authors evaluated the Brief Pain Inventory (BPI) in its existing form plus an additional 7 facial-specific items in patients referred to a single neurosurgeon for a diagnosis of facial pain. The complete 18-item instrument is referred to as the BPI-Facial. This study was a cross-sectional analysis of patients who completed the BPI-Facial. The diagnosis of classic versus atypical trigeminal neuralgia (TN) was made before analyzing the questionnaire results. A hypothesis-driven factor analysis was used to determine the principal components of the questionnaire. Item reliability and questionnaire validity were tested for these specific constructs. Data from 156 patients were analyzed, including 114 patients (73%) with classic and 42 (27%) with atypical TN. Using orthomax rotation factor analysis, 3 factors with an eigenvalue > 1.0 were identified (pain intensity, interference with general activities, and facial-specific pain interference), accounting for 97.6% of the observed item variance. Retention of the 3 factors was confirmed via a Cattell scree plot. Internal reliability was demonstrated by calculating Cronbach's alpha: 0.86 for pain intensity, 0.89 for interference with general activities, 0.95 for facial-specific pain interference, and 0.94 for the entire instrument. Initial validity of the BPI-Facial instrument was supported by the detection of statistically significant differences between patients with classic versus atypical pain. Patients with atypical TN rated their facial pain as more intense (atypical 6.24 vs classic 5.03, p = 0.013) and as having greater interference in general activities (atypical 6.94 vs classic 5.43, p = 0.0033). Both groups expressed high levels of facial-specific pain interference (atypical 6.34 vs classic 5.95, p = 0.527). The BPI-Facial is a rigorous measure of facial pain in patients with TN and appears to distinguish between its classic and atypical forms.

  3. [Emotional facial recognition difficulties as primary deficit in children with attention deficit hyperactivity disorder: a systematic review].

    Science.gov (United States)

    Rodrigo-Ruiz, D; Perez-Gonzalez, J C; Cejudo, J

    2017-08-16

    Recent work has warned that children with attention deficit hyperactivity disorder (ADHD) show a deficit in emotional competence and emotional intelligence, specifically in their capacity for emotion recognition. A systematic review of the scientific literature on the emotional recognition of facial expressions in children with ADHD is presented in order to establish or rule out the existence of emotional deficits as a primary dysfunction in this disorder and, where appropriate, the effect size of the differences against normally developing or neurotypical children. The results reveal the recent interest in the issue and the scarcity of available evidence. Although there is no complete agreement, most of the studies show that emotional recognition of facial expressions is affected in children with ADHD, who are significantly less accurate than children from control groups in recognizing emotions communicated through facial expressions. A subset of these studies compares the recognition of different discrete emotions; children with ADHD tend to have greater difficulty recognizing negative emotions, especially anger, fear, and disgust. These results have direct implications for the educational and clinical diagnosis of ADHD, and for educational intervention: for children with ADHD, emotional education might provide an advantageous aid.

  4. Alexithymic and somatisation scores in patients with temporomandibular pain disorder correlate with deficits in facial emotion recognition.

    Science.gov (United States)

    Haas, J; Eichhammer, P; Traue, H C; Hoffmann, H; Behr, M; Crönlein, T; Pieh, C; Busch, V

    2013-02-01

    Current studies suggest dysfunctional emotional processing as a key factor in the aetiology of temporomandibular disorder (TMD). Investigating facial emotion recognition (FER) may offer an elegant and reliable way to study emotional processing in patients with TMD. Twenty patients with TMD and the same number of age-, sex- and education-matched controls were measured with the Facially Expressed Emotion Labelling (FEEL) test, the 26-item Toronto Alexithymia Scale (TAS-26), the Screening for Somatoform Symptoms (SOMS-2a), the German Pain Questionnaire and the 21-item Hamilton Depression Rating Scale (HAMD). The patients had significantly lower Total FEEL Scores (P = 0·021) as compared to the controls, indicating a lower accuracy of FER. Furthermore, we were able to demonstrate significant group differences with respect to the following issues: patients were more alexithymic (P = 0·006), stated more somatoform symptoms (P < 0·004) and had higher depressive scores in the HAMD (P < 0·003). The factors alexithymia and somatisation could explain 31% (adjusted 27%) of the variance of the FEEL Scores in the sample. The estimation of the standardised regression coefficients suggests an equivalent influence of TAS-26 and SOMS-2a on the FEEL Scores, whereas 'group' (patients versus healthy controls) and depressive symptoms did not contribute significantly to the model. Our findings highlight FER deficits in patients with TMD, which are partially explained by concomitant alexithymia and somatisation. As suggested previously, impaired FER in patients with TMD may further point to probable aetiological proximities between TMD and somatoform disorders.

  5. Callous-unemotional traits and empathy deficits: Mediating effects of affective perspective-taking and facial emotion recognition.

    Science.gov (United States)

    Lui, Joyce H L; Barry, Christopher T; Sacco, Donald F

    2016-09-01

    Although empathy deficits are thought to be associated with callous-unemotional (CU) traits, findings remain equivocal and little is known about what specific abilities may underlie these purported deficits. Affective perspective-taking (APT) and facial emotion recognition may be implicated, given their independent associations with both empathy and CU traits. The current study examined how CU traits relate to cognitive and affective empathy and whether APT and facial emotion recognition mediate these relations. Participants were 103 adolescents (70 males) aged 16-18 attending a residential programme. CU traits were negatively associated with cognitive and affective empathy to a similar degree. The association between CU traits and affective empathy was partially mediated by APT. Results suggest that assessing mechanisms that may underlie empathic deficits, such as perspective-taking, may be important for youth with CU traits and may inform targets of intervention.

  6. Wanting it Too Much: An Inverse Relation Between Social Motivation and Facial Emotion Recognition in Autism Spectrum Disorder

    OpenAIRE

    Garman, Heather D.; Spaulding, Christine J.; Webb, Sara Jane; Mikami, Amori Yee; Morris, James P.; Lerner, Matthew D

    2016-01-01

    This study examined social motivation and early-stage face perception as frameworks for understanding impairments in facial emotion recognition (FER) in a well-characterized sample of youth with autism spectrum disorders (ASD). Early-stage face perception (N170 event-related potential latency) was recorded while participants completed a standardized FER task, while social motivation was obtained via parent report. Participants with greater social motivation exhibited poorer FER, while those w...

  7. Precentral and inferior prefrontal hypoactivation during facial emotion recognition in patients with schizophrenia: A functional near-infrared spectroscopy study.

    Science.gov (United States)

    Watanuki, Toshio; Matsuo, Koji; Egashira, Kazuteru; Nakashima, Mami; Harada, Kenichiro; Nakano, Masayuki; Matsubara, Toshio; Takahashi, Kanji; Watanabe, Yoshifumi

    2016-01-01

    Although patients with schizophrenia demonstrate abnormal processing of emotional face recognition, the neural substrates underlying this process remain unclear. We previously showed abnormal fronto-temporal function during facial expression of emotions, and cognitive inhibition in patients with schizophrenia using functional near-infrared spectroscopy (fNIRS). The aim of the current study was to use fNIRS to identify which brain regions involved in recognizing emotional faces are impaired in patients with schizophrenia, and to determine the neural substrates underlying the response to emotional facial expressions per se, and to facial expressions with cognitive inhibition. We recruited 19 patients with schizophrenia and 19 healthy controls, statistically matched on age, sex, and premorbid IQ. Brain function was measured by fNIRS during emotional face assessment and face identification tasks. Patients with schizophrenia showed lower activation of the right precentral and inferior frontal areas during the emotional face task compared to controls. Further, patients with schizophrenia were slower and less accurate in completing tasks compared to healthy participants. Decreasing performance was associated with increasing severity of the disease. Our present and prior studies suggest that the impaired behavioral performance in schizophrenia is associated with different mechanisms for processing emotional facial expressions versus facial expressions combined with cognitive inhibition.

  8. Survey of Spontaneous Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    何俊; 何忠文; 蔡建峰; 房灵芝

    2016-01-01

    This paper reviews the current state and level of development of spontaneous facial expression recognition, describes the content and methods of research on spontaneous facial expression recognition in detail, and highlights the key technologies involved. The aim is to draw researchers' attention and interest to this emerging research direction, to encourage active participation in the study of spontaneous facial expression recognition problems, and thereby to advance progress on the problems related to it.

  9. Facial emotion recognition in alcohol and substance use disorders: A meta-analysis.

    Science.gov (United States)

    Castellano, Filippo; Bartoli, Francesco; Crocamo, Cristina; Gamba, Giulia; Tremolada, Martina; Santambrogio, Jacopo; Clerici, Massimo; Carrà, Giuseppe

    2015-12-01

    People with alcohol and substance use disorders (AUDs/SUDs) show worse facial emotion recognition (FER) than controls, though magnitude and potential moderators remain unknown. The aim of this meta-analysis was to estimate the association between AUDs, SUDs and FER impairment. Electronic databases were searched through April 2015. Pooled analyses were based on standardized mean differences between index and control groups with 95% confidence intervals, weighting each study with random effects inverse variance models. Risk of publication bias and role of potential moderators, including task type, were explored. Nineteen of 70 studies assessed for eligibility met the inclusion criteria, comprising 1352 individuals, of whom 714 (53%) had AUDs or SUDs. The association between substance related disorders and FER performance showed an effect size of -0.67 (-0.95, -0.39), and -0.65 (-0.93, -0.37) for AUDs and SUDs, respectively. There was no publication bias and subgroup and sensitivity analyses based on potential moderators confirmed core results. Future longitudinal research should confirm these findings, clarifying the role of specific clinical issues of AUDs and SUDs.

  10. Facial recognition trial: biometric identification of non-compliant subjects using CCTV

    Science.gov (United States)

    Best, Tim

    2007-10-01

    LogicaCMG were provided with an opportunity to deploy a facial recognition system in a realistic scenario. Twelve cameras were installed at an international airport covering all entrances to the immigration hall. The evaluation took place over several months with numerous adjustments to both the hardware (i.e., cameras, servers and capture cards) and software. The learning curve has been very steep, but a stage has now been reached where both LogicaCMG and the client are confident that, subject to the right environmental conditions (lighting and camera location), an effective system can be defined with a high probability of successful detection of the target individual, with minimal false alarms. To the best of our knowledge, a detection rate above 90% for non-compliant subjects 'at range' has not been achieved anywhere else. This puts this location at the forefront of capability in this area. The results achieved demonstrate that, given optimised conditions, it is possible to achieve long-range biometric identification of a non-compliant subject with a high rate of success.

  11. Using sensors and facial expression recognition to personalize emotion learning for autistic children.

    Science.gov (United States)

    Gay, Valerie; Leijdekkers, Peter; Wong, Frederick

    2013-01-01

    This paper describes CaptureMyEmotion, an app for smartphones and tablets which uses wireless sensors to capture physiological data together with facial expression recognition to provide a very personalized way to help autistic children identify and understand their emotions. Many apps target autistic children and their carers, but none of the existing apps uses the full potential offered by mobile technology and sensors to overcome one of autistic children's main difficulties: the identification and expression of emotions. CaptureMyEmotion enables autistic children to capture photos, videos or sounds, and identify the emotion they felt while taking the picture. Simultaneously, a self-portrait of the child is taken, and the app measures the arousal and stress levels using wireless sensors. The app uses the self-portrait to provide a better estimate of the emotion felt by the child. The app has the potential to help autistic children understand their emotions, and it gives the carer insight into the child's emotions and offers a means to discuss the child's feelings.

  12. Assessment of perception of morphed facial expressions using the Emotion Recognition Task: normative data from healthy participants aged 8-75.

    Science.gov (United States)

    Kessels, Roy P C; Montagne, Barbara; Hendriks, Angelique W; Perrett, David I; de Haan, Edward H F

    2014-03-01

    The ability to recognize and label emotional facial expressions is an important aspect of social cognition. However, existing paradigms to examine this ability present only static facial expressions, suffer from ceiling effects or have limited or no norms. A computerized test, the Emotion Recognition Task (ERT), was developed to overcome these difficulties. In this study, we examined the effects of age, sex, and intellectual ability on emotion perception using the ERT. In this test, emotional facial expressions are presented as morphs gradually expressing one of the six basic emotions from neutral to four levels of intensity (40%, 60%, 80%, and 100%). The task was administered in 373 healthy participants aged 8-75. In children aged 8-17, only small developmental effects were found for the emotions anger and happiness, in contrast to adults who showed age-related decline on anger, fear, happiness, and sadness. Sex differences were present predominantly in the adult participants. IQ only minimally affected the perception of disgust in the children, while years of education were correlated with all emotions but surprise and disgust in the adult participants. A regression-based approach was adopted to present age- and education- or IQ-adjusted normative data for use in clinical practice. Previous studies using the ERT have demonstrated selective impairments on specific emotions in a variety of psychiatric, neurologic, or neurodegenerative patient groups, making the ERT a valuable addition to existing paradigms for the assessment of emotion perception.
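
    The regression-based normative approach mentioned above can be sketched as follows: regress raw test scores on age and education (or IQ) in the normative sample, then express a new individual's score as a standardized residual. The sample data and coefficients below are placeholders, not the ERT norms.

      # Sketch of regression-based norming: fit raw score ~ age + education on
      # a normative sample, then convert an individual's raw score into a
      # z-score against the regression prediction. All data are placeholders.
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      age = rng.uniform(8, 75, 373)
      education = rng.uniform(4, 20, 373)
      score = 70 - 0.1 * age + 0.5 * education + rng.normal(0, 5, 373)

      X = np.column_stack([age, education])
      model = LinearRegression().fit(X, score)
      resid_sd = np.std(score - model.predict(X), ddof=X.shape[1] + 1)

      def z_score(raw, person_age, person_education):
          expected = model.predict([[person_age, person_education]])[0]
          return (raw - expected) / resid_sd

      print(z_score(raw=60.0, person_age=70, person_education=10))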

  13. Associations between facial emotion recognition, cognition and alexithymia in patients with schizophrenia: comparison of photographic and virtual reality presentations.

    Science.gov (United States)

    Gutiérrez-Maldonado, J; Rus-Calafell, M; Márquez-Rejón, S; Ribas-Sabaté, J

    2012-01-01

    Emotion recognition is known to be impaired in schizophrenia patients. Although cognitive deficits and symptomatology have been associated with this impairment there are other patient characteristics, such as alexithymia, which have not been widely explored. Emotion recognition is normally assessed by means of photographs, although they do not reproduce the dynamism of human expressions. Our group has designed and validated a virtual reality (VR) task to assess and subsequently train schizophrenia patients. The present study uses this VR task to evaluate the impaired recognition of facial affect in patients with schizophrenia and to examine its association with cognitive deficit and the patients' inability to express feelings. Thirty clinically stabilized outpatients with a well-established diagnosis of schizophrenia or schizoaffective disorder were assessed in neuropsychological, symptomatic and affective domains. They then performed the facial emotion recognition task. Statistical analyses revealed no significant differences between the two presentation conditions (photographs and VR) in terms of overall errors made. However, anger and fear were easier to recognize in VR than in photographs. Moreover, strong correlations were found between psychopathology and the errors made.

  14. The Reverse-Caricature Effect Revisited: Familiarization With Frontal Facial Caricatures Improves Veridical Face Recognition

    OpenAIRE

    Rodríguez, Jobany; Bortfeld, Heather; Rudomín, Isaac; Hernández, Benjamín; Gutiérrez-Osuna, Ricardo

    2009-01-01

    Prior research suggests that recognition of a person's face can be facilitated by exaggerating the distinctive features of the face during training. We tested if this ‘reverse-caricature effect’ would be robust to procedural variations that created more difficult learning environments. Specifically, we examined whether the effect would emerge with frontal rather than three-quarter views, after very brief exposure to caricatures during the learning phase and after modest rotations of faces dur...

  15. Cultural Differences in the Recognition of Six Basic Facial Emotions between Chinese and Americans

    Institute of Scientific and Technical Information of China (English)

    汤艳清; 欧凤荣; 吴枫; 孔令韬

    2011-01-01

    Objective: To investigate cross-cultural differences between healthy Chinese and American adults in recognition rates for six facial expressions of emotion: happiness, anger, fear, sadness, disgust, and neutrality. Methods: A facial emotion recognition test was administered to 82 healthy Chinese volunteers and 61 healthy American volunteers. Results: Chinese participants had significantly lower recognition rates for disgust, fear, and happiness expressions than Americans, and Americans recognized male angry and female sad faces significantly better than Chinese participants. Conclusion: Understanding the cultural differences and commonalities in facial emotion recognition can help reveal the shared material basis of human emotion and improve our understanding of human behavior across different cultural backgrounds.

  16. Facial emotion recognition in childhood-onset bipolar I disorder: an evaluation of developmental differences between youths and adults

    Science.gov (United States)

    Wegbreit, Ezra; Weissman, Alexandra B; Cushman, Grace K; Puzia, Megan E; Kim, Kerri L; Leibenluft, Ellen; Dickstein, Daniel P

    2015-01-01

    Objectives Bipolar disorder (BD) is a severe mental illness with high healthcare costs and poor outcomes. Increasing numbers of youths are diagnosed with BD, and many adults with BD report their symptoms started in childhood, suggesting BD can be a developmental disorder. Studies advancing our understanding of BD have shown alterations in facial emotion recognition in both children and adults with BD compared to healthy comparison (HC) participants, but none have evaluated the development of these deficits. To address this, we examined the effect of age on facial emotion recognition in a sample that included children and adults with confirmed childhood-onset type-I BD, with the adults having been diagnosed and followed since childhood by the Course and Outcome in Bipolar Youth study. Methods Using the Diagnostic Analysis of Non-Verbal Accuracy, we compared facial emotion recognition errors among participants with BD (n = 66; ages 7–26 years) and HC participants (n = 87; ages 7–25 years). Complementary analyses investigated errors for child and adult faces. Results A significant diagnosis-by-age interaction indicated that younger BD participants performed worse than expected relative to HC participants their own age. The deficits occurred for both child and adult faces and were particularly strong for angry child faces, which were most often mistaken as sad. Our results were not influenced by medications, comorbidities/substance use, or mood state/global functioning. Conclusions Younger individuals with BD are worse than their peers at this important social skill. This deficit may be an important developmentally salient treatment target, i.e., for cognitive remediation to improve BD youths’ emotion recognition abilities. PMID:25951752

  17. Facial emotion recognition in childhood-onset bipolar I disorder: an evaluation of developmental differences between youths and adults.

    Science.gov (United States)

    Wegbreit, Ezra; Weissman, Alexandra B; Cushman, Grace K; Puzia, Megan E; Kim, Kerri L; Leibenluft, Ellen; Dickstein, Daniel P

    2015-08-01

    Bipolar disorder (BD) is a severe mental illness with high healthcare costs and poor outcomes. Increasing numbers of youths are diagnosed with BD, and many adults with BD report that their symptoms started in childhood, suggesting that BD can be a developmental disorder. Studies advancing our understanding of BD have shown alterations in facial emotion recognition both in children and adults with BD compared to healthy comparison (HC) participants, but none have evaluated the development of these deficits. To address this, we examined the effect of age on facial emotion recognition in a sample that included children and adults with confirmed childhood-onset type-I BD, with the adults having been diagnosed and followed since childhood by the Course and Outcome in Bipolar Youth study. Using the Diagnostic Analysis of Non-Verbal Accuracy, we compared facial emotion recognition errors among participants with BD (n = 66; ages 7-26 years) and HC participants (n = 87; ages 7-25 years). Complementary analyses investigated errors for child and adult faces. A significant diagnosis-by-age interaction indicated that younger BD participants performed worse than expected relative to HC participants their own age. The deficits occurred both for child and adult faces and were particularly strong for angry child faces, which were most often mistaken as sad. Our results were not influenced by medications, comorbidities/substance use, or mood state/global functioning. Younger individuals with BD are worse than their peers at this important social skill. This deficit may be an important developmentally salient treatment target - that is, for cognitive remediation to improve BD youths' emotion recognition abilities. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  18. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    Science.gov (United States)

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  19. From Facial Emotional Recognition Abilities to Emotional Attribution: A Study in Down Syndrome

    Science.gov (United States)

    Hippolyte, Loyse; Barisnikov, Koviljka; Van der Linden, Martial; Detraux, Jean-Jacques

    2009-01-01

    Facial expression processing and the attribution of facial emotions to a context were investigated in adults with Down syndrome (DS) in two experiments. Their performances were compared with those of a child control group matched for receptive vocabulary. The ability to process faces without emotional content was controlled for, and no differences…

  20. Recognition, Expression, and Understanding Facial Expressions of Emotion in Adolescents with Nonverbal and General Learning Disabilities

    Science.gov (United States)

    Bloom, Elana; Heath, Nancy

    2010-01-01

    Children with nonverbal learning disabilities (NVLD) have been found to be worse at recognizing facial expressions than children with verbal learning disabilities (LD) and without LD. However, little research has been done with adolescents. In addition, expressing and understanding facial expressions is yet to be studied among adolescents with LD…

  3. A New Technology: 3D Facial Recognition

    Institute of Scientific and Technical Information of China (English)

    王玥; 李丽娜

    2014-01-01

    3D face recognition is a technology with reliable recognition accuracy in the field of facial recognition, and it has been widely deployed in sensitive locations in China and abroad. This paper describes the development of 3D facial recognition, its technical characteristics, its difficulties, and hotspots within its application, and concludes with an outlook on the future development of 3D facial recognition.

  4. Impaired facial emotion recognition in patients with mesial temporal lobe epilepsy associated with hippocampal sclerosis (MTLE-HS): Side and age at onset matters.

    Science.gov (United States)

    Hlobil, Ulf; Rathore, Chaturbhuj; Alexander, Aley; Sarma, Sankara; Radhakrishnan, Kurupath

    2008-08-01

    To define the determinants of impaired facial emotion recognition (FER) in patients with mesial temporal lobe epilepsy associated with hippocampal sclerosis (MTLE-HS), we examined 76 patients with unilateral MTLE-HS, 36 prior to antero-mesial temporal lobectomy (AMTL) and 40 after AMTL, and 28 healthy control subjects with a FER test consisting of 60 items (20 each for anger, fear, and happiness). Mean percentages of the accurate responses were calculated for different subgroups: right vs. left MTLE-HS, early (age at onset <6 years) vs. late onset, and pre- vs. post-AMTL. Happiness recognition was significantly better in post-AMTL MTLE-HS patients compared to pre-AMTL patients, while anger and fear recognition did not differ. We conclude that patients with right MTLE-HS with age at seizure onset <6 years are maximally predisposed to impaired fear recognition. In them, right AMTL does not further worsen FER abilities. Longitudinal studies comparing FER in the same patients before and after AMTL will be required to refine and confirm our cross-sectional observations.

  5. Asymmetry of Facial Mimicry and Emotion Perception in Patients With Unilateral Facial Paralysis.

    Science.gov (United States)

    Korb, Sebastian; Wood, Adrienne; Banks, Caroline A; Agoulnik, Dasha; Hadlock, Tessa A; Niedenthal, Paula M

    2016-05-01

    The ability of patients with unilateral facial paralysis to recognize and appropriately judge facial expressions remains underexplored. To test the effects of unilateral facial paralysis on the recognition of and judgments about facial expressions of emotion and to evaluate the asymmetry of facial mimicry. Patients with left or right unilateral facial paralysis at a university facial plastic surgery unit completed 2 computer tasks involving video facial expression recognition. Side of facial paralysis was used as a between-participant factor. Facial function and symmetry were verified electronically with the eFACE facial function scale. Across 2 tasks, short videos were shown on which facial expressions of happiness and anger unfolded earlier on one side of the face or morphed into each other. Patients indicated the moment or side of change between facial expressions and judged their authenticity. Type, time, and accuracy of responses on a keyboard were analyzed. A total of 57 participants (36 women and 21 men) aged 20 to 76 years (mean age, 50.2 years) and with mild left or right unilateral facial paralysis were included in the study. Patients with right facial paralysis were faster (by about 150 milliseconds) and more accurate (mean number of errors, 1.9 vs 2.5) to detect expression onsets on the left side of the stimulus face, suggesting anatomical asymmetry of facial mimicry. Patients with left paralysis, however, showed more anomalous responses, which partly differed by emotion. The findings favor the hypothesis of an anatomical asymmetry of facial mimicry and suggest that patients with a left hemiparalysis could be more at risk of developing a cluster of disabilities and psychological conditions including emotion-recognition impairments. Level of evidence: 3.

  6. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind.

    Science.gov (United States)

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J; Sadato, Norihiro

    2013-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  7. Early visual experience and the recognition of basic facial expressions: Involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind

    Directory of Open Access Journals (Sweden)

    Ryo Kitada

    2013-01-01

    Full Text Available Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus and posterior superior temporal sulcus in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early-blind individuals. In a psychophysical experiment, both early-blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  8. Empathy and recognition of facial expressions of emotion in sex offenders, non-sex offenders and normal controls.

    Science.gov (United States)

    Gery, Isabelle; Miljkovitch, Raphaële; Berthoz, Sylvie; Soussignan, Robert

    2009-02-28

    Research conducted on empathy and emotional recognition in sex offenders is contradictory. The present study aimed to clarify this issue by controlling for some affective and social variables (depression, anxiety, and social desirability) that are presumed to influence emotional and empathic measures, using a staged multicomponent model of empathy. Incarcerated sex offenders (child molesters), incarcerated non-sex offenders, and non-offender controls (matched for age, gender, and education level) performed a recognition task with facial expressions of basic emotions that varied in intensity, and completed various self-rating scales designed to assess distinct components of empathy (perspective taking, affective empathy, empathic concern, and personal distress), as well as depression, anxiety, and social desirability. Sex offenders were less accurate than the other participants in recognizing facial expressions of anger, disgust, surprise and fear, tending to confuse fear with surprise, and disgust with anger. Affective empathy was the only component that discriminated sex offenders from non-sex offenders and was correlated with recognition accuracy for emotional expressions. Although our findings must be replicated with a larger number of participants, they support the view that sex offenders might have impairments in the decoding of some emotional cues conveyed by the conspecifics' face, which could have an impact on affective empathy.

  9. Effect of facial expressions on student's comprehension recognition in virtual educational environments.

    Science.gov (United States)

    Sathik, Mohamed; Jonathan, Sofia G

    2013-01-01

    The scope of this research is to examine whether the facial expressions of students are a tool for the lecturer to interpret students' comprehension level in a virtual classroom, and to identify the impact of facial expressions during a lecture and the level of comprehension shown by these expressions. Our goal is to identify physical behaviours of the face that are linked to emotional states, and then to identify how these emotional states are linked to a student's comprehension. In this work, the effectiveness of students' facial expressions in non-verbal communication in a virtual pedagogical environment was investigated first. Next, the specific elements of learner behaviour for the different emotional states and the relevant facial expressions signaled by the action units were interpreted. Finally, the work focused on finding the impact of the relevant facial expressions on students' comprehension. Experimentation was done through a survey involving quantitative observations of lecturers in the classroom, in which the behaviours of students were recorded and statistically analyzed. The results show that facial expression is the nonverbal communication mode most frequently used by students in the virtual classroom, and that students' facial expressions are significantly correlated to their emotions, which helps to recognize their comprehension of the lecture.

  10. Emotion Recognition Ability Test Using JACFEE Photos: A Validity/Reliability Study of a War Veterans' Sample and Their Offspring.

    Science.gov (United States)

    Castro-Vale, Ivone; Severo, Milton; Carvalho, Davide; Mota-Cardoso, Rui

    2015-01-01

    Emotion recognition is very important for social interaction. Several mental disorders influence facial emotion recognition. War veterans and their offspring are subject to an increased risk of developing psychopathology. Emotion recognition is an important aspect that needs to be addressed in this population. To our knowledge, no test exists that is validated for use with war veterans and their offspring. The current study aimed to validate the JACFEE photo set to study facial emotion recognition in war veterans and their offspring. The JACFEE photo set was presented to 135 participants, comprised of 62 male war veterans and 73 war veterans' offspring. The participants identified the facial emotion presented from amongst the seven emotions tested for: anger, contempt, disgust, fear, happiness, sadness, and surprise. A loglinear model was used to evaluate whether the agreement between the intended and the chosen emotions was higher than expected. Overall agreement between chosen and intended emotions was 76.3% (Cohen kappa = 0.72). The agreement ranged from 63% (sadness expressions) to 91% (happiness expressions). The reliability by emotion ranged from 0.617 to 0.843, and the overall JACFEE photo set Cronbach alpha was 0.911. The offspring showed higher agreement when compared with the veterans (RR: 41.52 vs. 12.12), supporting the JACFEE photo set as a valid and reliable measure of emotion recognition ability in the study sample of war veterans and their respective offspring.
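
    For illustration, overall agreement and Cohen's kappa between intended and chosen emotion labels, the statistics reported above, can be computed as in the sketch below; the simulated responses are placeholders, not the study's data.

      # Computing overall agreement and Cohen's kappa between intended and
      # chosen emotion labels, as reported for the JACFEE photos. The simulated
      # responses below are placeholders, not the study's data.
      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      emotions = ["anger", "contempt", "disgust", "fear",
                  "happiness", "sadness", "surprise"]
      rng = np.random.default_rng(1)
      intended = rng.choice(emotions, size=500)
      chosen = np.where(rng.random(500) < 0.76, intended,
                        rng.choice(emotions, size=500))

      agreement = np.mean(intended == chosen)
      kappa = cohen_kappa_score(intended, chosen)
      print(f"agreement = {agreement:.3f}, kappa = {kappa:.3f}")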

  11. Methods and Outlook for Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    李菊霞

    2009-01-01

    Facial expression recognition (FER), as an important component of intelligent human-computer interaction technology, has broad application prospects and potential market value, and it has received wide attention in recent years, with many new methods emerging. This paper reviews recent progress in FER research in China and abroad and offers an outlook on future directions for the development of facial expression recognition.

  12. Automated smartphone audiometry: Validation of a word recognition test app.

    Science.gov (United States)

    Dewyer, Nicholas A; Jiradejvong, Patpong; Henderson Sabes, Jennifer; Limb, Charles J

    2017-05-23

    Develop and validate an automated smartphone word recognition test. Cross-sectional case-control diagnostic test comparison. An automated word recognition test was developed as an app for a smartphone with earphones. English-speaking adults with recent audiograms and various levels of hearing loss were recruited from an audiology clinic and were administered the smartphone word recognition test. Word recognition scores determined by the smartphone app and the gold-standard speech audiometry test performed by an audiologist were compared. Test scores for 37 ears were analyzed. Word recognition scores determined by the smartphone app and audiologist testing were in agreement, with 86% of the data points within a clinically acceptable margin of error and a linear correlation value between test scores of 0.89. The WordRec automated smartphone app accurately determines word recognition scores. Level of evidence: 3b. Laryngoscope, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
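
    A minimal sketch of the agreement analysis described above: correlate app and audiologist scores across ears and count the pairs falling within a clinically acceptable margin. The scores and the 10-point margin below are placeholder assumptions, not the study's data.

      # Sketch of the agreement analysis: Pearson correlation between app-based
      # and audiologist word recognition scores, plus the fraction of ears
      # within a clinically acceptable margin. Values are placeholders.
      import numpy as np

      rng = np.random.default_rng(2)
      audiologist = rng.uniform(20, 100, 37)      # gold-standard scores (%) per ear
      app = audiologist + rng.normal(0, 6, 37)    # app scores with measurement noise

      r = np.corrcoef(audiologist, app)[0, 1]
      margin = 10.0                               # assumed acceptable margin, in percent
      within = np.mean(np.abs(audiologist - app) <= margin)
      print(f"r = {r:.2f}, within margin = {within:.0%}")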

  13. Recognition of facial expressions of different emotional intensities in patients with frontotemporal lobar degeneration

    NARCIS (Netherlands)

    Kessels, Roy P. C.; Gerritsen, Lotte; Montagne, Barbara; Ackl, Nibal; Diehl, Janine; Danek, Adrian

    2007-01-01

    Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD). Also, FTLD patients show impairments in emotion processing. Specifically, the perception of negative emotional facial expressions is affected. Generally, however, negative emotional expressions are regarded as more difficult to recognize than positive ones.

  14. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions

    OpenAIRE

    2014-01-01

    Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflec...

  15. Does facial resemblance enhance cooperation?

    Directory of Open Access Journals (Sweden)

    Trang Giang

    Full Text Available Facial self-resemblance has been proposed to serve as a kinship cue that facilitates cooperation between kin. In the present study, facial resemblance was manipulated by morphing stimulus faces with the participants' own faces or control faces (resulting in self-resemblant or other-resemblant composite faces). A norming study showed that the perceived degree of kinship was higher for the participants and the self-resemblant composite faces than for actual first-degree relatives. Effects of facial self-resemblance on trust and cooperation were tested in a paradigm that has proven to be sensitive to facial trustworthiness, facial likability, and facial expression. First, participants played a cooperation game in which the composite faces were shown. Then, likability ratings were assessed. In a source memory test, participants were required to identify old and new faces, and were asked to remember whether the faces belonged to cooperators or cheaters in the cooperation game. Old-new recognition was enhanced for self-resemblant faces in comparison to other-resemblant faces. However, facial self-resemblance had no effects on the degree of cooperation in the cooperation game, on the emotional evaluation of the faces as reflected in the likability judgments, or on the expectation that a face belonged to a cooperator rather than to a cheater. Therefore, the present results are clearly inconsistent with the assumption of an evolved kin recognition module built into the human face recognition system.
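
    In its simplest form, the morphing manipulation can be approximated by a weighted blend of two pre-aligned face images; published morphs additionally warp facial geometry via corresponding landmarks. A crude sketch under that simplification, with placeholder images:

      # Crude approximation of the morphing manipulation: a weighted blend of
      # two pre-aligned face images. Real morphs also warp facial geometry via
      # corresponding landmarks; this sketch only cross-dissolves intensities.
      import numpy as np

      def blend(stimulus, participant, weight=0.5):
          # weight = proportion of the participant's face mixed into the stimulus
          return (1 - weight) * stimulus + weight * participant

      stimulus = np.random.rand(128, 128)     # placeholder aligned face images
      participant = np.random.rand(128, 128)
      self_resemblant = blend(stimulus, participant, weight=0.5)
      print(self_resemblant.shape)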

  16. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions

    Directory of Open Access Journals (Sweden)

    Sabrina eBoll

    2014-07-01

    Full Text Available Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflects individual differences in attending to diagnostic features of facial expressions. For this purpose, visual exploration of emotional facial expressions was compared between a low (n=39) and a high (n=40) 5-HTT expressing group of healthy human volunteers in an eye tracking paradigm. Emotional faces were presented while manipulating the initial fixation such that saccadic changes towards the eyes and towards the mouth could be identified. We found that the low versus the high 5-HTT group demonstrated greater accuracy with regard to emotion classifications, particularly when faces were presented for a longer duration. No group differences in gaze orientation towards diagnostic facial features could be observed. However, participants in the low 5-HTT group exhibited more and faster fixation changes for certain emotions when faces were presented for a longer duration and overall face fixation times were reduced for this genotype group. These results suggest that the 5-HTT gene influences social perception by modulating the general vigilance to social cues rather than selectively affecting the pre-attentive detection of diagnostic facial features.

  17. Facial Expression Recognition Based on RGB-D

    Institute of Scientific and Technical Information of China (English)

    吴会霞; 陶青川; 龚雪友

    2016-01-01

    To address the low recognition accuracy of two-dimensional facial expression recognition under complex or poor illumination, an RGB-D facial expression recognition algorithm based on the fusion of multiple classifiers is proposed. The algorithm first extracts LPQ, Gabor, LBP and HOG feature information from the color information (Y, Cr, Q) and the depth information (D) of each image, and applies linear dimensionality reduction (PCA) and feature-space transformation (LDA) to the extracted high-dimensional features. Weak classifiers for each expression are then obtained with a nearest-neighbor rule and combined into strong classifiers by AdaBoost weight assignment; finally, the multiple classifiers are fused with a Bayes rule and the average recognition rate is reported. On the CurtinFaces and KinectFaceDB facial expression databases, which contain complex illumination variation, the algorithm achieves an average recognition rate of up to 98.80%. The results show that, compared with expression recognition on color images alone, fusing depth information clearly improves the facial expression recognition rate and has practical value.
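
    A condensed sketch of the fusion pipeline this record describes, under stated simplifications: only LBP and HOG features are extracted (LPQ and Gabor are omitted), a single color channel and a single depth channel stand in for (Y, Cr, Q, D), random arrays stand in for face images, and a product-rule (naive-Bayes-style) fusion of per-channel posteriors replaces the paper's AdaBoost weighting:

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

def channel_features(img):
    # LBP histogram plus HOG descriptor for one channel (the paper's
    # LPQ and Gabor features are omitted to keep the sketch short).
    lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hist, hog(img, pixels_per_cell=(8, 8))])

def fit_channel(imgs, y):
    # Per-channel pipeline: PCA reduction, LDA projection, then a
    # nearest-neighbor rule as the weak classifier.
    F = np.array([channel_features(im) for im in imgs])
    pca = PCA(n_components=min(10, len(F) - 1)).fit(F)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(F), y)
    knn = KNeighborsClassifier(n_neighbors=3).fit(
        lda.transform(pca.transform(F)), y)
    return lambda ims: knn.predict_proba(lda.transform(pca.transform(
        np.array([channel_features(i) for i in ims]))))

rng = np.random.default_rng(0)
y = np.repeat([0, 1, 2], 8)                                   # three expressions
color = rng.integers(0, 256, (24, 24, 24)).astype(np.uint8)   # e.g. the Y channel
depth = rng.integers(0, 256, (24, 24, 24)).astype(np.uint8)   # the D channel

models = [fit_channel(color, y), fit_channel(depth, y)]
# Product-rule (naive-Bayes-style) fusion of the per-channel posteriors.
fused = models[0](color[:4]) * models[1](depth[:4])
print("fused predictions:", fused.argmax(axis=1))
```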

  18. Extremely Preterm-Born Infants Demonstrate Different Facial Recognition Processes at 6-10 Months of Corrected Age.

    Science.gov (United States)

    Frie, Jakob; Padilla, Nelly; Ådén, Ulrika; Lagercrantz, Hugo; Bartocci, Marco

    2016-05-01

    To compare cortical hemodynamic responses to known and unknown facial stimuli between infants born extremely preterm and term-born infants, and to correlate the responses of the extremely preterm-born infants to regional cortical volumes at term-equivalent age. We compared 27 infants born extremely preterm (…) near-infrared spectroscopy. In the preterm group, we also performed structural brain magnetic resonance imaging and correlated regional cortical volumes to hemodynamic responses. The preterm-born infants demonstrated different cortical face recognition processes than the term-born infants. They had a significantly smaller hemodynamic response in the right frontotemporal areas while watching their mother's face (0.13 μmol/L vs 0.63 μmol/L; P < …) … recognition process compared with term-born infants.

  19. Validation of the WMS-III Facial Memory subtest with the Graduate Hospital Facial Memory Test in a sample of right and left anterior temporal lobectomy patients.

    Science.gov (United States)

    Chiaravalloti, Nancy D; Tulsky, David S; Glosser, Guila

    2004-06-01

    A number of studies have shown visuospatial memory deficits following anterior temporal lobectomy (ATL) in the right, nondominant temporal lobe (RATL). The current study examines 26 patients with intractable temporal lobe epilepsy who underwent ATL in either the right (RATL, n = 16) or left temporal lobe (LATL, n = 10) on two tests of facial memory abilities, the Wechsler Memory Scale-III (WMS-III) Faces subtest and the Graduate Hospital Facial Memory Test (FMT). Repeated measures ANOVA on the FMT indicated a significant main effect of side of surgery. The RATL group performed significantly below the LATL group overall. Both groups showed a slight, but non-significant, improvement in performance from pre- to postsurgery on the FMT immediate memory, likely due to practice effects. Repeated measures ANOVA on the WMS-III Faces subtest revealed a significant interaction of group (RATL vs. LATL) by delay (immediate vs. delayed). Overall, the LATL group showed an improvement in recognition scores from immediate to delayed memory, whereas the RATL group performed similarly at both immediate and delayed testing. No effects of surgery were noted on the WMS-III. Following initial data analysis, the WMS-III Faces I and II data were re-scored using the scoring suggested by Holdnack and Delis (2003), earlier in this issue. Repeated measures ANOVA revealed a trend toward significance in the three-way interaction of group (RATL vs. LATL) × time of testing (pre- versus postop) × delay (immediate vs. delayed memory). On the Faces I subtest, both the RATL and LATL groups showed a decline from preoperative to postoperative testing. However, on Faces II the LATL group showed an increase in performance from preoperative to postoperative testing, while the RATL group showed a decline in performance from preoperative to postoperative testing. While the FMT appears to be superior to the WMS-III Faces subtest in identifying deficits in facial memory prior to and following RATL, the…

  20. Facial Expression Recognition Based on LBP and SVM Decision Tree

    Institute of Scientific and Technical Information of China (English)

    李扬; 郭海礁

    2014-01-01

    To improve the recognition rate of facial expression recognition, a facial expression recognition algorithm combining LBP and an SVM decision tree is proposed. First, the facial expression image is converted into an LBP feature map with the LBP operator; the LBP map is then converted into a sequence of LBP histogram features; finally, classification and recognition of facial expressions are completed by the SVM decision tree algorithm. Experiments on the JAFFE facial expression database demonstrate the effectiveness of the algorithm.
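
    A minimal sketch of the SVM-decision-tree stage, assuming feature vectors (e.g. LBP histogram sequences) are already extracted; the half-split of classes at each node is arbitrary here, whereas a real system would presumably order classes by separability:

```python
import numpy as np
from sklearn.svm import SVC

class SVMDecisionTree:
    # Each internal node trains one binary SVM that routes a sample to
    # the half of the remaining classes it most likely belongs to.
    def fit(self, X, y):
        self.classes = np.unique(y)
        if len(self.classes) > 1:
            left = set(self.classes[: len(self.classes) // 2])
            is_left = np.array([c in left for c in y])
            self.svm = SVC(kernel="rbf").fit(X, is_left)
            self.left = SVMDecisionTree().fit(X[is_left], y[is_left])
            self.right = SVMDecisionTree().fit(X[~is_left], y[~is_left])
        return self

    def predict_one(self, x):
        node = self
        while len(node.classes) > 1:
            node = node.left if node.svm.predict(x[None])[0] else node.right
        return node.classes[0]

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 59))        # stand-in for LBP histogram sequences
y = np.repeat(np.arange(6), 10)      # e.g. the six JAFFE expression classes
tree = SVMDecisionTree().fit(X, y)
print(tree.predict_one(X[0]))
```

    A one-vs-rest SVM would classify equally well here; the tree trades some error propagation down the branches for fewer SVM evaluations per prediction.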

  1. The Influence of Music on Facial Emotion Recognition in Children with Autism Spectrum Disorder and Neurotypical Children.

    Science.gov (United States)

    Brown, Laura S

    2017-03-01

    Children with autism spectrum disorder (ASD) often struggle with social skills, including the ability to perceive emotions based on facial expressions. Research evidence suggests that many individuals with ASD can perceive emotion in music. Examining whether music can be used to enhance recognition of facial emotion by children with ASD would inform development of music therapy interventions. The purpose of this study was to investigate the influence of music with a strong emotional valence (happy; sad) on the ability of children with ASD to label emotions depicted in facial photographs, and their response time. Thirty neurotypical children and 20 children with high-functioning ASD rated expressions of happy, neutral, and sad in 30 photographs under two music listening conditions (sad music; happy music). During each music listening condition, participants rated the 30 images using a 7-point scale that ranged from very sad to very happy. Response time data were also collected across both conditions. A significant two-way interaction revealed that participants' ratings of happy and neutral faces were unaffected by music conditions, but sad faces were perceived to be sadder with sad music than with happy music. Across both conditions, neurotypical children rated the happy faces as happier and the sad faces as sadder than did participants with ASD. Response times of the neurotypical children were consistently shorter than response times of the children with ASD; both groups took longer to rate sad faces than happy faces. Response times of neurotypical children were generally unaffected by the valence of the music condition; however, children with ASD took longer to respond when listening to sad music. Music appears to affect perceptions of emotion in children with ASD, and perceptions of sad facial expressions seem to be more affected by emotionally congruent background music than are perceptions of happy or neutral faces.

  2. Individual Differences in the Speed of Facial Emotion Recognition Show Little Specificity but Are Strongly Related with General Mental Speed: Psychometric, Neural and Genetic Evidence.

    Science.gov (United States)

    Liu, Xinyang; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Cai, Xinxia; Wilhelm, Oliver

    2017-01-01

    Facial identity and facial expression processing are crucial socio-emotional abilities but seem to show only limited psychometric uniqueness when the processing speed is considered in easy tasks. We applied a comprehensive measurement of processing speed and contrasted performance specificity in socio-emotional, social and non-social stimuli from an individual differences perspective. Performance in a multivariate task battery could be best modeled by a general speed factor and a first-order factor capturing some specific variance due to processing emotional facial expressions. We further tested equivalence of the relationships between speed factors and polymorphisms of dopamine and serotonin transporter genes. Results show that the speed factors are not only psychometrically equivalent but invariant in their relation with the Catechol-O-Methyl-Transferase (COMT) Val158Met polymorphism. However, the 5-HTTLPR/rs25531 serotonin polymorphism was related with the first-order factor of emotion perception speed, suggesting a specific genetic correlate of processing emotions. We further investigated the relationship between several components of event-related brain potentials with psychometric abilities, and tested emotion specific individual differences at the neurophysiological level. Results revealed swifter emotion perception abilities to go along with larger amplitudes of the P100 and the Early Posterior Negativity (EPN), when emotion processing was modeled on its own. However, after partialling out the shared variance of emotion perception speed with general processing speed-related abilities, brain-behavior relationships did not remain specific for emotion. Together, the present results suggest that speed abilities are strongly interrelated but show some specificity for emotion processing speed at the psychometric level. At both genetic and neurophysiological levels, emotion specificity depended on whether general cognition is taken into account or not. These…

  5. Acute effects of delta-9-tetrahydrocannabinol, cannabidiol and their combination on facial emotion recognition: a randomised, double-blind, placebo-controlled study in cannabis users.

    Science.gov (United States)

    Hindocha, Chandni; Freeman, Tom P; Schafer, Grainne; Gardener, Chelsea; Das, Ravi K; Morgan, Celia J A; Curran, H Valerie

    2015-03-01

    Acute administration of the primary psychoactive constituent of cannabis, Δ-9-tetrahydrocannabinol (THC), impairs human facial affect recognition, implicating the endocannabinoid system in emotional processing. Another main constituent of cannabis, cannabidiol (CBD), has seemingly opposite functional effects on the brain. This study aimed to determine the effects of THC and CBD, both alone and in combination, on emotional facial affect recognition. Forty-eight volunteers, selected for high and low frequency of cannabis use and schizotypy, were administered THC (8 mg), CBD (16 mg), THC+CBD (8 mg + 16 mg) and placebo, by inhalation, in a 4-way, double-blind, placebo-controlled crossover design. They completed an emotional facial affect recognition task including fearful, angry, happy, sad, surprise and disgust faces varying in intensity from 20% to 100%. A visual analogue scale (VAS) of feeling 'stoned' was also completed. In comparison to placebo, CBD improved emotional facial affect recognition at 60% emotional intensity; THC was detrimental to the recognition of ambiguous faces of 40% intensity. The combination of THC+CBD produced no impairment. Relative to placebo, both THC alone and combined THC+CBD equally increased feelings of being 'stoned'. CBD did not influence feelings of being 'stoned'. No effects of frequency of use or schizotypy were found. In conclusion, CBD improves recognition of emotional facial affect and attenuates the impairment induced by THC. This is the first human study examining the effects of different cannabinoids on emotional processing. It provides preliminary evidence that different pharmacological agents acting upon the endocannabinoid system can both improve and impair recognition of emotional faces.

  6. Facial recognition of heroin vaccine opiates: type 1 cross-reactivities of antibodies induced by hydrolytically stable haptenic surrogates of heroin, 6-acetylmorphine, and morphine.

    Science.gov (United States)

    Matyas, Gary R; Rice, Kenner C; Cheng, Kejun; Li, Fuying; Antoline, Joshua F G; Iyer, Malliga R; Jacobson, Arthur E; Mayorov, Alexander V; Beck, Zoltan; Torres, Oscar B; Alving, Carl R

    2014-03-14

    Novel synthetic compounds similar to heroin and its major active metabolites, 6-acetylmorphine and morphine, were examined as potential surrogate haptens for the ability to interface with the immune system for a heroin vaccine. Recent studies have suggested that heroin-like haptens must degrade hydrolytically to induce independent immune responses both to heroin and to the metabolites, resulting in antisera containing mixtures of antibodies (type 2 cross-reactivity). To test this concept, two unique hydrolytically stable haptens were created based on presumed structural facial similarities to heroin or to its active metabolites. After conjugation of a heroin-like hapten (DiAmHap) to tetanus toxoid and mixing with liposomes containing monophosphoryl lipid A, high titers of antibodies after two injections in mice had complementary binding sites that exhibited strong type 1 ("true") specific cross-reactivity with heroin and with both of its physiologically active metabolites. Mice immunized with each surrogate hapten exhibited reduced antinociceptive effects caused by injection of heroin. This approach obviates the need to create hydrolytically unstable synthetic heroin-like compounds to induce independent immune responses to heroin and its active metabolites for vaccine development. Facial recognition of hydrolytically stable surrogate haptens by antibodies together with type 1 cross-reactivities with heroin and its metabolites can help to guide synthetic chemical strategies for efficient development of a heroin vaccine.

  7. Developmental Changes in the Primacy of Facial Cues for Emotion Recognition

    Science.gov (United States)

    Leitzke, Brian T.; Pollak, Seth D.

    2016-01-01

    There have been long-standing differences of opinion regarding the influence of the face relative to that of contextual information on how individuals process and judge facial expressions of emotion. However, developmental changes in how individuals use such information have remained largely unexplored and could be informative in attempting to…

  8. Recognition of Emotional and Nonemotional Facial Expressions: A Comparison between Williams Syndrome and Autism

    Science.gov (United States)

    Lacroix, Agnes; Guidetti, Michele; Roge, Bernadette; Reilly, Judy

    2009-01-01

    The aim of our study was to compare two neurodevelopmental disorders (Williams syndrome and autism) in terms of the ability to recognize emotional and nonemotional facial expressions. The comparison of these two disorders is particularly relevant to the investigation of face processing and should contribute to a better understanding of social…

  9. Shy Children Are Less Sensitive to Some Cues to Facial Recognition

    Science.gov (United States)

    Brunet, Paul M.; Mondloch, Catherine J.; Schmidt, Louis A.

    2010-01-01

    Temperamental shyness in children is characterized by avoidance of faces and eye contact, beginning in infancy. We conducted two studies to determine whether temperamental shyness was associated with deficits in sensitivity to some cues to facial identity. In Study 1, 40 typically developing 10-year-old children made same/different judgments about…

  10. Recognition of Facial Expressions of Emotion in Adults with Down Syndrome

    Science.gov (United States)

    Virji-Babul, Naznin; Watt, Kimberley; Nathoo, Farouk; Johnson, Peter

    2012-01-01

    Research on facial expressions in individuals with Down syndrome (DS) has been conducted using photographs. Our goal was to examine the effect of motion on perception of emotional expressions. Adults with DS, adults with typical development matched for chronological age (CA), and children with typical development matched for developmental age (DA)…

  15. Recognition of Facial Emotions among Maltreated Children with High Rates of Post-Traumatic Stress Disorder

    Science.gov (United States)

    Masten, Carrie L.; Guyer, Amanda E.; Hodgdon, Hilary B.; McClure, Erin B.; Charney, Dennis S.; Ernst, Monique; Kaufman, Joan; Pine, Daniel S.; Monk, Christopher S.

    2008-01-01

    Objective: The purpose of this study is to examine processing of facial emotions in a sample of maltreated children showing high rates of post-traumatic stress disorder (PTSD). Maltreatment during childhood has been associated independently with both atypical processing of emotion and the development of PTSD. However, research has provided little…

  16. Research progress on facial recognition deficits of schizophrenia

    Institute of Scientific and Technical Information of China (English)

    徐骁; 谭淑平; 薛明明

    2015-01-01

    Social cognition is a key factor that influences and predicts the functional outcome of schizophrenia. Face processing and facial expression perception are two core components of social cognitive function. In this review, we discuss facial recognition deficits in schizophrenia from both the emotional-face and the non-emotional-face perspective, explore the cognitive-neural mechanisms underlying these deficits, and summarize the latest research progress on impaired facial recognition in schizophrenic patients obtained with eye-movement technology.

  17. Role of fusiform and anterior temporal cortical areas in facial recognition.

    Science.gov (United States)

    Nasr, Shahin; Tootell, Roger B H

    2012-11-15

    Recent fMRI studies suggest that cortical face processing extends well beyond the fusiform face area (FFA), including unspecified portions of the anterior temporal lobe. However, the exact location of such anterior temporal region(s), and their role during active face recognition, remain unclear. Here we demonstrate that (in addition to FFA) a small bilateral site in the anterior tip of the collateral sulcus ('AT'; the anterior temporal face patch) is selectively activated during recognition of faces but not houses (a non-face object). In contrast to the psychophysical prediction that inverted and contrast reversed faces are processed like other non-face objects, both FFA and AT (but not other visual areas) were also activated during recognition of inverted and contrast reversed faces. However, response accuracy was better correlated to recognition-driven activity in AT, compared to FFA. These data support a segregated, hierarchical model of face recognition processing, extending to the anterior temporal cortex.

  18. Research progress of facial expression recognition in children

    Institute of Scientific and Technical Information of China (English)

    王道阳; 殷欣

    2015-01-01

    Recognition of facial expressions is an important psychological ability and social skill, and facial expression recognition deficits seriously affect children's interpersonal communication and social interaction, especially for children with autism spectrum disorder. This paper discusses the research history, developmental course, influencing factors, future research directions and current limitations of work on facial expression recognition, describes its implications for education, and specifically reviews facial expression recognition in children with autism spectrum disorder.

  19. Arginine vasopressin 1a receptor RS3 promoter microsatellites in schizophrenia: a study of the effect of the "risk" allele on clinical symptoms and facial affect recognition.

    Science.gov (United States)

    Golimbet, Vera; Alfimova, Margarita; Abramova, Lilia; Kaleda, Vasily; Gritsenko, Inga

    2015-02-28

    We studied the AVPR1A RS3 polymorphism in schizophrenic patients and controls. AVPR1A RS3 was not associated with schizophrenia. The 327-bp allele, implicated in autism and social behavior, was associated with negative symptoms and tended to be linked to patients' facial affect recognition, suggesting an impact on the social phenotypes of schizophrenia.

  20. Spatio-Temporal Pain Recognition in CNN-based Super-Resolved Facial Images

    DEFF Research Database (Denmark)

    Bellantonio, Marco; Haque, Mohammad Ahsanul; Rodriguez, Pau

    2017-01-01

    Automatic pain detection is a long expected solution to a prevalent medical problem of pain management. This is more relevant when the subject of pain is young children or patients with limited ability to communicate about their pain experience. Computer vision-based analysis of facial pain expression provides a way of efficient pain detection. When deep machine learning methods came into the scene, automatic pain detection exhibited even better performance. In this paper, we identify three important factors to exploit in automatic pain detection: the spatial information regarding pain available in each of the facial video frames, the temporal-axis information regarding the pain expression pattern in a subject's video sequence, and the variation of face resolution. We employed a combination of a convolutional neural network and a recurrent neural network to set up a deep hybrid pain detection framework…
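
    A minimal PyTorch sketch of the CNN-plus-RNN hybrid named above; the layer sizes are illustrative and the super-resolution stage studied in the paper is omitted:

```python
import torch
import torch.nn as nn

class HybridPainDetector(nn.Module):
    # Per-frame CNN features feed a GRU over the temporal axis; the last
    # hidden state is classified into pain / no-pain.
    def __init__(self, n_classes=2, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, feat_dim), nn.ReLU())
        self.rnn = nn.GRU(feat_dim, 32, batch_first=True)
        self.head = nn.Linear(32, n_classes)

    def forward(self, clips):                  # clips: (batch, time, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, h = self.rnn(feats)
        return self.head(h[-1])

model = HybridPainDetector()
logits = model(torch.randn(2, 8, 1, 48, 48))   # two 8-frame face clips
print(logits.shape)                            # torch.Size([2, 2])
```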

  1. Effects of training set selection on pain recognition via facial expressions

    Science.gov (United States)

    Shier, Warren A.; Yanushkevich, Svetlana N.

    2016-07-01

    This paper presents an approach to pain expression classification based on Gabor energy filters with Support Vector Machines (SVMs), followed by analyzing the effects of training set variations on the system's classification rate. This approach is tested on the UNBC-McMaster Shoulder Pain Archive, which consists of spontaneous pain images, hand labelled using the Prkachin and Solomon Pain Intensity scale. In this paper, the subject's pain intensity level has been quantized into three disjoint groups: no pain, weak pain and strong pain. The results of experiments show that Gabor energy filters with SVMs provide results comparable to or better than previous filter-based pain recognition methods, with precision rates of 74%, 30% and 78% for no pain, weak pain and strong pain, respectively. The study of effects of intra-class skew, or changing the number of images per subject, shows that both completely removing and over-representing poor quality subjects in the training set have little effect on the overall accuracy of the system. This result suggests that poor quality subjects could be removed from the training set to save offline training time and that SVM is robust not only to outliers in training data, but also to significant amounts of poor quality data mixed into the training sets.
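
    A brief sketch of the described front end: Gabor energy is the magnitude of the quadrature-pair filter response, pooled into simple statistics and fed to an SVM over the three pain levels; random arrays and labels stand in for the UNBC-McMaster data:

```python
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_energy_features(img, frequencies=(0.1, 0.2), n_orient=4):
    # Gabor energy: magnitude of the quadrature (real, imaginary) pair,
    # pooled to mean and standard deviation per filter.
    feats = []
    for f in frequencies:
        for theta in np.linspace(0, np.pi, n_orient, endpoint=False):
            real, imag = gabor(img, frequency=f, theta=theta)
            energy = np.hypot(real, imag)
            feats += [energy.mean(), energy.std()]
    return np.array(feats)

rng = np.random.default_rng(2)
X = np.array([gabor_energy_features(rng.random((32, 32))) for _ in range(30)])
y = rng.integers(0, 3, 30)          # stand-in labels: no / weak / strong pain
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:5]))
```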

  2. Accuracy and reaction time in recognition of facial emotions in people with multiple sclerosis.

    Science.gov (United States)

    Parada-Fernández, Pamela; Oliva-Macías, Mireia; Amayra, Imanol; López-Paz, Juan F; Lázaro, Esther; Martínez, Óscar; Jometón, Amaia; Berrocoso, Sarah; García de Salazar, Héctor; Pérez, Manuel

    2015-11-16

    Introduction. Emotional facial expression is a basic guide in social interaction and, therefore, impairments in its expression or recognition imply an important limitation for communication. Moreover, it is not known how the cognitive impairment and depressive symptoms commonly found in patients with multiple sclerosis influence emotion recognition. Aim. To assess reaction time and response accuracy in the recognition of facial expressions in people affected by multiple sclerosis, and to evaluate variables that may modulate emotion recognition, such as depression and cognitive function. Subjects and methods. The study has a non-experimental, cross-sectional design with a single measurement. The sample comprised 85 participants: 45 with a diagnosis of multiple sclerosis and 40 control subjects. Results. Subjects with multiple sclerosis showed significant differences in both reaction time and response accuracy on neuropsychological tests compared with the control group. Explanatory models of emotion recognition were identified. Conclusion. Subjects with multiple sclerosis face difficulties in the recognition of facial emotions, and differences in memory, attention, processing speed and depressive symptomatology were observed relative to the control group.

  3. Wanting it Too Much: An Inverse Relation Between Social Motivation and Facial Emotion Recognition in Autism Spectrum Disorder.

    Science.gov (United States)

    Garman, Heather D; Spaulding, Christine J; Webb, Sara Jane; Mikami, Amori Yee; Morris, James P; Lerner, Matthew D

    2016-12-01

    This study examined social motivation and early-stage face perception as frameworks for understanding impairments in facial emotion recognition (FER) in a well-characterized sample of youth with autism spectrum disorders (ASD). Early-stage face perception (N170 event-related potential latency) was recorded while participants completed a standardized FER task, while social motivation was obtained via parent report. Participants with greater social motivation exhibited poorer FER, while those with shorter N170 latencies exhibited better FER for child angry faces stimuli. Social motivation partially mediated the relationship between a faster N170 and better FER. These effects were all robust to variations in IQ, age, and ASD severity. These findings augur against theories implicating social motivation as uniformly valuable for individuals with ASD, and augment models suggesting a close link between early-stage face perception, social motivation, and FER in this population. Broader implications for models and development of FER in ASD are discussed.

  4. Assessment of perception of morphed facial expressions using the Emotion Recognition Task: Normative data from healthy participants aged 8-75

    NARCIS (Netherlands)

    Kessels, R.P.C.; Montagne, B.; Hendriks, A.W.; Perrett, D.I.; de Haan, E.H.F.

    2014-01-01

    The ability to recognize and label emotional facial expressions is an important aspect of social cognition. However, existing paradigms to examine this ability present only static facial expressions, suffer from ceiling effects or have limited or no norms. A computerized test, the Emotion Recognition Task…

  6. Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.

    Science.gov (United States)

    Wieckowski, Andrea Trubanova; White, Susan W

    2017-01-01

    Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expressive ability deficiency in ASD. One recognition and two expression tasks were administered. Recognition was assessed in a forced-choice paradigm, and expression was assessed during scripted and free-choice response (in response to emotional stimuli) tasks in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less to the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, such that those with ASD expressed emotion less clearly; no group difference was found during the scripted task. Results suggest altered eye gaze to the mouth region but not the eye region as a candidate mechanism for decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and emotion recognition and expression deficits.

  7. FACIAL EXPRESSION RECOGNITION UNDER PARTIAL OCCLUSION

    Institute of Scientific and Technical Information of China (English)

    李蕊; 刘鹏宇; 贾克斌

    2016-01-01

    We propose a novel facial expression recognition method based on Gabor filters and the gray-level co-occurrence matrix (GLCM), aimed at facial expression recognition under partial occlusion. We first design a scheme that extracts Gabor feature statistics block by block, generating a low-dimensional Gabor feature vector. Then, considering that block-wise Gabor features lose the association between pixels, we introduce the GLCM, which reflects the spatial distribution of pixel pairs, into the expression recognition field to compensate for the deficiency caused by the block-wise processing of Gabor features. Finally, the extracted low-dimensional Gabor feature vector and the GLCM texture features are linearly combined and, after Gaussian normalization, form a set of low-dimensional feature vectors for feature representation. Experiments on the JAFFE database and the Radboud Faces Database (RaFD) show that the algorithm is highly robust, uses low-dimensional feature vectors, classifies quickly, and achieves good recognition rates for facial expressions with occlusion of different regions and of different extents.
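
    A short sketch of the two feature streams and their fusion, with assumed settings (one Gabor frequency, a 4×4 block grid, two co-occurrence angles); the classifier stage is omitted:

```python
import numpy as np
from skimage.filters import gabor
from skimage.feature import graycomatrix, graycoprops

def block_gabor_stats(img, grid=4):
    # Block-wise Gabor statistics: mean filter magnitude per block,
    # giving a low-dimensional Gabor feature vector.
    real, imag = gabor(img, frequency=0.2)
    mag = np.hypot(real, imag)
    h, w = mag.shape
    return np.array([mag[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
                     for i in range(grid) for j in range(grid)])

def glcm_texture(img):
    # Co-occurrence features restore the pixel-pair spatial information
    # that block-wise Gabor statistics discard.
    g = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                     levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.concatenate([graycoprops(g, p).ravel() for p in props])

def znorm(v):
    return (v - v.mean()) / (v.std() + 1e-8)   # Gaussian normalization

img = (np.random.default_rng(3).random((64, 64)) * 255).astype(np.uint8)
feature = np.concatenate([znorm(block_gabor_stats(img)), znorm(glcm_texture(img))])
print(feature.shape)                            # (16 Gabor + 8 GLCM values,)
```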

  8. Brain functional changes in facial expression recognition in patients with major depressive disorder before and after antidepressant treatment: a functional magnetic resonance imaging study

    Institute of Scientific and Technical Information of China (English)

    Wenyan Jiang; Zhongmin Yin; Yixin Pang; Feng Wu; Lingtao Kong; Ke Xu

    2012-01-01

    Functional magnetic resonance imaging was used during emotion recognition to identify changes in functional brain activation in 21 first-episode, treatment-naive major depressive disorder patients before and after antidepressant treatment. Following escitalopram oxalate treatment, patients exhibited decreased activation in bilateral precentral gyrus, bilateral middle frontal gyrus, left middle temporal gyrus, bilateral postcentral gyrus, left cingulate and right parahippocampal gyrus, and increased activation in right superior frontal gyrus, bilateral superior parietal lobule and left occipital gyrus during sad facial expression recognition. After antidepressant treatment, patients also exhibited decreased activation in the bilateral middle frontal gyrus, bilateral cingulate and right parahippocampal gyrus, and increased activation in the right inferior frontal gyrus, left fusiform gyrus and right precuneus during happy facial expression recognition. Our experimental findings indicate that the limbic-cortical network might be a key target region for antidepressant treatment in major depressive disorder.

  9. A Simultaneous Facial Motion Tracking and Expression Recognition Algorithm

    Institute of Scientific and Technical Information of China (English)

    於俊; 汪增福; 李睿

    2015-01-01

    For facial expression recognition from monocular video with a dynamic background, a real-time system is proposed based on an algorithm in which facial motion is tracked and facial expression is recognized simultaneously. First, an online appearance model and a cylindrical head model are combined to track 3D facial motion from video in a particle-filtering framework. Second, static facial expression knowledge is extracted from the anatomy of facial expression. Third, dynamic facial expression knowledge is extracted through manifold learning. Finally, during facial motion tracking, the static and dynamic knowledge are fused to recognize the expression. Experimental results confirm the advantages of this system for facial expression recognition, even in the presence of significant head pose and facial expression variations.
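
    A generic particle-filter skeleton for the tracking stage, assuming one appearance-likelihood callable per frame; the six-dimensional state stands in for the cylinder model's 3D rotation and translation, and the online appearance model itself is not implemented:

```python
import numpy as np

def particle_filter_track(frame_likelihoods, n_particles=200, noise=0.05):
    # frame_likelihoods: one callable per frame mapping an array of
    # particle states to appearance likelihoods (the online appearance
    # model would live inside these callables).
    rng = np.random.default_rng(4)
    particles = rng.normal(0.0, 1.0, (n_particles, 6))  # 3D rotation + translation
    track = []
    for likelihood in frame_likelihoods:
        particles += rng.normal(0.0, noise, particles.shape)  # diffusion motion model
        w = likelihood(particles)
        w = w / w.sum()
        track.append(w @ particles)                     # posterior-mean pose estimate
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
    return np.array(track)

# Toy appearance likelihood peaked at a slowly drifting true pose.
true_poses = [np.full(6, 0.01 * t) for t in range(20)]
frames = [lambda p, t=t: np.exp(-np.sum((p - true_poses[t]) ** 2, axis=1))
          for t in range(20)]
print(particle_filter_track(frames)[-1].round(2))       # estimate near 0.19
```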

  10. Facial emotion recognition in schizophrenia: an event-related potentials study.

    Science.gov (United States)

    Tempesta, Daniela; Stratta, Paolo; Marrelli, Alfonso; Aloisi, Paolo; Arnone, Benedetto; Gasbarri, Antonella; Rossi, Alessandro

    2014-01-01

    Previous studies have extensively reported an impaired ability to recognize emotional stimuli in patients with schizophrenia. We used pictures from Ekman and Friesen in an event-related potentials study to investigate the neurophysiological correlates of fear emotional processing, compared with happiness, in patients with schizophrenia versus healthy subjects. A significantly lower P300 amplitude for fear processing, but no difference in P100, N170 and N250 amplitude, was found in patients compared to controls. These data suggest that the ability for basic visual processing is preserved in schizophrenia, whereas facial affect processing is impaired.

  11. Facial Expression Recognition Techniques Based on Bilinear Model

    Institute of Scientific and Technical Information of China (English)

    徐欢

    2014-01-01

    Aiming at problems encountered in current facial expression recognition, and based on the data in the 3D expression database BU-3DFE, we study point-cloud alignment of 3D facial expression data, build bilinear models on the aligned data, and improve the bilinear-model recognition algorithm to form a new recognition and classification algorithm. The improvement reduces the weight of identity features in the computation and minimizes the influence of identity features on the overall expression recognition process, so as to improve facial expression recognition results and ultimately achieve highly robust 3D facial expression recognition.
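
    A compact sketch of one way to factor identity out of expression classification with a bilinear (identity × expression) model; the asymmetric SVD construction of Tenenbaum and Freeman is used here as a stand-in for the record's BU-3DFE pipeline, with random arrays in place of aligned point-cloud features:

```python
import numpy as np

rng = np.random.default_rng(5)
n_id, n_expr, d, J = 10, 6, 40, 5   # identities, expressions, feature dim, factors

# Toy aligned feature vectors y[id, expr]; real input would come from the
# aligned BU-3DFE point clouds.
Y = rng.normal(size=(n_id, n_expr, d))

# Asymmetric bilinear model: stack expressions row-wise and factor with an
# SVD into expression-specific maps A[e] and identity "content" vectors.
stacked = Y.transpose(1, 2, 0).reshape(n_expr * d, n_id)     # (E*d, I)
U, s, Vt = np.linalg.svd(stacked, full_matrices=False)
A = (U[:, :J] * s[:J]).reshape(n_expr, d, J)                 # per-expression basis

def classify_expression(yvec):
    # Identity is factored out by fitting the best content vector under each
    # expression map; the smallest residual names the expression.
    res = [np.linalg.norm(yvec - A[e] @ np.linalg.lstsq(A[e], yvec, rcond=None)[0])
           for e in range(n_expr)]
    return int(np.argmin(res))

# With real, structured data this should recover the probe's expression.
print(classify_expression(Y[0, 3]))
```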

  12. Below and beyond the recognition of emotional facial expressions in alcohol dependence: from basic perception to social cognition.

    Science.gov (United States)

    D'Hondt, Fabien; Campanella, Salvatore; Kornreich, Charles; Philippot, Pierre; Maurage, Pierre

    2014-01-01

    Studies that have carried out experimental evaluation of emotional skills in alcohol-dependence have, up to now, been mainly focused on the exploration of emotional facial expressions (EFE) decoding. In the present paper, we provide some complements to the recent systematic literature review published by Donadon and de Lima Osório on this crucial topic. We also suggest research avenues that must be, in our opinion, considered in the coming years. More precisely, we propose, first, that a battery integrating a set of emotional tasks relating to different processes should be developed to better systemize EFE decoding measures in alcohol-dependence. Second, we propose to go below EFE recognition deficits and to seek for the roots of those alterations, particularly by investigating the putative role played by early visual processing and vision-emotion interactions in the emotional impairment observed in alcohol-dependence. Third, we insist on the need to go beyond EFE recognition deficits by suggesting that they only constitute a part of wider emotional deficits in alcohol-dependence. Importantly, since the efficient decoding of emotions is a crucial ability for the development and maintenance of satisfactory interpersonal relationships, we suggest that disruption of this ability in alcohol-dependent individuals may have adverse consequences for their social integration. One way to achieve this research agenda would be to develop the field of affective and social neuroscience of alcohol-dependence, which could ultimately lead to major advances at both theoretical and therapeutic levels.

  13. Application of LBP information of feature-points in facial expression recognition

    Institute of Scientific and Technical Information of China (English)

    刘伟锋; 王延江

    2009-01-01

    A facial expression recognition method based on the Local Binary Pattern (LBP) information of feature points is proposed. After an analysis of LBP features in facial expression recognition, feature points around the eyes in the upper face and around the mouth in the lower face, which carry rich expression information, are selected, and the LBP information of the neighborhood of each feature point is computed as the expression feature for recognition. Experimental results show that the feature-point LBP method requires no face pre-alignment and is better suited to facial expression recognition than traditional LBP features.
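
    A minimal sketch of the feature construction, assuming the eye and mouth feature points are already located (no landmark detector is shown); each point contributes the LBP histogram of its local neighborhood, with no face pre-alignment:

```python
import numpy as np
from skimage.feature import local_binary_pattern

def featurepoint_lbp(img, points, half=8):
    # LBP histogram of a small patch around each feature point; the
    # concatenation is the expression feature, with no face alignment.
    feats = []
    for r, c in points:
        patch = img[max(r - half, 0):r + half, max(c - half, 0):c + half]
        lbp = local_binary_pattern(patch, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        feats.append(hist)
    return np.concatenate(feats)

img = (np.random.default_rng(6).random((96, 96)) * 255).astype(np.uint8)
# Hypothetical eye and mouth landmarks (a detector would supply these).
points = [(30, 30), (30, 64), (70, 40), (70, 56)]
print(featurepoint_lbp(img, points).shape)     # (4 points x 10 bins,)
```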

  14. Face Recognition using 3D Facial Shape and Color Map Information: Comparison and Combination

    CERN Document Server

    Godil, Afzal; Grother, Patrick

    2011-01-01

    In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.
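
    A small sketch of the comparison and fusion described above, with random arrays standing in for the CAESAR 3D-surface and color-map features; each modality is matched in its own PCA subspace, and the z-normalized scores are combined with a sum rule, one of several possible fusion approaches:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
shape_gallery = rng.normal(size=(50, 300))   # 3D-surface features per subject
color_gallery = rng.normal(size=(50, 300))   # color-map features per subject

def match_scores(gallery, probe, n_components=20):
    # One modality: project gallery and probe into a PCA subspace and
    # score each subject by negative distance to the probe.
    pca = PCA(n_components=n_components).fit(gallery)
    G = pca.transform(gallery)
    p = pca.transform(probe[None])[0]
    return -np.linalg.norm(G - p, axis=1)

def znorm(s):
    return (s - s.mean()) / s.std()

probe_id = 17
s_shape = match_scores(shape_gallery, shape_gallery[probe_id])
s_color = match_scores(color_gallery, color_gallery[probe_id])

# Sum-rule fusion of the z-normalized per-modality scores.
fused = znorm(s_shape) + znorm(s_color)
print(int(np.argmax(fused)))                 # -> 17, the matching subject
```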

  15. Annotation: Development of facial expression recognition from childhood to adolescence: behavioural and neurological perspectives.

    Science.gov (United States)

    Herba, Catherine; Phillips, Mary

    2004-10-01

    Intact emotion processing is critical for normal emotional development. Recent advances in neuroimaging have facilitated the examination of brain development, and have allowed for the exploration of the relationships between the development of emotion processing abilities, and that of associated neural systems. A literature review was performed of published studies examining the development of emotion expression recognition in normal children and psychiatric populations, and of the development of neural systems important for emotion processing. Few studies have explored the development of emotion expression recognition throughout childhood and adolescence. Behavioural studies suggest continued development throughout childhood and adolescence (reflected by accuracy scores and speed of processing), which varies according to the category of emotion displayed. Factors such as sex, socio-economic status, and verbal ability may also affect this development. Functional neuroimaging studies in adults highlight the role of the amygdala in emotion processing. Results of the few neuroimaging studies in children have focused on the role of the amygdala in the recognition of fearful expressions. Although results are inconsistent, they provide evidence throughout childhood and adolescence for the continued development of and sex differences in amygdalar function in response to fearful expressions. Studies exploring emotion expression recognition in psychiatric populations of children and adolescents suggest deficits that are specific to the type of disorder and to the emotion displayed. Results from behavioural and neuroimaging studies indicate continued development of emotion expression recognition and neural regions important for this process throughout childhood and adolescence. Methodological inconsistencies and disparate findings make any conclusion difficult, however. Further studies are required examining the relationship between the development of emotion expression…

  16. Facial Expression Recognition Method Based on Topological Perception Theory

    Institute of Scientific and Technical Information of China (English)

    王晓峰; 张丽君

    2012-01-01

    In the traditional computer vision view, low-level tasks are regarded as autonomous, bottom-up processes, which leads to low image recognition rates. This paper proposes a facial expression recognition method based on topological perception theory. The method uses the stability of the topological invariants of the human face to extract the facial contour, combines the extracted features with principal component analysis (PCA) as large-scale facial feature information, applies the global-precedence principle to the facial expression recognition algorithm, and designs a multi-layer RBF + AdaBoost classifier. Experimental results show that the method improves the facial expression recognition rate.
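
    A small sketch of the classifier stage, assuming contour-plus-PCA features are already extracted; a hand-rolled two-class AdaBoost uses RBF-kernel SVMs as weak learners in place of the RBF networks named in the record:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def adaboost_rbf(X, y, rounds=10):
    # Minimal two-class AdaBoost with RBF-kernel SVMs as weak learners
    # (standing in for the RBF networks in the record); y must be in {-1, +1}.
    w = np.full(len(X), 1 / len(X))
    learners, alphas = [], []
    for _ in range(rounds):
        clf = SVC(kernel="rbf", gamma="scale").fit(X, y, sample_weight=w)
        pred = clf.predict(X)
        err = np.clip(w @ (pred != y).astype(float), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        w = w * np.exp(-alpha * y * pred)
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)
    return lambda Z: np.sign(sum(a * c.predict(Z) for a, c in zip(alphas, learners)))

rng = np.random.default_rng(8)
X = rng.normal(size=(80, 60))                  # stand-in for contour features
y = np.where(X[:, :5].sum(axis=1) > 0, 1, -1)
X10 = PCA(n_components=10).fit_transform(X)    # the PCA stage from the record
model = adaboost_rbf(X10, y)
print((model(X10) == y).mean())                # training accuracy of the ensemble
```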

  17. Selective amygdalohippocampectomy versus standard temporal lobectomy in patients with mesiotemporal lobe epilepsy and unilateral hippocampal sclerosis: post-operative facial emotion recognition abilities.

    Science.gov (United States)

    Wendling, Anne-Sophie; Steinhoff, Bernhard J; Bodin, Frédéric; Staack, Anke M; Zentner, Josef; Scholly, Julia; Valenti, Maria-Paula; Schulze-Bonhage, Andreas; Hirsch, Edouard

    2015-03-01

    Surgical treatment of mesial temporal lobe epilepsy (mTLE) patients involves removal of either the left or the right hippocampus. Since the mesial temporal lobe is responsible for emotion recognition abilities, we aimed to assess facial emotion recognition (FER) in two homogeneous patient cohorts that differed only in the surgical procedure administered, since anterior temporal lobectomy (ATL) or selective amygdalohippocampectomy (SAH) was performed independently of the underlying electroclinical conditions. The patient selection for the two respective surgical procedures was carried out retrospectively between 2000 and 2009 by two independent epilepsy centres, the Kork Epilepsy Centre, Germany and the University Hospital of Strasbourg, France. All included patients had presented with unilateral hippocampus sclerosis (HS) without associated dysplasia or white matter blurring and had become seizure-free postoperatively. Psychometric evaluation was carried out with the Ekman 60 Faces Test, and participants were screened for depression and psychosomatic symptoms with the SCL-90 R and the BDI. Thirty healthy volunteers participated as control subjects. Sixty patients were included, 27 had undergone SAH and 33 ATL. Patients and controls obtained comparable scores in FER for surprise, happiness, anger and sadness. Concerning fear and disgust, the patient group scored significantly worse. Left-sided operations led to the most pronounced impairment. The ATL group scored significantly worse for recognition of fear compared with SAH patients. Inversely, after SAH, scores for disgust were significantly lower than after ATL, independently of the side of resection. Unilateral temporal damage impairs FER. Different neurosurgical procedures may affect FER differently.

  18. Faciality Enactments, Schools of Recognition and Policies of Difference (In-Itself)

    Science.gov (United States)

    Webb, P. Taylor; Gulson, Kalervo N.

    2015-01-01

    This article discusses the idea of "difference" in relation to "schools of recognition." The analysis is based on a three-year study that mapped the development of the Africentric Alternative School in the Toronto District School Board. Within, we review the concept of difference and juxtapose it with Gilles Deleuze's concept…

  19. Remediation of Deficits in Recognition of Facial Emotions in Children with Autism Spectrum Disorders

    Science.gov (United States)

    Weinger, Paige M.; Depue, Richard A.

    2011-01-01

    This study evaluated the efficacy of the Mind Reading interactive computer software to remediate emotion recognition deficits in children with autism spectrum disorders (ASD). Six unmedicated children with ASD and 11 unmedicated non-clinical control subjects participated in the study. The clinical sample used the software for five sessions. The…

  20. 3D Face Recognition Benchmarks on the Bosphorus Database with Focus on Facial Expressions

    NARCIS (Netherlands)

    N. Alyuz; B. Gökberk; H. Dibeklioğlu; A. Savran; A.A. Salah (Albert Ali); L. Akarun; B. Sankur

    2008-01-01

    This paper presents an evaluation of several 3D face recognizers on the Bosphorus database, which was gathered for studies on expression and pose invariant face analysis. We provide identification results of three 3D face recognition algorithms, namely generic face template based ICP…

  1. Emotion Recognition in Children and Adolescents with Autism Spectrum Disorders

    Science.gov (United States)

    Kuusikko, Sanna; Haapsamo, Helena; Jansson-Verkasalo, Eira; Hurtig, Tuula; Mattila, Marja-Leena; Ebeling, Hanna; Jussila, Katja; Bolte, Sven; Moilanen, Irma

    2009-01-01

    We examined upper facial basic emotion recognition in 57 subjects with autism spectrum disorders (ASD) (M = 13.5 years) and 33 typically developing controls (M = 14.3 years) by using a standardized computer-aided measure (The Frankfurt Test and Training of Facial Affect Recognition, FEFA). The ASD group scored lower than controls on the total…

  3. Are faces processed like words? A diagnostic test for recognition by parts.

    Science.gov (United States)

    Martelli, Marialuisa; Majaj, Najib J; Pelli, Denis G

    2005-02-04

    Do we identify an object as a whole or by its parts? This simple question has been surprisingly hard to answer. It has been suggested that faces are recognized as wholes and words are recognized by parts. Here we answer the question by applying a test for crowding. In crowding, a target is harder to identify in the presence of nearby flankers. Previous work has described crowding between objects. We show that crowding also occurs between the parts of an object. Such internal crowding severely impairs perception, identification, and fMRI face-area activation. We apply a diagnostic test for crowding to a word and a face, and we find that the critical spacing of the parts required for recognition is proportional to distance from fixation and independent of size and kind. The critical spacing defines an isolation field around the target. Some objects can be recognized only when each part is isolated from the rest of the object by the critical spacing. In that case, recognition is by parts. Recognition is holistic if the observer can recognize the object even when the whole object fits within a critical spacing. Such an object has only one part. Multiple parts within an isolation field will crowd each other and spoil recognition. To assess the robustness of the crowding test, we manipulated familiarity through inversion and the face- and word-superiority effects. We find that threshold contrast for word and face identification is the product of two factors: familiarity and crowding. Familiarity increases sensitivity by a factor of 1.5, independent of eccentricity, while crowding attenuates sensitivity more and more as eccentricity increases. Our findings show that observers process words and faces in much the same way: the effects of familiarity and crowding do not distinguish between them. Words and faces are both recognized by parts, and their parts -- letters and facial features -- are recognized holistically. We propose that internal crowding be taken as the…

  4. Facial expression recognition based on fuzzy-LDA/CCA

    Institute of Scientific and Technical Information of China (English)

    周晓彦; 郑文明; 邹采荣; 赵力

    2008-01-01

    A novel fuzzy linear discriminant analysis method based on canonical correlation analysis (fuzzy-LDA/CCA) is presented and applied to facial expression recognition. A fuzzy method is used to evaluate the degree of class membership of each training sample. CCA is then used to establish the relationship between each facial image and the corresponding class membership vector, and the class membership vector of a test image is estimated using this relationship. Moreover, the fuzzy-LDA/CCA method is generalized to nonlinear discriminant analysis problems via the kernel method. The performance of the proposed method is demonstrated using real data.
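
    The core of this approach, linking feature vectors to class-membership vectors through CCA and classifying a test image by its estimated membership, can be sketched in a few lines. The fragment below is a minimal illustration rather than the authors' implementation: it uses synthetic features and crisp one-hot memberships where the paper uses fuzzy ones.

    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n_classes, n_train, n_feat = 6, 120, 50            # sizes are assumptions
    y_train = rng.integers(0, n_classes, n_train)
    X_train = rng.normal(size=(n_train, n_feat)) + y_train[:, None]  # toy class-dependent features

    # Class membership vectors: one-hot here, fuzzy in the paper.
    M_train = np.eye(n_classes)[y_train]

    # CCA links image features with membership vectors.
    cca = CCA(n_components=n_classes - 1)
    cca.fit(X_train, M_train)

    # Estimate the membership vector of a test sample; classify by its largest entry.
    x_test = rng.normal(size=(1, n_feat)) + 3          # a sample resembling class 3
    m_est = cca.predict(x_test)
    print("predicted expression class:", int(np.argmax(m_est)))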

  5. Facial Expression Analysis

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial components)…

  6. Research Progress of 3D Facial Expression Recognition Technology

    Institute of Scientific and Technical Information of China (English)

    魏永超; 庄夏; 傅强; 杜冬

    2015-01-01

    The rapid development of three-dimensional (3D) acquisition devices has greatly promoted research based on 3D data, and research results on 3D facial expression recognition, which takes 3D face data as its carrier, are constantly emerging. 3D facial expression recognition can largely overcome the pose and illumination problems of two-dimensional (2D) recognition. This paper systematically summarizes 3D facial expression recognition technologies, with emphasis on the key techniques of 3D expression, namely expression feature extraction, expression coding and classification, and expression databases, and gives some research suggestions about 3D facial expression recognition. 3D facial expression recognition technology basically meets requirements in terms of recognition rate, but its real-time performance needs further optimization. The content is of reference value for researchers in the field.

  7. The Research and Progress of 3D Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    严政; 潘志庚

    2016-01-01

    With the fast development of human-computer interaction and affective computing, facial expression recognition has become a research focus. 2D facial expression images are not robust to posture change and illumination variation; to solve these problems, researchers use 3D facial expression data to analyze expressions. Building on previous work, this survey summarizes alignment and tracking, expression databases, and feature extraction in 3D facial expression recognition, points out the focus, trends and limitations of facial expression recognition, and discusses future developments.

  8. Characteristics of facial expression recognition in children with Asperger syndrome

    Institute of Scientific and Technical Information of China (English)

    郭嘉; 静进; 邹小兵; 唐春

    2011-01-01

    Objective: To explore the ability and characteristics of facial expression recognition in children with Asperger syndrome (AS). Methods: Twenty-two male children meeting the AS criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV), and twenty normal children matched on chronological age and gender were tested with the Facial Expression Recognition Software System developed in this research, which takes recognition accuracy and response time under different presentation manners as analysis indexes. Results: The accuracy rates for upright and upper-half-face expressions were significantly lower in children with AS than in normal controls [upright: (60.4 ± 12.8)% vs. (73.8 ± 6.1)%, P < 0.001; upper: (53.3 ± 13.3)% vs. (62.9 ± 8.5)%, P = 0.009], and their response times were delayed (P < 0.05) [upright: (3.494 ± 0.570) s vs. (2.839 ± 0.415) s, P < 0.001; upper: …]. Children with AS recognized whole-face expressions only better than lower-half faces, whereas normal children recognized whole-face expressions better than both the upper and lower halves; both groups recognized upright (whole) faces better than inverted faces. Conclusion: Children with Asperger syndrome are poorer at recognizing facial expressions than normal children, but retain some holistic face-processing ability and show the same inverted-face effect as normal children.

  9. Facial expression recognition using biologically inspired features and SVM

    Institute of Scientific and Technical Information of China (English)

    穆国旺; 王阳; 郭蔚

    2014-01-01

    C1 features are introduced to facial expression recognition for static images, and a new algorithm for facial expression recognition based on biologically inspired features (BIFs) and SVM is proposed. C1 features of the facial images are extracted, the PCA+LDA method is used to reduce the dimensionality of the C1 features, and an SVM is used for classification of the expression. Experiments on the JAFFE and Extended Cohn-Kanade (CK+) facial expression data sets show the effectiveness and good performance of the algorithm.
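
    The PCA+LDA reduction followed by an SVM is a standard pipeline and is easy to reproduce. The sketch below is an illustration under assumed sizes rather than the paper's code; random class-dependent vectors stand in for the C1 features.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n_classes, n_samples, n_feat = 7, 210, 300         # 7 expressions as in JAFFE; sizes assumed
    y = rng.integers(0, n_classes, n_samples)
    X = rng.normal(size=(n_samples, n_feat)) + 2 * y[:, None]   # stand-in for C1 feature vectors

    clf = make_pipeline(
        PCA(n_components=40),                                    # coarse dimensionality reduction
        LinearDiscriminantAnalysis(n_components=n_classes - 1),  # keep discriminative axes
        SVC(kernel="rbf", C=10.0),                               # final expression classifier
    )
    clf.fit(X[:150], y[:150])
    print("held-out accuracy:", clf.score(X[150:], y[150:]))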

  10. Below and beyond the recognition of emotional facial expressions in alcohol dependence: from basic perception to social cognition

    Directory of Open Access Journals (Sweden)

    D’Hondt F

    2014-11-01

    Studies that have carried out experimental evaluation of emotional skills in alcohol-dependence have, up to now, been mainly focused on the exploration of emotional facial expressions (EFE) decoding. In the present paper, we provide some complements to the recent systematic literature review published by Donadon and de Lima Osório on this crucial topic. We also suggest research avenues that must, in our opinion, be considered in the coming years. More precisely, we propose, first, that a battery integrating a set of emotional tasks relating to different processes should be developed to better systemize EFE decoding measures in alcohol-dependence. Second, we propose to go below EFE recognition deficits and to seek the roots of those alterations, particularly by investigating the putative role played by early visual processing and vision–emotion interactions in the emotional impairment observed in alcohol-dependence. Third, we insist on the need to go beyond EFE recognition deficits by suggesting that they constitute only a part of wider emotional deficits in alcohol-dependence. Importantly, since the efficient decoding of emotions is a crucial ability for the development and maintenance of satisfactory interpersonal relationships, we suggest that disruption of this ability in alcohol-dependent individuals may have adverse consequences for their social integration. One way to achieve this research agenda would be to develop the field of affective and social neuroscience of alcohol-dependence, which could ultimately lead to major advances at both theoretical…

  11. Facial Expression Recognition Based on Automatic Segmentation of Feature Regions

    Institute of Scientific and Technical Information of China (English)

    张腾飞; 闵锐; 王保云

    2011-01-01

    To improve 3D facial expression feature-region segmentation, which is currently complex and time-consuming, an automatic feature-region segmentation method is presented. Facial feature points are detected by projection and curvature calculation and are used as the basis for automatic segmentation of facial expression feature regions. To obtain richer facial expression information, the Facial Action Coding System (FACS) coding rules are introduced to extend the extracted feature matrix, and facial expressions are recognized by combining classifiers. Experimental results on samples from a 3D facial expression database show that the method achieves a high recognition rate.

  12. Altering sensorimotor feedback disrupts visual discrimination of facial expressions.

    Science.gov (United States)

    Wood, Adrienne; Lupyan, Gary; Sherrin, Steven; Niedenthal, Paula

    2016-08-01

    Looking at another person's facial expression of emotion can trigger the same neural processes involved in producing the expression, and such responses play a functional role in emotion recognition. Disrupting individuals' facial action, for example, interferes with verbal emotion recognition tasks. We tested the hypothesis that facial responses also play a functional role in the perceptual processing of emotional expressions. We altered the facial action of participants with a gel facemask while they performed a task that involved distinguishing target expressions from highly similar distractors. Relative to control participants, participants in the facemask condition demonstrated inferior perceptual discrimination of facial expressions, but not of nonface stimuli. The findings suggest that somatosensory/motor processes involving the face contribute to the visual perceptual, and not just conceptual, processing of facial expressions. More broadly, our study contributes to growing evidence for the fundamentally interactive nature of the perceptual inputs from different sensory modalities.

  13. Efficient spatio-temporal local binary patterns for spontaneous facial micro-expression recognition.

    Directory of Open Access Journals (Sweden)

    Yandan Wang

    Micro-expression recognition is still in the preliminary stage, owing much to the numerous difficulties faced in the development of datasets. Since micro-expression is an important affective clue for clinical diagnosis and deceit analysis, much effort has gone into the creation of these datasets for research purposes. There are currently two publicly available spontaneous micro-expression datasets, SMIC and CASME II, both with baseline results released using the widely used dynamic texture descriptor LBP-TOP for feature extraction. Although LBP-TOP is popular and widely used, it is still not compact enough. In this paper, we draw further inspiration from the concept of LBP-TOP, which considers three orthogonal planes, by proposing two efficient approaches for feature extraction. The compact robust form described by the proposed LBP-Six Intersection Points (SIP) and a super-compact LBP-Three Mean Orthogonal Planes (MOP) not only preserve the essential patterns, but also reduce the redundancy that affects the discriminability of the encoded features. Through a comprehensive set of experiments, we demonstrate the strengths of our approaches in terms of recognition accuracy and efficiency.
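
    The LBP-TOP baseline that both proposed descriptors compress works by encoding local binary patterns on the XY, XT and YT planes of a video volume and concatenating the histograms. The following simplified sketch (one central slice per plane, a synthetic clip, and none of the paper's SIP/MOP variants) conveys the idea only.

    import numpy as np
    from skimage.feature import local_binary_pattern

    def lbp_top_like(volume, P=8, R=1, bins=59):
        """volume: (T, H, W) grayscale clip; returns concatenated plane histograms."""
        T, H, W = volume.shape
        planes = [
            volume[T // 2],         # XY plane: appearance
            volume[:, H // 2, :],   # XT plane: horizontal motion texture
            volume[:, :, W // 2],   # YT plane: vertical motion texture
        ]
        feats = []
        for p in planes:
            codes = local_binary_pattern(p, P, R, method="nri_uniform")
            hist, _ = np.histogram(codes, bins=bins, range=(0, bins), density=True)
            feats.append(hist)
        return np.concatenate(feats)

    clip = np.random.default_rng(2).integers(0, 256, (30, 64, 64), dtype=np.uint8)  # stand-in clip
    print(lbp_top_like(clip).shape)   # (177,) = 3 planes x 59 uniform-pattern bins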

  14. Recognizing Action Units for Facial Expression Analysis.

    Science.gov (United States)

    Tian, Ying-Li; Kanade, Takeo; Cohn, Jeffrey F

    2001-02-01

    Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression into action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as the inputs, a group of action units (neutral expression, six upper face AUs and 10 lower face AUs) are recognized whether they occur alone or in combinations. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested by using independent image databases collected and FACS-coded for ground-truth by different research teams.

  15. Activation of the right fronto-temporal cortex during maternal facial recognition in young infants.

    Science.gov (United States)

    Carlsson, Jakob; Lagercrantz, Hugo; Olson, Linus; Printz, Gordana; Bartocci, Marco

    2008-09-01

    Within the first days of life infants can already recognize their mother. This ability is based on several sensory mechanisms and increases during the first year of life, having its most crucial phase between 6 and 9 months when cortical circuits develop. The underlying cortical structures involved in this process are still unknown. Herein we report how the prefrontal cortices of healthy 6- to 9-month-old infants react to the sight of their mother's face compared to that of an unknown female face. Concentrations of oxygenated haemoglobin [HbO2] and deoxygenated haemoglobin [HHb] were measured using near infrared spectroscopy (NIRS) in both fronto-temporal and occipital areas on the right side during the exposure to maternal and unfamiliar faces. The infants exhibited a distinct and significantly higher activation-related haemodynamic response in the right fronto-temporal cortex following exposure to the image of their mother's face ([HbO2] 0.75 micromol/L, p …), pointing to maternal face recognition processes at this age.

  16. Facial Expression Recognition System based on Gabor filter

    Institute of Scientific and Technical Information of China (English)

    宋小双

    2016-01-01

    Facial expression recognition has potential applications in many aspects of day-to-day life not yet realized, owing to the absence of effective expression recognition techniques. With the prevalence of computerization, computer-based facial recognition has also become popular. This paper uses MATLAB as a development tool to study facial expressions. Sub-sampling and normalization are selected to preprocess the original expression images and locate the facial features. Gabor wavelets are then used to filter the preprocessed images, Euclidean distances are computed between the filtered images, and finally a nearest neighbour method finds the closest class, identifying the emotion type corresponding to the expression image.
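
    The pipeline this record describes (the original system was written in MATLAB) translates directly into a short script: normalize, filter with a small Gabor bank, then assign the label of the nearest gallery image in Euclidean distance. The sketch below uses toy images and assumed filter parameters, not the system's actual settings.

    import numpy as np
    import cv2

    def gabor_features(img, size=64):
        img = cv2.resize(img, (size, size)).astype(np.float32)
        img = (img - img.mean()) / (img.std() + 1e-8)       # grey-level normalization
        feats = []
        for theta in np.arange(0, np.pi, np.pi / 4):        # 4 orientations (assumed)
            kern = cv2.getGaborKernel((15, 15), 4.0, theta, 8.0, 0.5)
            resp = cv2.filter2D(img, cv2.CV_32F, kern)
            feats.append(cv2.resize(np.abs(resp), (8, 8)).ravel())  # sub-sampled responses
        return np.concatenate(feats)

    rng = np.random.default_rng(3)
    train_imgs = rng.random((10, 96, 96)).astype(np.float32)        # stand-in expression images
    train_labels = ["happy", "sad", "neutral", "angry", "surprise"] * 2
    gallery = np.stack([gabor_features(im) for im in train_imgs])

    probe = gabor_features(train_imgs[4] + 0.01 * rng.random((96, 96), dtype=np.float32))
    nearest = int(np.argmin(np.linalg.norm(gallery - probe, axis=1)))
    print("predicted emotion:", train_labels[nearest])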

  17. Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2017-01-01

    This paper proposes a novel framework for facial expression analysis using dynamic and static information in video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation and separate the facial region from the background. Then, a spatial-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to give descriptors that reflect the static and dynamic texture information of facial expressions. Finally, a one-versus-one multiclass support vector machine (SVM) classifier is applied to classify facial expressions. Experiments on the Cohn-Kanade (CK+) facial expression dataset illustrate that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, the proposed framework performs better.

  18. Exploration of the Relationship between N170 and Facial Recognition

    Institute of Scientific and Technical Information of China (English)

    田海鹏

    2015-01-01

    In facial recognition, the brain generates a negative wave, the N170, at about 170 ms. Whether the N170 possesses facial specificity, and whether it reflects structural encoding or feature encoding, have long been debated. This paper elaborates on these two debates and proposes using the N170 as an index of the other-race effect, with the aim of exploring the neural mechanism of the N170 in emotional face recognition.

  19. Can the usage of human growth hormones affect facial appearance and the accuracy of face recognition systems?

    Science.gov (United States)

    Rose, Jake; Martin, Michael; Bourlai, Thirimachos

    2014-06-01

    In law enforcement and security applications, the acquisition of face images is critical in producing key trace evidence for the successful identification of potential threats. The goal of the study is to demonstrate that steroid usage significantly affects human facial appearance and hence, the performance of commercial and academic face recognition (FR) algorithms. In this work, we evaluate the performance of state-of-the-art FR algorithms on two unique face image datasets of subjects before (gallery set) and after (probe set) steroid (or human growth hormone) usage. For the purpose of this study, datasets of 73 subjects were created from multiple sources found on the Internet, containing images of men and women before and after steroid usage. Next, we geometrically pre-processed all images of both face datasets. Then, we applied image restoration techniques on the same face datasets, and finally, we applied FR algorithms in order to match the pre-processed face images of our probe datasets against the face images of the gallery set. Experimental results demonstrate that only a specific set of FR algorithms obtain the most accurate results (in terms of the rank-1 identification rate). This is because several factors influence the efficiency of face matchers, including (i) the time lapse between the before and after face photos, (ii) the usage of different drugs (e.g. Dianabol, Winstrol, and Decabolan), (iii) the usage of different cameras to capture face images, and finally, (iv) the variability of standoff distance, illumination and other noise factors (e.g. motion noise). All of the previously mentioned complicated scenarios make clear that cross-scenario matching is a very challenging problem and, thus, further investigation is required.

  20. Using Automatic Speech Recognition Technology with Elicited Oral Response Testing

    Science.gov (United States)

    Cox, Troy L.; Davies, Randall S.

    2012-01-01

    This study examined the use of automatic speech recognition (ASR) scored elicited oral response (EOR) tests to assess the speaking ability of English language learners. It also examined the relationship between ASR-scored EOR and other language proficiency measures and the ability of the ASR to rate speakers without bias to gender or native…

  1. Interaction between facial expression and color

    OpenAIRE

    Kae Nakajima; Tetsuto Minami; Shigeki Nakauchi

    2017-01-01

    Facial color varies depending on emotional state, and emotions are often described in relation to facial color. In this study, we investigated whether the recognition of facial expressions was affected by facial color and vice versa. In the facial expression task, expression morph continua were employed: fear-anger and sadness-happiness. The morphed faces were presented in three different facial colors (bluish, neutral, and reddish color). Participants identified a facial expression between t...

  2. The different faces of one's self: an fMRI study into the recognition of current and past self-facial appearances.

    Science.gov (United States)

    Apps, Matthew A J; Tajadura-Jiménez, Ana; Turley, Grainne; Tsakiris, Manos

    2012-11-15

    Mirror self-recognition is often considered as an index of self-awareness. Neuroimaging studies have identified a neural circuit specialised for the recognition of one's own current facial appearance. However, faces change considerably over a lifespan, highlighting the necessity for representations of one's face to continually be updated. We used fMRI to investigate the different neural circuits involved in the recognition of the childhood and current, adult, faces of one's self. Participants viewed images of either their own face as it currently looks morphed with the face of a familiar other or their childhood face morphed with the childhood face of the familiar other. Activity in areas which have a generalised selectivity for faces, including the inferior occipital gyrus, the superior parietal lobule and the inferior temporal gyrus, varied with the amount of current self in an image. Activity in areas involved in memory encoding and retrieval, including the hippocampus and the posterior cingulate gyrus, and areas involved in creating a sense of body ownership, including the temporo-parietal junction and the inferior parietal lobule, varied with the amount of childhood self in an image. We suggest that the recognition of one's own past or present face is underpinned by different cognitive processes in distinct neural circuits. Current self-recognition engages areas involved in perceptual face processing, whereas childhood self-recognition recruits networks involved in body ownership and memory processing.

  4. Pilgrims Face Recognition Dataset -- HUFRD

    OpenAIRE

    Aly, Salah A.

    2012-01-01

    In this work, we define a new pilgrims face recognition dataset, called the HUFRD dataset. The newly developed dataset presents various pilgrims' images taken from outside the Holy Masjid El-Harram in Makkah during the 2011-2012 Hajj and Umrah seasons. This dataset will be used to test our developed facial recognition and detection algorithms, as well as to assist in the missing-and-found recognition system \cite{crowdsensing}.

  5. Facial blindsight

    Directory of Open Access Journals (Sweden)

    Marco Solcà

    2015-09-01

    Blindsight denotes unconscious residual visual capacities in the context of an inability to consciously recollect or identify visual information. It has been described for color and shape discrimination, movement and facial emotion recognition. The present study investigates a patient suffering from cortical blindness whilst maintaining select residual abilities in face detection. Our patient presented the capacity to distinguish between jumbled/normal faces, known/unknown faces and famous people's categories, although he failed to explicitly recognize or describe them. Conversely, performance was at chance level when asked to categorize non-facial stimuli. Our results provide clinical evidence for the notion that some aspects of facial processing can occur without perceptual awareness, possibly using direct tracts from the thalamus to associative visual cortex, bypassing the primary visual cortex.

  6. Nonspecific Facial Expression Recognition via Sparse Representation and Weighted LBP

    Institute of Scientific and Technical Information of China (English)

    蒋行国; 冯彬; 李志丰

    2014-01-01

    In person-independent facial expression recognition, the utilization rate of facial expression texture features is not high. To address this problem, a facial expression recognition method combining an improved weighted local binary pattern (LBP) and sparse representation is proposed. To use the local texture information of the facial organs effectively, the improved weighted LBP operator first extracts local texture features; the extracted features then form the training samples, and expressions are finally classified via sparse representation theory. Experimental results on the JAFFE and CK face databases show a clear improvement in person-independent facial expression recognition.
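
    The sparse-representation step can be illustrated independently of the weighted-LBP features: training feature vectors form a dictionary, a test vector is coded sparsely over it, and the class whose atoms best reconstruct the test vector wins. The sketch below is a generic illustration with synthetic data, using Lasso as the l1 solver, not the paper's implementation.

    import numpy as np
    from sklearn.linear_model import Lasso
    from sklearn.preprocessing import normalize

    rng = np.random.default_rng(4)
    n_classes, per_class, n_feat = 6, 8, 100
    labels = np.repeat(np.arange(n_classes), per_class)
    # Dictionary A: one unit-norm column per training feature vector.
    A = normalize(rng.normal(size=(n_classes * per_class, n_feat)) + labels[:, None]).T

    y = A[:, 10] + 0.05 * rng.normal(size=n_feat)            # noisy copy of a class-1 sample

    coef = Lasso(alpha=0.01, max_iter=5000).fit(A, y).coef_  # sparse code over the training set

    # Classify by the class whose training samples best reconstruct y.
    residuals = [np.linalg.norm(y - A[:, labels == c] @ coef[labels == c])
                 for c in range(n_classes)]
    print("predicted expression class:", int(np.argmin(residuals)))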

  7. FACIAL EXPRESSION RECOGNITION BASED ON COMBINATION OF DIFFERENCE IMAGE AND GABOR WAVELET

    Institute of Scientific and Technical Information of China (English)

    丁志起; 赵晖

    2011-01-01

    In this paper we introduce a facial expression feature extraction algorithm combining difference images and the Gabor wavelet transform, and use a support vector machine (SVM) to recognise facial expressions. For a given static grey image containing facial expression information, pre-processing is executed first; the expression sub-regions containing the eyes and the mouth are cut from the face to obtain their difference images. We then extract Gabor feature vectors of the difference images, employ downsampling to reduce the dimensionality of the eigenvectors, and normalise the resulting data; finally an SVM classifies the facial expression. This combined method has been compared with a recognition method that extracts Gabor features only from the expression sub-regions; the result indicates that the combined method has better recognition performance.

  8. Facial Data Field

    Institute of Scientific and Technical Information of China (English)

    WANG Shuliang; YUAN Hanning; CAO Baohua; WANG Dakui

    2015-01-01

    Expressional face recognition is a challenge in computer vision for complex expressions. The facial data field is proposed to recognize expression. Fundamentals are presented in the methodology of face recognition upon the data field, and subsequently technical algorithms are given, including normalizing faces, generating the facial data field, extracting feature points in partitions, assigning weights and recognizing faces. A case is studied with the JAFFE database for verification. Results indicate that the proposed method is suitable and effective in expressional face recognition, with a whole average recognition rate of up to 94.3%. In conclusion, the data field is considered a valuable alternative for pattern recognition.

  9. Drawing Style Recognition of Facial Sketches Based on Multiple Kernel Learning

    Institute of Scientific and Technical Information of China (English)

    张铭津; 李洁; 王楠楠

    2015-01-01

    The drawing-style recognition of facial sketches is widely used for painting authentication and criminal investigation. A drawing-style recognition algorithm for facial sketches based on multiple kernel learning is presented. First, following the way art critics judge the drawing style of a sketch from the treatment of its parts, five parts are extracted from the facial sketch: the face, left eye, right eye, nose and mouth. Then, based on artists' different understandings of light and shadow on a face and their various pencil strokes, a gray histogram feature, gray moment feature, speeded-up robust feature and multiscale local binary pattern feature are extracted from each part. Finally, the different parts and features are integrated by multiple kernel learning and the drawing styles of the facial sketches are classified. Experimental results demonstrate that the proposed algorithm performs well and obtains higher recognition rates.
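
    The kernel-combination idea at the heart of multiple kernel learning can be shown compactly: one kernel per feature type (or facial part), summed with weights and fed to an SVM. True MKL learns the weights jointly with the classifier; the sketch below fixes them and uses synthetic features, so it illustrates the combination only.

    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(5)
    y = rng.integers(0, 3, 90)                          # 3 drawing styles (assumed)
    F1 = rng.normal(size=(90, 40)) + y[:, None]         # e.g. grey-histogram features
    F2 = rng.normal(size=(90, 64)) + 0.5 * y[:, None]   # e.g. SURF-like features

    w = (0.6, 0.4)                                      # fixed weights; true MKL would learn these
    K = w[0] * rbf_kernel(F1) + w[1] * polynomial_kernel(F2, degree=2)

    clf = SVC(kernel="precomputed").fit(K[:60, :60], y[:60])
    K_test = (w[0] * rbf_kernel(F1[60:], F1[:60])
              + w[1] * polynomial_kernel(F2[60:], F2[:60], degree=2))
    print("held-out accuracy:", clf.score(K_test, y[60:]))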

  10. The Effect of Mozart Music on Children's Facial Expression Recognition

    Institute of Scientific and Technical Information of China (English)

    王玲; 赵蕾; 卢英俊

    2012-01-01

    We studied 3-5-year-old children's recognition of facial expressions (happy, sad and neutral) in response to Mozart music as well as to music of different arousal degrees and emotional types. The results showed that, compared with other music of high arousal and positive emotion, Mozart music, with its highly structured and cyclical features, actually interfered with children's facial expression recognition, whereas listening to music of low arousal and negative emotion helped children's brains reach a proper level of arousal and a suitable emotional state, thereby promoting facial expression recognition.

  11. A Program Recognition and Auto-Testing Approach

    Directory of Open Access Journals (Sweden)

    Wen C. Pai

    2003-06-01

    The goals of software testing are to assess and improve the quality of the software. An important problem in software testing is to determine whether a program has been tested enough against a testing criterion. A technology for reconstructing program structure and generating test data automatically will help software developers improve software quality efficiently. Program recognition and transformation is a technology that can help maintainers recover a program's structure and consequently carry out software testing properly. In this paper, a methodology that follows the logic of a program and transforms it into the original program graph is proposed, together with an approach to derive testing paths automatically so that every block of the program is tested. A real example is presented to illustrate and show that the methodology is practicable. The proposed methodology allows developers to recover a program's design and perform software maintenance properly.

  12. Below and beyond the recognition of emotional facial expressions in alcohol dependence: from basic perception to social cognition

    National Research Council Canada - National Science Library

    D'Hondt, Fabien; Campanella, Salvatore; Kornreich, Charles; Philippot, Pierre; Maurage, Pierre

    2014-01-01

    Studies that have carried out experimental evaluation of emotional skills in alcohol-dependence have, up to now, been mainly focused on the exploration of emotional facial expressions (EFE) decoding...

  13. Temporal Integration Effects in Facial Expression Recognition under Different Temporal Durations

    Institute of Scientific and Technical Information of China (English)

    陈本友; 黄希庭

    2012-01-01

    …sad. In Experiment 1, each part was presented for 17 ms, with five intervals of 50, 100, 200, 600 and 900 ms between parts in the part-face condition, and a whole face presented for 50 ms as the baseline condition. Three factors were manipulated: inter-stimulus interval (50, 100, 200, 600, 900 ms), facial expression category (anger, happy and sad), and presentation orientation (upright vs. inverted). A total of 72 participants were divided into six groups, and each group was randomly assigned to one of the six conditions to be tested individually. Each participant completed all possible combinations of the levels of facial expression category and presentation orientation. In Experiment 2, each participant completed the same task; in addition to the two factors used in Experiment 1, the third factor was changed so that each part was presented for 14, 50, 100 or 200 ms with a part interval of 50 ms, and the baseline condition was presented for 17 ms. The results showed inversion effects at short intervals (50-200 ms) and short presentation durations (14-100 ms) in the part-face expression condition, and the effects were substantially reduced at long intervals (600-900 ms) and long presentation durations (200 ms). These results demonstrate that participants could store temporally separated facial expression parts in a short-term visual buffer and integrate them into a single, unified facial expression. Furthermore, temporal integration performance differed significantly across facial expression categories, with happy expressions recognized more easily than others. All the results suggest that the temporal integration of facial expressions is influenced by multiple factors, including temporal structure, such as inter-stimulus interval and stimulus presentation duration, and stimulus features, implicating both iconic memory and long-term memory as possible cognitive processes…

  14. Specificity data for the b Test, Dot Counting Test, Rey-15 Item Plus Recognition, and Rey Word Recognition Test in monolingual Spanish-speakers.

    Science.gov (United States)

    Robles, Luz; López, Enrique; Salazar, Xavier; Boone, Kyle B; Glaser, Debra F

    2015-01-01

    The current study provides specificity data on a large sample (n = 115) of young to middle-aged, male, monolingual Spanish speakers of lower educational level and low acculturation to mainstream US culture for four neurocognitive performance validity tests (PVTs): the Dot Counting, the b Test, Rey Word Recognition, and Rey 15-Item Plus Recognition. Individuals with 0 to 6 years of education performed more poorly than did participants with 7 to 10 years of education on several Rey 15-Item scores (combination equation, recall intrusion errors, and recognition false positives), Rey Word Recognition total correct, and E-score and omission errors on the b Test, but no effect of educational level was observed for Dot Counting Test scores. Cutoff scores are provided that maintain approximately 90% specificity for the education subgroups separately. Some of these cutoffs match, or are even more stringent than, those recommended for use in US test takers who are primarily Caucasian, are tested in English, and have a higher educational level (i.e., Rey Word Recognition correct false-positive errors; Rey 15-Item recall intrusions and recognition false-positive errors; b Test total time; and Dot Counting E-score and grouped dot counting time). Thus, performance on these PVT variables in particular appears relatively robust to cultural/language/educational factors.

  15. Face recognition by weighted fusion of facial features

    Institute of Scientific and Technical Information of China (English)

    孙劲光; 孟凡宇

    2015-01-01

    The accuracy of traditional face recognition algorithms is low under unconstrained conditions. To solve this problem, we propose a new method based on deep learning and the weighted fusion of facial features (DLWF+). First, facial feature points are divided into five regions using an active shape model, and different facial components corresponding to those feature points are sampled. A corresponding deep belief network (DBN) is then trained on these regional samples to obtain optimal network parameters. The five regional samples and the entire facial image are then input into corresponding neural networks to adjust the network weights and complete the construction of the sub-networks. Finally, using softmax regression, we obtain six similarity vectors for the different components; these six vectors form a similarity matrix, which is multiplied by a weight vector to derive the final recognition result. Recognition accuracy was 97% and 91.63% on the ORL and WFL face databases, respectively. Compared with traditional recognition algorithms such as SVM, DBN, PCA, and FIP+LDA, recognition rates on both databases were improved under both constrained and unconstrained conditions. On the basis of…
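
    The final fusion step, a similarity matrix multiplied by a weight vector, reduces to one line of linear algebra. The sketch below illustrates it with invented numbers; the region order and the weight values are assumptions, not the paper's learned parameters.

    import numpy as np

    # Rows: whole face plus five part sub-networks (assumed order); columns: enrolled identities.
    similarity = np.array([
        [0.81, 0.10, 0.05, 0.04],   # whole face
        [0.60, 0.25, 0.10, 0.05],   # left eye
        [0.62, 0.20, 0.13, 0.05],   # right eye
        [0.55, 0.30, 0.10, 0.05],   # nose
        [0.70, 0.15, 0.10, 0.05],   # mouth
        [0.50, 0.35, 0.10, 0.05],   # chin
    ])
    weights = np.array([0.35, 0.13, 0.13, 0.13, 0.13, 0.13])   # invented weights, sum to 1

    fused = weights @ similarity     # one fused score per enrolled identity
    print("recognized identity:", int(np.argmax(fused)))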

  16. Facial emotion recognition in bipolar disorder: a critical review

    Directory of Open Access Journals (Sweden)

    Cristiana Castanho de Almeida Rocca

    2009-06-01

    OBJECTIVE: Literature review of the controlled studies of the last 18 years on emotion recognition deficits in bipolar disorder. METHOD: A bibliographical search for controlled studies with samples larger than 10 participants, from 1990 to June 2008, was completed in Medline, Lilacs, PubMed and ISI; thirty-two papers were evaluated. RESULTS: Euthymic bipolar disorder patients presented impairment in recognizing disgust and fear; manic patients showed difficulty recognizing fearful and sad faces. Pediatric bipolar disorder patients and children at risk presented impairment in their capacity to recognize emotions in adult and child faces. Bipolar disorder patients were more accurate in recognizing facial emotions than schizophrenic patients. DISCUSSION: Bipolar disorder patients present impaired recognition of disgust, fear and sadness that can be partially attributed to mood state. In mania, they have difficulty recognizing fear and disgust. Bipolar disorder patients were more accurate in recognizing emotions than depressive and schizophrenic patients. Bipolar disorder children tend to misjudge extreme facial expressions as being moderate or mild in intensity. CONCLUSION: Affective and cognitive deficits in bipolar disorder vary according to mood state. Follow-up studies re-testing bipolar disorder patients after recovery are needed in order to investigate whether these abnormalities reflect a state or trait marker and can be considered an endophenotype. Future studies should aim at standardizing tasks and designs.

  17. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories.

    Science.gov (United States)

    Wang, Qiandong; Xiao, Naiqi G; Quinn, Paul C; Hu, Chao S; Qian, Miao; Fu, Genyue; Lee, Kang

    2015-02-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese, Caucasian, and racially ambiguous faces. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time.

  18. Facial expression recognition based on Gabor wavelet transform

    Institute of Scientific and Technical Information of China (English)

    王甫龙; 薄华

    2012-01-01

    To enable computers to recognize facial expressions better, a facial expression recognition method based on the Gabor wavelet transform is discussed. First, a given static grey image containing facial expression information is pre-processed, including identification of the pure facial expression region and size and grey-scale normalization. Features are then extracted with the two-dimensional Gabor wavelet transform, and the fast PCA method described in this paper is used for an initial reduction of the dimensionality of the Gabor features. In the low-dimensional space, the Fisher criterion (FLD) is used to obtain the features useful for classification, and finally an SVM classifier sorts the facial expressions. Experimental results show that, compared with conventional methods, this method identifies faster, meets real-time requirements, is robust, and achieves a higher recognition rate.

  19. Facial Expression Recognition Based on Gabor Feature and Adaboost

    Institute of Scientific and Technical Information of China (English)

    刘燚; 高智勇; 王军

    2011-01-01

    To improve the recognition rate of facial expressions and enhance classifier performance, an approach is proposed that performs facial expression recognition (FER) by extracting Gabor features from facial expression images and combining them with the Adaboost algorithm. The Gabor filter is an important tool for facial expression feature extraction, while the Adaboost algorithm combines a series of weak classifiers into a final strong classifier. The multi-class problem of expression recognition is handled in a one-versus-one manner, producing k(k-1)/2 strong classifiers in total (k being the number of classes); the strong classifiers are cascaded to achieve multi-class classification of facial expressions. Experimental results show that, relative to other recognition methods such as the MVBoost algorithm, the recognition accuracy of this method is greatly improved.
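
    The one-versus-one decomposition is mechanical: k classes yield k(k-1)/2 pairwise binary classifiers whose votes decide the final label. The sketch below illustrates this with scikit-learn's wrapper around AdaBoost, using synthetic vectors in place of Gabor features; it is an illustration of the decomposition, not the paper's cascade.

    import numpy as np
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.multiclass import OneVsOneClassifier

    rng = np.random.default_rng(6)
    k, n, d = 7, 350, 60                         # 7 expression classes -> 21 pairwise classifiers
    y = rng.integers(0, k, n)
    X = rng.normal(size=(n, d)) + y[:, None]     # synthetic stand-in for Gabor feature vectors

    ovo = OneVsOneClassifier(AdaBoostClassifier(n_estimators=50)).fit(X[:300], y[:300])
    print("pairwise classifiers:", len(ovo.estimators_))   # 21 = 7*6/2
    print("held-out accuracy:", ovo.score(X[300:], y[300:]))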

  20. Using Computers for Assessment of Facial Features and Recognition of Anatomical Variants that Result in Unfavorable Rhinoplasty Outcomes

    Directory of Open Access Journals (Sweden)

    Tarik Ozkul

    2008-04-01

    Rhinoplasty and facial plastic surgery are among the most frequently performed surgical procedures in the world. Although the underlying anatomical features of the nose and face are very well known, performing a successful facial surgery requires not only surgical skill but also aesthetic talent from the surgeon. Sculpting facial features surgically in correct proportions so as to end up with an aesthetically pleasing result is highly difficult. To further complicate the matter, some patients have anatomical features that affect the outcome of rhinoplasty negatively. If they go undetected, these anatomical variants jeopardize the surgery, causing unexpected rhinoplasty outcomes. In this study, a model is developed with the aid of artificial intelligence tools that analyses the facial features of the patient from a photograph and generates an index of the "appropriateness" of the facial features and an index of the existence of anatomical variants that affect rhinoplasty negatively. The software tool developed is intended to detect the variants and warn the surgeon before the surgery; another purpose of the tool is to generate an objective score to assess the outcome of the surgery.