WorldWideScience

Sample records for face emotion recognition

  1. Emotion-independent face recognition

    Science.gov (United States)

    De Silva, Liyanage C.; Esther, Kho G. P.

    2000-12-01

    Current face recognition techniques tend to work well when recognizing faces under small variations in lighting, facial expression and pose, but deteriorate under more extreme conditions. In this paper, a face recognition system is developed to recognize faces of known individuals despite variations in facial expression due to different emotions. The eigenface approach is used for feature extraction. Classification methods include Euclidean distance, backpropagation neural network and generalized regression neural network. These methods yield 100% recognition accuracy when the training database is representative, containing, for each person, one image of the peak expression of each emotion other than neutral. To achieve this accuracy, all feature vectors of the training set must be used, both for comparison in the Euclidean distance method and for training the neural networks. These results are obtained for a face database consisting of only four persons.
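
    The pipeline described above (eigenface feature extraction followed by nearest-neighbour matching) is easy to sketch. The snippet below is a minimal illustration under assumed conditions, not the authors' implementation: the data layout, the number of components, and the Euclidean nearest-neighbour classifier stand in for whatever the original system used.

    ```python
    import numpy as np

    def fit_eigenfaces(train, n_components=10):
        """train: (n_images, h*w) matrix of flattened grayscale faces."""
        mean = train.mean(axis=0)
        centered = train - mean
        # The right singular vectors of the centered data are the eigenfaces.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        basis = vt[:n_components]        # (n_components, h*w)
        weights = centered @ basis.T     # one feature vector per training image
        return mean, basis, weights

    def recognize(face, mean, basis, weights, labels):
        """Classify a flattened face by Euclidean distance in eigenface space."""
        w = (face - mean) @ basis.T
        dists = np.linalg.norm(weights - w, axis=1)
        return labels[int(np.argmin(dists))]
    ```

    Training on one peak-expression image per emotion per person, as the abstract specifies, then amounts to stacking those images into `train` and keeping every resulting feature vector for the distance comparison.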

  2. Metacognition of emotional face recognition.

    Science.gov (United States)

    Kelly, Karen J; Metcalfe, Janet

    2011-08-01

    While humans are adept at recognizing emotional states conveyed by facial expressions, the current literature suggests that they lack accurate metacognitions about their performance in this domain. This finding comes from global trait-based questionnaires that assess the extent to which an individual perceives him or herself as empathic, as compared to other people. Those who rate themselves as empathically accurate are no better than others at recognizing emotions. Metacognition of emotion recognition can also be assessed using relative measures that evaluate how well a person thinks s/he has understood the emotion in a particular facial display as compared to other displays. While this is the most common method of metacognitive assessment of people's judgments of learning or their feelings of knowing, this kind of metacognition, "relative meta-accuracy", has not been studied within the domain of emotion. As well as asking for global metacognitive judgments, we asked people to provide relative, trial-by-trial prospective and retrospective judgments concerning whether they would be right or wrong in recognizing the expressions conveyed in particular facial displays. Our question was: do people know when they will be correct in recognizing an expression, and do they know when they do not know? Although we, like others, found that global meta-accuracy was unpredictive of performance, relative meta-accuracy, given by the correlation between participants' trial-by-trial metacognitive judgments and performance on each item, was highly accurate both on the Mind in the Eyes task (Experiment 1) and on the Ekman Emotional Expression Multimorph task (Experiment 2).
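
    As a concrete reading of "relative meta-accuracy" as defined above (the within-participant correlation between trial-by-trial judgments and performance), here is a toy sketch. The use of a plain Pearson correlation is an assumption made for illustration; metamemory studies often report Goodman-Kruskal gamma instead.

    ```python
    import numpy as np

    def relative_meta_accuracy(judgments, correct):
        """judgments: per-trial confidence ratings; correct: 0/1 accuracy per trial."""
        judgments = np.asarray(judgments, dtype=float)
        correct = np.asarray(correct, dtype=float)
        return float(np.corrcoef(judgments, correct)[0, 1])

    # Example: confidence that tracks accuracy yields a high score (~0.89).
    print(relative_meta_accuracy([4, 1, 3, 2], [1, 0, 1, 0]))
    ```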

  3. Serotonergic modulation of face-emotion recognition.

    Science.gov (United States)

    Del-Ben, C M; Ferreira, C A Q; Alves-Neto, W C; Graeff, F G

    2008-04-01

    Facial expressions of basic emotions have been widely used to investigate the neural substrates of emotion processing, but little is known about the exact meaning of subjective changes provoked by perceiving facial expressions. Our assumption was that fearful faces would be related to the processing of potential threats, whereas angry faces would be related to the processing of proximal threats. Experimental studies have suggested that serotonin modulates the brain processes underlying defensive responses to environmental threats, facilitating risk assessment behavior elicited by potential threats and inhibiting fight or flight responses to proximal threats. In order to test these predictions about the relationship between fearful and angry faces and defensive behaviors, we carried out a review of the literature about the effects of pharmacological probes that affect 5-HT-mediated neurotransmission on the perception of emotional faces. The hypothesis that angry faces would be processed as a proximal threat and that, as a consequence, their recognition would be impaired by an increase in 5-HT function was not supported by the results reviewed. In contrast, most of the studies that evaluated the behavioral effects of serotonin challenges showed that increased 5-HT neurotransmission facilitates the recognition of fearful faces, whereas its decrease impairs the same performance. These results agree with the hypothesis that fearful faces are processed as potential threats and that 5-HT enhances this brain processing.

  4. Recognition of Face and Emotional Facial Expressions in Autism

    Directory of Open Access Journals (Sweden)

    Muhammed Tayyib Kadak

    2013-03-01

    Autism is a genetically transmitted neurodevelopmental disorder characterized by severe and persistent deficits in many areas of interpersonal relations, such as communication, social interaction, and emotional responsiveness. Patients with autism show deficits in face recognition, eye contact, and the recognition of emotional expressions. Both face recognition and the recognition of emotional facial expressions depend on face processing. Structural and functional impairments in the fusiform gyrus, amygdala, superior temporal sulcus, and other brain regions lead to deficits in the recognition of faces and facial emotions; studies therefore suggest that face processing deficits underlie the problems with social interaction and emotion in autism. Studies have revealed that children with autism have problems recognizing facial expressions and use the mouth region more than the eye region. It has also been shown that autistic patients interpret ambiguous expressions as negative emotions. Deficits at various stages of face processing, such as gaze detection, face identification, and the recognition of emotional expressions, have so far been documented in autism. Social interaction impairments in autism spectrum disorders may thus originate from face processing deficits during infancy, childhood, and adolescence. Face recognition and the recognition of emotional facial expressions could be shaped both automatically, by orienting towards faces after birth, and by "learning" processes during development, such as identity and emotion processing. This article reviews the neurobiological basis of face processing and of the recognition of emotional facial expressions during normal development and in autism.

  5. Emotional Recognition in Autism Spectrum Conditions from Voices and Faces

    Science.gov (United States)

    Stewart, Mary E.; McAdam, Clair; Ota, Mitsuhiko; Peppe, Sue; Cleland, Joanne

    2013-01-01

    The present study reports on a new vocal emotion recognition task and assesses whether people with autism spectrum conditions (ASC) perform differently from typically developing individuals on tests of emotional identification from both the face and the voice. The new test of vocal emotion contained trials in which the vocal emotion of the sentence…

  6. Neural correlates of recognition memory for emotional faces and scenes.

    Science.gov (United States)

    Keightley, Michelle L; Chiew, Kimberly S; Anderson, John A E; Grady, Cheryl L

    2011-01-01

    We examined the influence of emotional valence and type of item to be remembered on brain activity during recognition, using faces and scenes. We used multivariate analyses of event-related fMRI data to identify whole-brain patterns, or networks of activity. Participants demonstrated better recognition for scenes vs faces and for negative vs neutral and positive items. Activity was increased in extrastriate cortex and inferior frontal gyri for emotional scenes, relative to neutral scenes and all face types. Increased activity in these regions also was seen for negative faces relative to positive faces. Correct recognition of negative faces and scenes (hits vs correct rejections) was associated with increased activity in amygdala, hippocampus, extrastriate, frontal and parietal cortices. Activity specific to correctly recognized emotional faces, but not scenes, was found in sensorimotor areas and rostral prefrontal cortex. These results suggest that emotional valence and type of visual stimulus both modulate brain activity at recognition, and influence multiple networks mediating visual, memory and emotion processing. The contextual information in emotional scenes may facilitate memory via additional visual processing, whereas memory for emotional faces may rely more on cognitive control mediated by rostrolateral prefrontal regions.

  7. A motivational determinant of facial emotion recognition: regulatory focus affects recognition of emotions in faces.

    Science.gov (United States)

    Sassenrath, Claudia; Sassenberg, Kai; Ray, Devin G; Scheiter, Katharina; Jarodzka, Halszka

    2014-01-01

    Two studies examined an unexplored motivational determinant of facial emotion recognition: observer regulatory focus. It was predicted that a promotion focus would enhance facial emotion recognition relative to a prevention focus, because the attentional strategies associated with a promotion focus enhance performance on well-learned or innate tasks such as facial emotion recognition. In Study 1, a promotion or a prevention focus was experimentally induced, and better facial emotion recognition was observed under a promotion focus than under a prevention focus. In Study 2, individual differences in chronic regulatory focus were assessed, and attention allocation was measured using eye tracking during the facial emotion recognition task. Results indicated that the positive relation between a promotion focus and facial emotion recognition is mediated by shorter fixation durations on the face, which reflect a pattern of attention allocation matched to the eager strategy of a promotion focus (i.e., striving to make hits). A prevention focus had an impact neither on perceptual processing nor on facial emotion recognition. Taken together, these findings demonstrate important mechanisms and consequences of observer motivational orientation for facial emotion recognition.

  8. Emotion-attention interactions in recognition memory for distractor faces.

    Science.gov (United States)

    Srinivasan, Narayanan; Gupta, Rashmi

    2010-04-01

    Effective filtering of distractor information has been shown to depend on perceptual load. Given the salience of emotional information and the presence of emotion-attention interactions, we explored recognition memory for emotional distractors as a function of focused versus distributed attention, by manipulating load and the spatial spread of attention. We performed two experiments measuring recognition memory for neutral and emotional distractor faces. Participants performed a color discrimination task (low load) or letter identification task (high load) with a letter-string display in Experiment 1, and a high-load letter identification task with letters presented in a circular array in Experiment 2. The stimuli were presented against a distractor face background. The recognition memory results show that happy faces were recognized better than sad faces under conditions of less focused or distributed attention. When attention was more spatially focused, sad faces were recognized better than happy faces. The study provides evidence for emotion-attention interactions in which specific emotional information (sad or happy) is associated with focused or distributed attention, respectively. Distractor processing of emotional information also has implications for theories of attention.

  9. Face Recognition, Musical Appraisal, and Emotional Crossmodal Bias.

    Science.gov (United States)

    Invitto, Sara; Calcagnì, Antonio; Mignozzi, Arianna; Scardino, Rosanna; Piraino, Giulia; Turchi, Daniele; De Feudis, Irio; Brunetti, Antonio; Bevilacqua, Vitoantonio; de Tommaso, Marina

    2017-01-01

    Recent research on the crossmodal integration of visual and auditory perception suggests that evaluations of emotional information in one sensory modality may tend toward the emotional value generated in another sensory modality. This implies that the emotions elicited by musical stimuli can influence the perception of emotional stimuli presented in other sensory modalities, through a top-down process. The aim of this work was to investigate how crossmodal perceptual processing influences emotional face recognition and how potential modulation of this processing induced by music could be influenced by the subject's musical competence. We investigated how emotional face recognition processing could be modulated by listening to music and how this modulation varies according to the subjective emotional salience of the music and the listener's musical competence. The sample consisted of 24 participants: 12 professional musicians and 12 university students (non-musicians). Participants performed an emotional go/no-go task whilst listening to music by Albeniz, Chopin, or Mozart. The target stimuli were emotionally neutral facial expressions. We examined the N170 event-related potential (ERP) and behavioral responses (i.e., motor reaction time to target recognition and musical emotional judgment). A linear mixed-effects model and a decision-tree learning technique were applied to N170 amplitudes and latencies. The main findings were that musicians' behavioral responses and N170 components were more affected by the emotional value of the music administered during the emotional go/no-go task, and that this bias was also apparent in responses to the non-target emotional faces. This suggests that emotional information coming from multiple sensory channels activates a crossmodal integration process that depends upon the stimuli's emotional salience and the listener's appraisal.

  10. Emotion recognition: the role of featural and configural face information.

    Science.gov (United States)

    Bombari, Dario; Schmid, Petra C; Schmid Mast, Marianne; Birri, Sandra; Mast, Fred W; Lobmaier, Janek S

    2013-01-01

    Several studies have investigated the role of featural and configural information when processing facial identity. Much less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, and scrambled faces contain mainly featural information. Inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A') and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information. While the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.

  11. Mapping the emotional face. How individual face parts contribute to successful emotion recognition.

    Science.gov (United States)

    Wegrzyn, Martin; Vogt, Maria; Kireclioglu, Berna; Schneider, Julia; Kissler, Johanna

    2017-01-01

    Which facial features allow human observers to successfully recognize expressions of emotion? While the eyes and mouth have frequently been shown to be of high importance, research on facial action units has made more precise predictions about the areas involved in displaying each emotion. The present research investigated, on a fine-grained level, which physical features are most relied on when decoding facial expressions. In the experiment, individual faces expressing the basic emotions according to Ekman were hidden behind a mask of 48 tiles, which was sequentially uncovered. Participants were instructed to stop the sequence as soon as they recognized the facial expression and to assign it the correct label. For each part of the face, its contribution to successful recognition was computed, allowing us to visualize the importance of different face areas for each expression. Overall, observers relied mostly on the eye and mouth regions when successfully recognizing an emotion. Furthermore, the difference in the importance of eyes and mouth allowed the expressions to be grouped in a continuous space, ranging from sadness and fear (reliance on the eyes) to disgust and happiness (reliance on the mouth). The face parts with the highest diagnostic value for expression identification were typically located in areas corresponding to action units from the Facial Action Coding System. A similarity analysis of the usefulness of different face parts for expression recognition demonstrated that faces cluster according to the emotion they express rather than by low-level physical features. Also, expressions relying more on the eyes or mouth region were in close proximity in the constructed similarity space. These analyses help to better understand how human observers process expressions of emotion, by delineating the mapping from facial features to psychological representation.

  12. Facial emotion recognition, face scan paths, and face perception in children with neurofibromatosis type 1.

    Science.gov (United States)

    Lewis, Amelia K; Porter, Melanie A; Williams, Tracey A; Bzishvili, Samantha; North, Kathryn N; Payne, Jonathan M

    2017-05-01

    This study aimed to investigate face scan paths and face perception abilities in children with Neurofibromatosis Type 1 (NF1) and how these might relate to emotion recognition abilities in this population. The authors investigated facial emotion recognition, face scan paths, and face perception in 29 children with NF1 compared to 29 chronological age-matched typically developing controls. Correlations between facial emotion recognition, face scan paths, and face perception in children with NF1 were examined. Children with NF1 displayed significantly poorer recognition of fearful expressions compared to controls, as well as a nonsignificant trend toward poorer recognition of anger. Although there was no significant difference between groups in time spent viewing individual core facial features (eyes, nose, mouth, and nonfeature regions), children with NF1 spent significantly less time than controls viewing the face as a whole. Children with NF1 also displayed significantly poorer face perception abilities than typically developing controls. Facial emotion recognition deficits were not significantly associated with aberrant face scan paths or face perception abilities in the NF1 group. These results suggest that impairments in the perception, identification, and interpretation of information from faces are important aspects of the social-cognitive phenotype of NF1.

  13. The social face of emotion recognition: Evaluations versus stereotypes

    NARCIS (Netherlands)

    Bijlstra, G.; Holland, R.W.; Wigboldus, D.H.J.

    2010-01-01

    The goal of the present paper was to demonstrate the influence of general evaluations and stereotype associations on emotion recognition. Earlier research has shown that evaluative connotations between social category members and emotional expression predict whether recognition of positive or…

  14. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    Science.gov (United States)

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  15. Emotion processing in chimeric faces: hemispheric asymmetries in expression and recognition of emotions.

    Science.gov (United States)

    Indersmitten, Tim; Gur, Ruben C

    2003-05-01

    Since the discovery of facial asymmetries in emotional expressions of humans and other primates, hypotheses have related the greater left-hemiface intensity to right-hemispheric dominance in emotion processing. However, the difficulty of creating true frontal views of facial expressions in two-dimensional photographs has confounded efforts to better understand the phenomenon. We have recently described a method for obtaining three-dimensional photographs of posed and evoked emotional expressions and used these stimuli to investigate both intensity of expression and accuracy of recognizing emotion in chimeric faces constructed from only left- or right-side composites. The participant population included 38 (19 male, 19 female) African-American, Caucasian, and Asian adults. They were presented with chimeric composites generated from faces of eight actors and eight actresses showing four emotions: happiness, sadness, anger, and fear, each in posed and evoked conditions. We replicated the finding that emotions are expressed more intensely in the left hemiface for all emotions and conditions, with the exception of evoked anger, which was expressed more intensely in the right hemiface. In contrast, emotional expressions were recognized more efficiently from the right hemiface, indicating that the right hemiface expresses emotions more accurately. The double dissociation between the laterality of expression intensity and that of recognition efficiency supports the notion that the two kinds of processes may have distinct neural substrates. Evoked anger is unique in being expressed more intensely and accurately on the side of the face that projects to the viewer's right hemisphere, which is dominant in emotion recognition.

  16. Child's recognition of emotions in robot's face and body

    NARCIS (Netherlands)

    Cohen, I.; Looije, R.; Neerincx, M.A.

    2011-01-01

    Social robots can comfort and support children who have to cope with chronic diseases. In previous studies, a "facial robot", the iCat, proved to show well-recognized emotional expressions that are important in social interactions. The question is whether a mobile robot without a face, the Nao, can express…

  17. Body expressions influence recognition of emotions in the face and voice.

    Science.gov (United States)

    Van den Stock, Jan; Righart, Ruthger; de Gelder, Beatrice

    2007-08-01

    The most familiar emotional signals consist of faces, voices, and whole-body expressions, but so far research on emotions expressed by the whole body is sparse. The authors investigated recognition of whole-body expressions of emotion in three experiments. In the first experiment, participants performed a body expression-matching task. Results indicate good recognition of all emotions, with fear being the hardest to recognize. In the second experiment, two-alternative forced-choice categorizations of the facial expression of a compound face-body stimulus were strongly influenced by the bodily expression. This effect was a function of the ambiguity of the facial expression. In the third experiment, recognition of emotional tone of voice was similarly influenced by task-irrelevant emotional body expressions. Taken together, the findings illustrate the importance of emotional whole-body expressions in communication either when viewed on their own or, as is often the case in realistic circumstances, in combination with facial expressions and emotional voices.

  18. Face recognition in emotional scenes: observers remember the eye shape but forget the nose.

    Science.gov (United States)

    Ryan, Kaitlin F; Schwartz, Noah Z

    2013-01-01

    Face recognition is believed to be a highly specialized process that allows individuals to recognize faces faster and more accurately than ordinary objects. However, when faces are viewed in highly emotional contexts, the process becomes slower and less accurate. This suggests a change in recognition strategy compared to recognition in non-arousing contexts. Here we explore this finding by using a novel paradigm to determine which face dimensions are most important for recognizing faces that were initially encoded in highly emotional contexts. Participants were asked to recognize faces from a 3-alternative display after viewing a similar face that was embedded in either a neutral, positive, or negative emotional scene. Results showed that individuals rely on eye shape when recognizing faces that were encoded while embedded in either positive or negative emotional contexts, and ignore nose shape when recognizing faces that were encoded while embedded in negative emotional scenes. The findings suggest that, after encoding a face during heightened emotional arousal, individuals are more likely to commit errors when identifying the face on the basis of nose shape, and less likely to commit errors when identifying it on the basis of eye shape.

  19. Modified SIFT Descriptors for Face Recognition under Different Emotions

    Directory of Open Access Journals (Sweden)

    Nirvair Neeru

    2016-01-01

    The main goal of this work is to develop a fully automatic face recognition algorithm. The Scale Invariant Feature Transform (SIFT) has been used only sparingly in face recognition. In this paper, a Modified SIFT (MSIFT) approach is proposed to enhance the recognition performance of SIFT. The work proceeds in three steps. First, the image is smoothed using the discrete wavelet transform (DWT). Second, the computational complexity of SIFT's descriptor calculation is reduced by subtracting the average from each descriptor instead of normalizing it. Third, the algorithm is made automatic by using the coefficient of correlation (CoC) instead of the distance ratio (which requires user interaction). The main achievement of this method is a reduced database size, as only neutral images need to be stored instead of every expression of the same face. The experiments are performed on the Japanese Female Facial Expression (JAFFE) database and indicate that the proposed approach achieves better performance than SIFT-based methods. In addition, it shows robustness against various facial expressions.
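
    Read literally, the abstract's three steps suggest a pipeline along the following lines. This is a hedged sketch, not the paper's code: the OpenCV SIFT implementation, the pywt wavelet package, and the choice of the 'haar' wavelet are all assumptions made for illustration.

    ```python
    import cv2
    import numpy as np
    import pywt

    def msift_descriptors(gray):
        # Step 1: smooth the image with a single-level 2-D DWT, keeping the
        # approximation (low-frequency) coefficients.
        approx, _ = pywt.dwt2(gray.astype(np.float32), "haar")
        smoothed = cv2.normalize(approx, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        # Step 2: compute standard SIFT descriptors on the smoothed image...
        sift = cv2.SIFT_create()
        _, desc = sift.detectAndCompute(smoothed, None)
        # ...but subtract each descriptor's mean instead of normalizing it.
        return desc - desc.mean(axis=1, keepdims=True)

    def match_score(desc_a, desc_b):
        # Step 3: match via the coefficient of correlation, replacing the
        # user-tuned distance-ratio test; higher means a better match.
        best = [max(np.corrcoef(a, b)[0, 1] for b in desc_b) for a in desc_a]
        return float(np.mean(best))
    ```

    A gallery would then store MSIFT descriptors of one neutral image per person, and a probe face would be assigned to the identity with the highest match_score.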

  20. Emotional facial expressions differentially influence predictions and performance for face recognition.

    Science.gov (United States)

    Nomi, Jason S; Rhodes, Matthew G; Cleary, Anne M

    2013-01-01

    This study examined how participants' predictions of future memory performance are influenced by emotional facial expressions. Participants made judgements of learning (JOLs) predicting the likelihood that they would correctly identify a face displaying a happy, angry, or neutral emotional expression in a future two-alternative forced-choice recognition test of identity (i.e., recognition that a person's face was seen before). JOLs were higher for studied faces with happy and angry emotional expressions than for neutral faces. However, neutral test faces with studied neutral expressions had significantly higher identity recognition rates than neutral test faces studied with happy or angry expressions. Thus, these data are the first to demonstrate that people believe happy and angry emotional expressions will lead to better identity recognition in the future relative to neutral expressions. This occurred despite the fact that neutral expressions elicited better identity recognition than happy and angry expressions. These findings contribute to the growing literature examining the interaction of cognition and emotion.

  21. Face Processing and Facial Emotion Recognition in Adults with Down Syndrome

    Science.gov (United States)

    Barisnikov, Koviljka; Hippolyte, Loyse; Van der Linden, Martial

    2008-01-01

    Face processing and facial expression recognition was investigated in 17 adults with Down syndrome, and results were compared with those of a child control group matched for receptive vocabulary. On the tasks involving faces without emotional content, the adults with Down syndrome performed significantly worse than did the controls. However, their…

  22. No Differences in Emotion Recognition Strategies in Children with Autism Spectrum Disorder: Evidence from Hybrid Faces

    Directory of Open Access Journals (Sweden)

    Kris Evers

    2014-01-01

    Emotion recognition problems are frequently reported in individuals with an autism spectrum disorder (ASD). However, this research area is characterized by inconsistent findings, with atypical emotion processing strategies possibly contributing to existing contradictions. In addition, an attenuated saliency of the eyes region is often demonstrated in ASD during face identity processing. We wanted to compare reliance on mouth versus eyes information in children with and without ASD, using hybrid facial expressions. A group of six-to-eight-year-old boys with ASD and an age- and intelligence-matched typically developing (TD) group without intellectual disability performed an emotion labelling task with hybrid facial expressions. Five static expressions were used: one neutral expression and four emotional expressions, namely anger, fear, happiness, and sadness. Hybrid faces were created, consisting of an emotional face half (upper or lower face region) with the other face half showing a neutral expression. Results showed no emotion recognition problem in ASD. Moreover, we provided evidence for the existence of top- and bottom-emotions in children: correct identification of expressions mainly depends on information in the eyes (so-called top-emotions: happiness) or in the mouth region (so-called bottom-emotions: sadness, anger, and fear). No stronger reliance on mouth information was found in children with ASD.

  23. [Non-conscious perception of emotional faces affects visual object recognition].

    Science.gov (United States)

    Gerasimenko, N Iu; Slavutskaia, A V; Kalinin, S A; Mikhaĭlova, E S

    2013-01-01

    In 34 healthy subjects we analyzed accuracy and reaction time (RT) during the recognition of complex visual images: pictures of animals and non-living objects. The target stimuli were preceded by brief presentations of masking non-target stimuli, which were drawings of emotional (angry, fearful, happy) or neutral faces. We found that, in contrast to accuracy, RT depended on the emotional expression of the preceding faces: RT was significantly shorter when the target objects were paired with angry or fearful faces than with happy or neutral ones. These effects depended on the category of the target stimulus and were more prominent for objects than for animals. Further, the effects of the emotional faces were determined by emotional and communication personality traits (assessed with Cattell's questionnaire) and were more clearly defined in more sensitive, anxious, and pessimistic introverts. The data are important for understanding how non-consciously processed emotional information shapes human visual behavior.

  24. Effects of acute psychosocial stress on neural activity to emotional and neutral faces in a face recognition memory paradigm.

    Science.gov (United States)

    Li, Shijia; Weerda, Riklef; Milde, Christopher; Wolf, Oliver T; Thiel, Christiane M

    2014-12-01

    Previous studies have shown that acute psychosocial stress impairs declarative recognition memory and that emotional material is especially sensitive to this effect. Animal studies suggest a central role of the amygdala, which modulates memory processes in the hippocampus, prefrontal cortex and other brain areas. We used functional magnetic resonance imaging (fMRI) to investigate neural correlates of stress-induced modulation of emotional recognition memory in humans. Twenty-seven healthy, right-handed, non-smoking male volunteers performed an emotional face recognition task. During encoding, participants were presented with 50 fearful and 50 neutral faces. One hour later, they underwent either a stress (Trier Social Stress Test) or a control procedure outside the scanner, which was followed immediately by the recognition session inside the scanner, where participants had to discriminate between 100 old and 50 new faces. Stress increased salivary cortisol, blood pressure and pulse, and decreased the mood of participants, but did not impact recognition memory. BOLD data during recognition revealed a stress-condition-by-emotion interaction in the left inferior frontal gyrus and right hippocampus, which was due to a stress-induced increase of neural activity to fearful faces and a decrease to neutral faces. Functional connectivity analyses revealed a stress-induced increase in coupling between the right amygdala and the right fusiform gyrus when processing fearful as compared to neutral faces. Our results provide evidence that acute psychosocial stress affects medial temporal and frontal brain areas differentially for neutral and emotional items, with stress-induced privileged processing of emotional stimuli.

  25. Emotional Faces in Context: Age Differences in Recognition Accuracy and Scanning Patterns

    Science.gov (United States)

    Noh, Soo Rim; Isaacowitz, Derek M.

    2014-01-01

    While age-related declines in facial expression recognition are well documented, previous research relied mostly on isolated faces devoid of context. We investigated the effects of context on age differences in recognition of facial emotions and in visual scanning patterns of emotional faces. While their eye movements were monitored, younger and older participants viewed facial expressions (i.e., anger, disgust) in contexts that were emotionally congruent, incongruent, or neutral to the facial expression to be identified. Both age groups had highest recognition rates of facial expressions in the congruent context, followed by the neutral context, and recognition rates in the incongruent context were worst. These context effects were more pronounced for older adults. Compared to younger adults, older adults exhibited a greater benefit from congruent contextual information, regardless of facial expression. Context also influenced the pattern of visual scanning characteristics of emotional faces in a similar manner across age groups. In addition, older adults initially attended more to context overall. Our data highlight the importance of considering the role of context in understanding emotion recognition in adulthood.

  26. Face to face: blocking facial mimicry can selectively impair recognition of emotional expressions.

    Science.gov (United States)

    Oberman, Lindsay M; Winkielman, Piotr; Ramachandran, Vilayanur S

    2007-01-01

    People spontaneously mimic a variety of behaviors, including emotional facial expressions. Embodied cognition theories suggest that mimicry reflects internal simulation of perceived emotion in order to facilitate its understanding. If so, blocking facial mimicry should impair recognition of expressions, especially of emotions that are simulated using facial musculature. The current research tested this hypothesis using four expressions (happy, disgust, fear, and sad) and two mimicry-interfering manipulations: (1) biting on a pen and (2) chewing gum, as well as two control conditions. Experiment 1 used electromyography over cheek, mouth, and nose regions. The bite manipulation consistently activated the assessed muscles, whereas the chew manipulation activated muscles only intermittently. Further, expressing happiness generated the most facial action. Experiment 2 found that the bite manipulation interfered most with recognition of happiness. These findings suggest that facial mimicry differentially contributes to recognition of specific facial expressions, thus allowing for more refined predictions from embodied cognition theories.

  27. Two years after epilepsy surgery in children: Recognition of emotions expressed by faces

    NARCIS (Netherlands)

    Braams, Olga; Meekes, Joost; van Nieuwenhuizen, Onno; Schappin, Renske; van Rijen, Peter C.; Veenstra, Wencke; Braun, Kees; Jennekens-Schinkel, Aag

    2015-01-01

    Objectives: The purpose of this study was to determine whether children with a history of epilepsy surgery are able to recognize emotions expressed by faces, and whether this recognition is associated with demographic variables [age, sex, and verbal intelligence (VIQ)] and/or epilepsy variables…

  28. Emotion Recognition in Faces and the Use of Visual Context in Young People with High-Functioning Autism Spectrum Disorders

    Science.gov (United States)

    Wright, Barry; Clarke, Natalie; Jordan, Jo; Young, Andrew W.; Clarke, Paula; Miles, Jeremy; Nation, Kate; Clarke, Leesa; Williams, Christine

    2008-01-01

    We compared young people with high-functioning autism spectrum disorders (ASDs) with age, sex and IQ matched controls on emotion recognition of faces and pictorial context. Each participant completed two tests of emotion recognition. The first used Ekman series faces. The second used facial expressions in visual context. A control task involved…

  29. Emotional face recognition deficit in amnestic patients with mild cognitive impairment: behavioral and electrophysiological evidence

    Directory of Open Access Journals (Sweden)

    Yang, Linlin; Zhao, Xiaochuan; Wang, Lan; Yu, Lulu; Song, Mei; Wang, Xueyi

    2015-08-01

    Amnestic mild cognitive impairment (MCI) has been conceptualized as a transitional stage between healthy aging and Alzheimer's disease. Thus, understanding emotional face recognition deficits in patients with amnestic MCI could be useful in determining the progression of amnestic MCI. The purpose of this study was to investigate the features of emotional face processing in amnestic MCI by using event-related potentials (ERPs). Patients with amnestic MCI and healthy controls performed a face recognition task, giving old/new responses to previously studied and novel faces with different emotional messages as the stimulus material. Using the learning-recognition paradigm, the experiments were divided into two steps, i.e., a learning phase and a test phase. ERPs were analyzed from electroencephalographic recordings. The behavioral data indicated high emotion classification accuracy for patients with amnestic MCI and for healthy controls: the mean percentage of correct classifications was 81.19% for patients with amnestic MCI and 96.46% for controls. Our ERP data suggest that patients with amnestic MCI were still able to undertake personalized processing for negative faces, but not for neutral or positive faces, in the early frontal processing stage. In the early time window, no differences in the frontal old/new effect were found between patients with amnestic MCI and normal controls. However, in the late time window, the three types of stimuli did not elicit any parietal old/new effects in patients with amnestic MCI, suggesting that their recollection was impaired. This impairment may be closely associated with amnestic MCI disease. We conclude from our data that face recognition processing and emotional memory is…

  30. Face and Emotion Recognition in MCDD versus PDD-NOS

    Science.gov (United States)

    Herba, Catherine M.; de Bruin, Esther; Althaus, Monika; Verheij, Fop; Ferdinand, Robert F.

    2008-01-01

    Previous studies indicate that Multiple Complex Developmental Disorder (MCDD) children differ from PDD-NOS and autistic children on a symptom level and on psychophysiological functioning. Children with MCDD (n = 21) and PDD-NOS (n = 62) were compared on two facets of social-cognitive functioning: identification of neutral faces and facial…

  31. Scanning patterns of faces do not explain impaired emotion recognition in Huntington disease: Evidence for a high-level mechanism

    Directory of Open Access Journals (Sweden)

    Marieke van Asselen

    2012-02-01

    Previous studies in patients with amygdala lesions suggested that deficits in emotion recognition might be mediated by impaired scanning patterns of faces. Here we investigated whether scanning patterns also contribute to the selective impairment in recognition of disgust in Huntington disease (HD). To achieve this goal, we recorded eye movements during a two-alternative forced-choice emotion recognition task. HD patients in presymptomatic (n=16) and symptomatic (n=9) disease stages were tested and their performance was compared to a control group (n=22). In our emotion recognition task, participants had to indicate whether a face reflected one of six basic emotions. In addition, and in order to determine whether emotion recognition was altered when participants were forced to look at a specific component of the face, we used a second task where only limited facial information was provided (eyes/mouth in partially masked faces). Behavioural results showed no differences in the ability to recognize emotions between presymptomatic gene carriers and controls. However, an emotion recognition deficit was found for all six basic emotion categories in early-stage HD. Analysis of eye movement patterns showed that patients and controls used similar scanning strategies. Patterns of deficits were similar regardless of whether parts of the faces were masked or not, confirming that selective attention to particular face parts does not underlie the deficits. These results suggest that the emotion recognition deficits in symptomatic HD patients cannot be explained by impaired scanning patterns of faces. Furthermore, no selective deficit for recognition of disgust was found in presymptomatic HD patients.

  32. Emotional face recognition in adolescent suicide attempters and adolescents engaging in non-suicidal self-injury.

    Science.gov (United States)

    Seymour, Karen E; Jones, Richard N; Cushman, Grace K; Galvan, Thania; Puzia, Megan E; Kim, Kerri L; Spirito, Anthony; Dickstein, Daniel P

    2016-03-01

    Little is known about the bio-behavioral mechanisms underlying and differentiating suicide attempts from non-suicidal self-injury (NSSI) in adolescents. Adolescents who attempt suicide or engage in NSSI often report significant interpersonal and social difficulties. Emotional face recognition ability is a fundamental skill required for successful social interactions, and deficits in this ability may provide insight into the unique brain-behavior interactions underlying suicide attempts versus NSSI in adolescents. Therefore, we examined emotional face recognition ability among three mutually exclusive groups: (1) inpatient adolescents who attempted suicide (SA, n = 30); (2) inpatient adolescents engaged in NSSI (NSSI, n = 30); and (3) typically developing controls (TDC, n = 30) without psychiatric illness. Participants included adolescents aged 13-17 years, matched on age, gender and full-scale IQ. Emotional face recognition was evaluated using the Diagnostic Assessment of Nonverbal Accuracy (DANVA-2). Compared to TDC youth, adolescents with NSSI made more errors on child fearful and adult sad face recognition, while controlling for psychopathology and medication status. Adolescent inpatients engaged in NSSI showed greater deficits in emotional face recognition than TDC youth, but not greater than inpatient adolescents who attempted suicide. Further results suggest the importance of psychopathology in emotional face recognition. Replication of these preliminary results and examination of the role of context-dependent emotional processing are needed moving forward.

  33. Emotional face recognition deficits and medication effects in pre-manifest through stage-II Huntington's disease.

    Science.gov (United States)

    Labuschagne, Izelle; Jones, Rebecca; Callaghan, Jenny; Whitehead, Daisy; Dumas, Eve M; Say, Miranda J; Hart, Ellen P; Justo, Damian; Coleman, Allison; Dar Santos, Rachelle C; Frost, Chris; Craufurd, David; Tabrizi, Sarah J; Stout, Julie C

    2013-05-15

    Facial emotion recognition impairments have been reported in Huntington's disease (HD). However, the nature of the impairments across the spectrum of HD remains unclear. We report on emotion recognition data from 344 participants comprising premanifest HD (PreHD) and early HD patients, and controls. In a test of recognition of facial emotions, we examined responses to six basic emotional expressions and neutral expressions. In addition, and within the early HD sample, we tested for differences in emotion recognition performance between those 'on' vs. 'off' neuroleptic or selective serotonin reuptake inhibitor (SSRI) medications. The PreHD groups showed significantly impaired recognition, compared to controls, of fearful, angry and surprised faces, whereas the early HD groups were significantly impaired across all emotions including neutral expressions. In early HD, neuroleptic use was associated with worse facial emotion recognition, whereas SSRI use was associated with better facial emotion recognition. The findings suggest that emotion recognition impairments exist across the HD spectrum, but are relatively more widespread in manifest HD than in the premanifest period. Commonly prescribed medications to treat HD-related symptoms also appear to affect emotion recognition. These findings have important implications for interpersonal communication and medication usage in HD.

  34. Recognition memory for low- and high-frequency-filtered emotional faces: Low spatial frequencies drive emotional memory enhancement, whereas high spatial frequencies drive the emotion-induced recognition bias.

    Science.gov (United States)

    Rohr, Michaela; Tröger, Johannes; Michely, Nils; Uhde, Alarith; Wentura, Dirk

    2017-02-17

    This article deals with two well-documented phenomena regarding emotional stimuli: emotional memory enhancement (better long-term memory for emotional than for neutral stimuli) and the emotion-induced recognition bias (a more liberal response criterion for emotional than for neutral stimuli). Studies on visual emotion perception and attention suggest that emotion-related processes can be modulated by means of spatial-frequency filtering of the presented emotional stimuli. Specifically, low spatial frequencies are assumed to play a primary role in the influence of emotion on attention and judgment. Given this theoretical background, we investigated whether spatial-frequency filtering also impacts (1) the memory advantage for emotional faces and (2) the emotion-induced recognition bias, in a series of old/new recognition experiments. Participants completed incidental-learning tasks with high- (HSF) and low- (LSF) spatial-frequency-filtered emotional and neutral faces. The results of the surprise recognition tests showed a clear memory advantage for emotional stimuli. Most importantly, the emotional memory enhancement was significantly larger for face images containing only low-frequency information (LSF faces) than for HSF faces across all experiments, suggesting that LSF information plays a critical role in this effect, whereas the emotion-induced recognition bias was found only for HSF stimuli. We discuss our findings in terms of both the traditional account of different processing pathways for HSF and LSF information and a stimulus-features account. The double dissociation in the results favors the latter account, that is, an explanation in terms of differences in the characteristics of HSF and LSF stimuli.
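
    The HSF/LSF manipulation described here is, at its core, low-pass and high-pass spatial filtering of the face images. Below is a minimal sketch assuming a Gaussian low-pass filter; the sigma value is an arbitrary illustration, not the cutoff (in cycles per face) that the study used.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def split_spatial_frequencies(face, sigma=8.0):
        """face: 2-D grayscale image array; returns (lsf, hsf) versions."""
        lsf = gaussian_filter(face.astype(float), sigma=sigma)  # low-pass
        hsf = face - lsf + face.mean()  # high-pass residual, re-centred for display
        return lsf, hsf
    ```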

  3. Human Emotion Recognition System

    Directory of Open Access Journals (Sweden)

    Dilbag Singh

    2012-08-01

    Full Text Available This paper discusses the application of facial-expression feature extraction combined with a neural network for the recognition of different facial emotions (happy, sad, angry, fearful, surprised, neutral, etc.). Humans are capable of producing thousands of facial actions during communication that vary in complexity, intensity, and meaning. The paper analyses the limitations of an existing system for emotion recognition from brain activity. Using an existing simulator, the author achieved 97 percent accurate results in a way that is easier and simpler than emotion recognition from brain activity. The proposed system relies on the human face, since the face also reflects brain activity and emotions. A neural network is used for better results. The paper ends with a comparison of existing human emotion recognition systems with the new one.
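
    The pipeline this record describes, facial features fed to a neural network classifier over six emotion categories, can be sketched generically. This is not the paper's simulator: the features and labels below are synthetic stand-ins, and the 97 percent figure is not reproduced.

```python
# Generic feature-vector -> neural-network emotion classifier sketch.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
EMOTIONS = ["happy", "sad", "angry", "fearful", "surprised", "neutral"]

# Pretend geometric features (e.g., mouth width, brow height) per face image.
X = rng.normal(size=(600, 10))
y = rng.integers(0, len(EMOTIONS), size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_tr, y_tr)
print("accuracy on synthetic data:", clf.score(X_te, y_te))
```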

  4. Effects of Training of Affect Recognition on the recognition and visual exploration of emotional faces in schizophrenia.

    Science.gov (United States)

    Drusch, Katharina; Stroth, Sanna; Kamp, Daniel; Frommann, Nicole; Wölwer, Wolfgang

    2014-11-01

    Schizophrenia patients have impairments in facial affect recognition and display scanpath abnormalities during the visual exploration of faces. These abnormalities are characterized by fewer fixations on salient feature areas and longer fixation durations. The present study investigated whether social-cognitive remediation not only improves performance in facial affect recognition but also normalizes patients' gaze behavior while looking at faces. Within a 2 × 2-design (group × time), 16 schizophrenia patients and 16 healthy controls performed a facial affect recognition task with concomitant infrared oculography at baseline (T0) and after six weeks (T1). Between the measurements, patients completed the Training of Affect Recognition (TAR) program. The influence of the training on facial affect recognition (percent of correct answers) and gaze behavior (number and mean duration of fixations into salient or non-salient facial areas) was assessed. In line with former studies, at baseline patients showed poorer facial affect recognition than controls and aberrant scanpaths, and after TAR facial affect recognition was improved. Concomitant with improvements in performance, the number of fixations in feature areas ('mouth') increased while fixations in non-feature areas ('white space') decreased. However, the change in fixation behavior did not correlate with the improvement in performance. After TAR, patients pay more attention to facial areas that contain information about a displayed emotion. Although this may contribute to the improved performance, the lack of a statistical correlation implies that this factor is not sufficient to explain the underlying mechanism of the treatment effect. Copyright © 2014 Elsevier B.V. All rights reserved.
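
    The gaze measures reported above (number and mean duration of fixations in salient vs. non-salient facial areas) amount to a small bookkeeping exercise over fixation coordinates. A minimal sketch, with invented rectangular areas of interest and sample fixations:

```python
# Classify fixations into feature vs. non-feature areas and summarize them.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float
    y: float
    duration_ms: float

# (x0, y0, x1, y1) boxes for feature areas on a normalized face image (invented).
AOIS = {"eyes": (20, 20, 80, 40), "mouth": (35, 70, 65, 90)}

def summarize(fixations):
    stats = {name: [0, 0.0] for name in list(AOIS) + ["non-feature"]}
    for f in fixations:
        region = next((n for n, (x0, y0, x1, y1) in AOIS.items()
                       if x0 <= f.x <= x1 and y0 <= f.y <= y1), "non-feature")
        stats[region][0] += 1
        stats[region][1] += f.duration_ms
    # Return (fixation count, mean duration) per region.
    return {n: (count, total / count if count else 0.0)
            for n, (count, total) in stats.items()}

print(summarize([Fixation(50, 30, 220), Fixation(50, 80, 310), Fixation(5, 5, 180)]))
```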

  5. Recognition of Immaturity and Emotional Expressions in Blended Faces by Children with Autism and Other Developmental Disabilities

    Science.gov (United States)

    Gross, Thomas F.

    2008-01-01

    The recognition of facial immaturity and emotional expression by children with autism, language disorders, mental retardation, and non-disabled controls was studied in two experiments. Children identified immaturity and expression in upright and inverted faces. The autism group identified fewer immature faces and expressions than control (Exp. 1 &…

  6. Enhancing Emotion Recognition in Children with Autism Spectrum Conditions: An Intervention Using Animated Vehicles with Real Emotional Faces

    Science.gov (United States)

    Golan, Ofer; Ashwin, Emma; Granader, Yael; McClintock, Suzy; Day, Kate; Leggett, Victoria; Baron-Cohen, Simon

    2010-01-01

    This study evaluated "The Transporters", an animated series designed to enhance emotion comprehension in children with autism spectrum conditions (ASC). n = 20 children with ASC (aged 4-7) watched "The Transporters" every day for 4 weeks. Participants were tested before and after intervention on emotional vocabulary and emotion recognition at three…

  7. Characterization and recognition of mixed emotional expressions in thermal face image

    Science.gov (United States)

    Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita

    2016-05-01

    Facial expressions in infrared imaging have been introduced to overcome the problem of illumination, which is an integral constituent of visible-spectrum imagery. The paper investigates facial skin temperature distributions for mixed thermal facial expressions in a face database created by the authors, in which six expressions are basic and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. The temperature variability of the ROIs in different expressions has been measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in the recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive-emotion-induced and negative-emotion-induced facial features. The supraorbital region is useful for differentiating basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box-and-whisker plots. A facial region containing a mixture of two expressions is generally less temperature-inducing than the corresponding facial region in a basic expression.
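
    The ROI-statistics idea, summarizing each facial region's temperature distribution into a feature vector for expression recognition, can be sketched as follows. The region boxes and the choice of statistics are illustrative assumptions, not the paper's exact parameters.

```python
# Build a per-face feature vector from temperature statistics of facial ROIs.
import numpy as np

def roi_temperature_vector(thermal: np.ndarray, rois: dict) -> np.ndarray:
    """thermal: 2-D array of per-pixel temperatures; rois: name -> (r0, r1, c0, c1)."""
    feats = []
    for r0, r1, c0, c1 in rois.values():
        patch = thermal[r0:r1, c0:c1]
        feats += [patch.mean(), patch.std(), patch.max() - patch.min()]
    return np.array(feats)

# Stand-in thermal image (values in degrees C) and invented ROI boxes.
thermal = 30 + 5 * np.random.default_rng(7).random((120, 160))
rois = {"periorbital": (30, 50, 40, 120), "supraorbital": (10, 28, 40, 120),
        "mouth": (85, 110, 55, 105)}
print(roi_temperature_vector(thermal, rois))
```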

  8. Emotion Recognition

    Science.gov (United States)

    Neiberg, Daniel; Elenius, Kjell; Burger, Susanne

    Studies of expressive speech have shown that discrete emotions such as anger, fear, joy, and sadness can be accurately communicated, also cross-culturally, and that each emotion is associated with reasonably specific acoustic characteristics [8]. However, most previous research has been conducted on acted emotions. These certainly have something in common with naturally occurring emotions but may also be more intense and prototypical than authentic, everyday expressions [6, 13]. Authentic emotions are, on the other hand, often a combination of different affective states and occur rather infrequently in everyday life.

  9. Emotion recognition from facial expressions: a normative study of the Ekman 60-Faces Test in the Italian population.

    Science.gov (United States)

    Dodich, Alessandra; Cerami, Chiara; Canessa, Nicola; Crespi, Chiara; Marcone, Alessandra; Arpone, Marta; Realmuto, Sabrina; Cappa, Stefano F

    2014-07-01

    The Ekman 60-Faces (EK-60F) Test is a well-known neuropsychological tool assessing emotion recognition from facial expressions. It is the most employed task for research purposes in psychiatric and neurological disorders, including neurodegenerative diseases such as the behavioral variant of Frontotemporal Dementia (bvFTD). Despite its remarkable usefulness in the social cognition research field, to date there are still no normative data for the Italian population, thus limiting its application in a clinical context. In this study, we report procedures and normative data for the Italian version of the test. One hundred and thirty-two healthy Italian participants aged between 20 and 79 years with at least 5 years of education were recruited on a voluntary basis. They were administered the EK-60F Test from the Ekman and Friesen series of Pictures of Facial Affect after a preliminary semantic recognition test of the six basic emotions (i.e., anger, fear, sadness, happiness, disgust, surprise). Data were analyzed according to the Capitani procedure [1]. The regression analysis revealed significant effects of demographic variables, with younger, more educated, female subjects showing higher scores. Normative data were then applied to a sample of 15 bvFTD patients, who showed globally impaired performance on the task, consistent with the clinical condition. We provide EK-60F Test normative data for the Italian population, allowing the investigation of global emotion recognition ability as well as of selective impairment in the recognition of basic emotions, for both clinical and research purposes.

  10. Emotion recognition through static faces and moving bodies: a comparison between typically-developed adults and individuals with high level of autistic traits

    Directory of Open Access Journals (Sweden)

    Rossana eActis-Grosso

    2015-10-01

    Full Text Available We investigated whether the type of stimulus (pictures of static faces vs. body motion) contributes differently to the recognition of emotions. The performance (accuracy and response times) of 25 Low Autistic Traits (LAT group) young adults (21 males) and 20 young adults (16 males) with either High Autistic Traits (HAT group) or with High Functioning Autism Spectrum Disorder was compared in the recognition of four emotions (Happiness, Anger, Fear and Sadness) either shown in static faces or conveyed by moving bodies (patch-light displays, PLDs). Overall, HAT individuals were as accurate as LAT ones in perceiving emotions both with faces and with PLDs. Moreover, they correctly described non-emotional actions depicted by PLDs, indicating that they perceived the motion conveyed by the PLDs per se. For LAT participants, happiness proved to be the easiest emotion to be recognized: in line with previous studies we found a happy face advantage for faces, which for the first time was also found for bodies (happy body advantage). Furthermore, LAT participants recognized sadness better by static faces and fear by PLDs. This advantage for motion kinematics in the recognition of fear was not present in HAT participants, suggesting that (i) emotion recognition is not generally impaired in HAT individuals, (ii) the cues exploited for emotion recognition by LAT and HAT groups are not always the same. These findings are discussed against the background of emotional processing in typically and atypically developed individuals.

  11. Emotion recognition through static faces and moving bodies: a comparison between typically developed adults and individuals with high level of autistic traits

    Science.gov (United States)

    Actis-Grosso, Rossana; Bossi, Francesco; Ricciardelli, Paola

    2015-01-01

    We investigated whether the type of stimulus (pictures of static faces vs. body motion) contributes differently to the recognition of emotions. The performance (accuracy and response times) of 25 Low Autistic Traits (LAT group) young adults (21 males) and 20 young adults (16 males) with either High Autistic Traits or with High Functioning Autism Spectrum Disorder (HAT group) was compared in the recognition of four emotions (Happiness, Anger, Fear, and Sadness) either shown in static faces or conveyed by moving body patch-light displays (PLDs). Overall, HAT individuals were as accurate as LAT ones in perceiving emotions both with faces and with PLDs. Moreover, they correctly described non-emotional actions depicted by PLDs, indicating that they perceived the motion conveyed by the PLDs per se. For LAT participants, happiness proved to be the easiest emotion to be recognized: in line with previous studies we found a happy face advantage for faces, which for the first time was also found for bodies (happy body advantage). Furthermore, LAT participants recognized sadness better by static faces and fear by PLDs. This advantage for motion kinematics in the recognition of fear was not present in HAT participants, suggesting that (i) emotion recognition is not generally impaired in HAT individuals, (ii) the cues exploited for emotion recognition by LAT and HAT groups are not always the same. These findings are discussed against the background of emotional processing in typically and atypically developed individuals. PMID:26557101

  12. Face Processing: Models For Recognition

    Science.gov (United States)

    Turk, Matthew A.; Pentland, Alexander P.

    1990-03-01

    The human ability to process faces is remarkable. We can identify perhaps thousands of faces learned throughout our lifetime and read facial expression to understand such subtle qualities as emotion. These skills are quite robust, despite sometimes large changes in the visual stimulus due to expression, aging, and distractions such as glasses or changes in hairstyle or facial hair. Computers which model and recognize faces will be useful in a variety of applications, including criminal identification, human-computer interface, and animation. We discuss models for representing faces and their applicability to the task of recognition, and present techniques for identifying faces and detecting eye blinks.

  13. The telltale face: possible mechanisms behind defector and cooperator recognition revealed by emotional facial expression metrics.

    Science.gov (United States)

    Kovács-Bálint, Zsófia; Bereczkei, Tamás; Hernádi, István

    2013-11-01

    In this study, we investigated the role of facial cues in cooperator and defector recognition. First, a face image database was constructed from pairs of full-face portraits of target subjects taken at the moment of decision-making in a prisoner's dilemma game (PDG) and in a preceding neutral task. Image pairs with no deficiencies (n = 67) were standardized for orientation and luminance. Then, confidence in defector and cooperator recognition was tested with image rating in a different group of lay judges (n = 62). Results indicate that (1) defectors were better recognized (58% vs. 47%), and (2) they looked different from cooperators. In the facial micro-expression analysis, defection was strongly linked with depressed lower lips and less widely opened eyes. A significant correlation was found between the intensity of the micro-mimics and the rating of images on the cooperator-defector dimension. In summary, facial expressions can be considered reliable indicators of momentary social dispositions in the PDG. Females may exhibit an evolutionarily based overestimation bias in detecting social visual cues of the defector face.

  14. The Cambridge Mindreading Face-Voice Battery for Children (CAM-C): complex emotion recognition in children with and without autism spectrum conditions.

    Science.gov (United States)

    Golan, Ofer; Sinai-Gavrilov, Yana; Baron-Cohen, Simon

    2015-01-01

    Difficulties in recognizing emotions and mental states are central characteristics of autism spectrum conditions (ASC). However, emotion recognition (ER) studies have focused mostly on recognition of the six 'basic' emotions, usually using still pictures of faces. This study describes a new battery of tasks for testing recognition of nine complex emotions and mental states from video clips of faces and from voice recordings taken from the Mindreading DVD. This battery (the Cambridge Mindreading Face-Voice Battery for Children or CAM-C) was given to 30 high-functioning children with ASC, aged 8 to 11, and to 25 matched controls. The ASC group scored significantly lower than controls on complex ER from faces and voices. In particular, participants with ASC had difficulty with six out of nine complex emotions. Age was positively correlated with all task scores, and verbal IQ was correlated with scores in the voice task. CAM-C scores were negatively correlated with parent-reported level of autism spectrum symptoms. Children with ASC show deficits in recognition of complex emotions and mental states from both facial and vocal expressions. The CAM-C may be a useful test for endophenotypic studies of ASC and is one of the first to use dynamic stimuli as an assay to reveal the ER profile in ASC. It complements the adult version of the CAM Face-Voice Battery, thus providing opportunities for developmental assessment of social cognition in autism.

  15. Handbook of Face Recognition

    CERN Document Server

    Li, Stan Z

    2011-01-01

    This highly anticipated new edition provides a comprehensive account of face recognition research and technology, spanning the full range of topics needed for designing operational face recognition systems. After a thorough introductory chapter, each of the following chapters focuses on a specific topic, reviewing background information, up-to-date techniques, and recent results, as well as offering challenges and future directions. Features: fully updated, revised and expanded, covering the entire spectrum of concepts, methods, and algorithms for automated face detection and recognition systems.

  16. Famous face recognition, face matching, and extraversion.

    Science.gov (United States)

    Lander, Karen; Poyarekar, Siddhi

    2015-01-01

    It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.

  17. An audiovisual emotion recognition system

    Science.gov (United States)

    Han, Yi; Wang, Guoyin; Yang, Yong; He, Kun

    2007-12-01

    Human emotions can be expressed through many bio-symbols; speech and facial expression are two of them. Both are regarded as emotional information, which plays an important role in human-computer interaction. Based on our previous studies on emotion recognition, an audiovisual emotion recognition system is developed and presented in this paper. The system is designed for real-time practice and is supported by several integrated modules. These modules include speech enhancement for eliminating noise, rapid face detection for locating the face in the background image, example-based shape learning for facial feature alignment, and an optical-flow-based tracking algorithm for facial feature tracking. It is known that irrelevant features and high dimensionality of the data can hurt the performance of a classifier. Rough set-based feature selection is a good method for dimension reduction. So 13 of 37 speech features and 10 of 33 facial features are selected to represent emotional information, and 52 audiovisual features are selected after synchronizing the fused speech and video. The experimental results demonstrate that this system performs well in real-time practice and has a high recognition rate. Our results also show that multimodal fused recognition will become the trend of emotion recognition in the future.
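
    Rough-set feature selection needs dedicated machinery, so the sketch below substitutes a generic filter selector to illustrate the same fuse-then-reduce idea: concatenate synchronized speech and facial features, keep an informative subset, then classify. All data are made up; only the 37/33/52 feature counts follow the abstract.

```python
# Fuse two modalities, reduce dimensionality, classify six emotions.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC

rng = np.random.default_rng(1)
speech = rng.normal(size=(200, 37))    # 37 candidate speech features per clip
facial = rng.normal(size=(200, 33))    # 33 candidate facial features per clip
labels = rng.integers(0, 6, size=200)  # six emotion classes

fused = np.hstack([speech, facial])    # synchronized audiovisual feature vector
selector = SelectKBest(mutual_info_classif, k=52).fit(fused, labels)
reduced = selector.transform(fused)    # 52 retained audiovisual features
clf = SVC().fit(reduced, labels)
```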

  18. Multimodal recognition of emotions

    NARCIS (Netherlands)

    Datcu, D.

    2009-01-01

    This thesis proposes algorithms and techniques to be used for automatic recognition of six prototypic emotion categories by computer programs, based on the recognition of facial expressions and emotion patterns in voice. Considering the applicability in real-life conditions, the research is carried

  19. Evaluating music emotion recognition

    DEFF Research Database (Denmark)

    Sturm, Bob L.

    2013-01-01

    A fundamental problem with nearly all work in music genre recognition (MGR) is that evaluation lacks validity with respect to the principal goals of MGR. This problem also occurs in the evaluation of music emotion recognition (MER). Standard approaches to evaluation, though easy to implement, do not reliably differentiate between recognizing genre or emotion from music, or by virtue of confounding factors in signals (e.g., equalization). We demonstrate such problems for evaluating an MER system, and conclude with recommendations.

  20. Evaluating faces on trustworthiness: an extension of systems for recognition of emotions signaling approach/avoidance behaviors.

    Science.gov (United States)

    Todorov, Alexander

    2008-03-01

    People routinely make various trait judgments from facial appearance, and such judgments affect important social outcomes. These judgments are highly correlated with each other, reflecting the fact that valence evaluation permeates trait judgments from faces. Trustworthiness judgments best approximate this evaluation, consistent with evidence about the involvement of the amygdala in the implicit evaluation of face trustworthiness. Based on computer modeling and behavioral experiments, I argue that face evaluation is an extension of functionally adaptive systems for understanding the communicative meaning of emotional expressions. Specifically, in the absence of diagnostic emotional cues, trustworthiness judgments are an attempt to infer behavioral intentions signaling approach/avoidance behaviors. Correspondingly, these judgments are derived from facial features that resemble emotional expressions signaling such behaviors: happiness and anger for the positive and negative ends of the trustworthiness continuum, respectively. The emotion overgeneralization hypothesis can explain highly efficient but not necessarily accurate trait judgments from faces, a pattern that appears puzzling from an evolutionary point of view and also generates novel predictions about brain responses to faces. Specifically, this hypothesis predicts a nonlinear response in the amygdala to face trustworthiness, confirmed in functional magnetic resonance imaging (fMRI) studies, and dissociations between processing of facial identity and face evaluation, confirmed in studies with developmental prosopagnosics. I conclude with some methodological implications for the study of face evaluation, focusing on the advantages of formally modeling representation of faces on social dimensions.

  1. FACE RECOGNITION FROM FRONT-VIEW FACE

    Institute of Scientific and Technical Information of China (English)

    Wu Lifang; Shen Lansun

    2003-01-01

    This letter presents a face normalization algorithm based on a 2-D face model to recognize faces with variant postures from a front-view face. A 2-D face mesh model can be extracted from faces with rotation to the left or right, and the corresponding front-view mesh model can be estimated according to facial symmetry. Then, based on the relationship between the two mesh models, the normalized front-view face is formed by gray-level mapping. Finally, face recognition is performed based on Principal Component Analysis (PCA). Experiments show that better face recognition performance is achieved in this way.

  2. FACE RECOGNITION FROM FRONT-VIEW FACE

    Institute of Scientific and Technical Information of China (English)

    Wu Lifang; Shen Lansun

    2003-01-01

    This letter presents a face normalization algorithm based on a 2-D face model to recognize faces with variant postures from a front-view face. A 2-D face mesh model can be extracted from faces with rotation to the left or right, and the corresponding front-view mesh model can be estimated according to facial symmetry. Then, based on the inner relationship between the two mesh models, the normalized front-view face is formed by gray-level mapping. Finally, face recognition is performed based on Principal Component Analysis (PCA). Experiments show that better face recognition performance is achieved in this way.
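
    The symmetry idea behind this normalization can be sketched crudely: given 2-D landmark positions on a rotated face, estimate frontal positions by equalizing each symmetric landmark pair about the facial midline. The paper's mesh model and gray-level mapping are richer than this toy version, and the landmark layout below is invented.

```python
# Toy frontalization: enforce left/right symmetry of landmark pairs.
import numpy as np

def frontalize_landmarks(points: np.ndarray, pairs) -> np.ndarray:
    """points: (N, 2) landmarks on a rotated face; pairs: (left, right) index pairs."""
    out = points.astype(float).copy()
    midline_x = points[:, 0].mean()  # crude stand-in for the estimated facial axis
    for left, right in pairs:
        # On a frontal face, symmetric landmarks sit at equal distances from the
        # midline and at equal heights; enforce both by averaging.
        d = (abs(out[left, 0] - midline_x) + abs(out[right, 0] - midline_x)) / 2
        out[left, 0], out[right, 0] = midline_x - d, midline_x + d
        out[left, 1] = out[right, 1] = (out[left, 1] + out[right, 1]) / 2
    return out

# Example: eye corners pulled off-centre by a rotation to the right.
pts = np.array([[40.0, 50.0], [78.0, 52.0], [65.0, 80.0]])  # left eye, right eye, nose tip
print(frontalize_landmarks(pts, pairs=[(0, 1)]))
```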

  3. Stereotype Associations and Emotion Recognition

    NARCIS (Netherlands)

    Bijlstra, Gijsbert; Holland, Rob W.; Dotsch, Ron; Hugenberg, Kurt; Wigboldus, Daniel H. J.

    We investigated whether stereotype associations between specific emotional expressions and social categories underlie stereotypic emotion recognition biases. Across two studies, we replicated previously documented stereotype biases in emotion recognition using both dynamic (Study 1) and static

  4. Computer-Assisted Face Processing Instruction Improves Emotion Recognition, Mentalizing, and Social Skills in Students with ASD

    Science.gov (United States)

    Rice, Linda Marie; Wall, Carla Anne; Fogel, Adam; Shic, Frederick

    2015-01-01

    This study examined the extent to which a computer-based social skills intervention called "FaceSay"™ was associated with improvements in affect recognition, mentalizing, and social skills of school-aged children with Autism Spectrum Disorder (ASD). "FaceSay"™ offers students simulated practice with eye gaze, joint attention,…

  5. Face and Emotion Recognition on Commercial Property under EU Data Protection Law

    DEFF Research Database (Denmark)

    Lewinski, Peter; Trzaskowski, Jan; Luzak, Joasia

    2016-01-01

    This paper integrates and cuts through domains of privacy law and biometrics. Specifically, this paper presents a legal analysis on the use of Automated Facial Recognition Systems (the AFRS) in commercial (retail store) settings within the European Union data protection framework. The AFRS… to the technology's potential of becoming a substantial privacy issue. First, this paper introduces the AFRS and EU data protection law. This is followed by an analysis of European Data protection law and its application in relation to the use of the AFRS, including requirements concerning data quality… and legitimate processing of personal data, which, finally, leads to an overview of measures that traders can take to comply with data protection law, including by means of information, consent, and anonymization…

  6. Forensic Face Recognition: A Survey

    NARCIS (Netherlands)

    Ali, Tauseef; Spreeuwers, Luuk; Veldhuis, Raymond; Quaglia, Adamo; Epifano, Calogera M.

    2012-01-01

    Improvements in automatic face recognition over the last two decades have opened up new applications like border control and camera surveillance. A new application field is forensic face recognition. Traditionally, face recognition by human experts has been used in forensics, but now there is a

  7. Cognitive aging explains age-related differences in face-based recognition of basic emotions except for anger and disgust.

    Science.gov (United States)

    Suzuki, Atsunobu; Akiyama, Hiroko

    2013-01-01

    This study aimed at a detailed understanding of the possible dissociable influences of cognitive aging on the recognition of facial expressions of basic emotions (happiness, surprise, fear, anger, disgust, and sadness). The participants were 36 older and 36 young adults. They viewed 96 pictures of facial expressions and were asked to choose one emotion that best described each. Four cognitive tasks measuring the speed of processing and fluid intelligence were also administered, the scores of which were used to compute a composite measure of general cognitive ability. A series of hierarchical regression analyses revealed that age-related deficits in identifying happiness, surprise, fear, and sadness were statistically explained by general cognitive ability, while the differences in anger and disgust were not. This provides clear evidence that age-related cognitive impairment remarkably and differentially affects the recognition of basic emotions, contrary to the common view that cognitive aging has a uniformly minor effect.
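
    The hierarchical-regression logic used in such studies, testing whether an age effect on recognition accuracy survives once a general-cognition composite enters the model, can be sketched with simulated data; the effect sizes below are arbitrary, not the study's.

```python
# Hierarchical regression sketch: does cognition explain the age effect?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 72
age_group = rng.integers(0, 2, n).astype(float)    # 0 = young, 1 = older
cognition = -0.8 * age_group + rng.normal(size=n)  # older -> lower composite
accuracy = 0.5 * cognition + rng.normal(size=n)    # emotion-recognition score

# Step 1: age only. Step 2: add cognition; if the age beta shrinks toward zero,
# general cognitive ability statistically explains the age difference.
step1 = sm.OLS(accuracy, sm.add_constant(age_group)).fit()
step2 = sm.OLS(accuracy, sm.add_constant(np.column_stack([age_group, cognition]))).fit()
print("age beta alone:", round(step1.params[1], 3),
      "| age beta with cognition:", round(step2.params[1], 3))
```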

  8. Computer-Assisted Face Processing Instruction Improves Emotion Recognition, Mentalizing, and Social Skills in Students with ASD.

    Science.gov (United States)

    Rice, Linda Marie; Wall, Carla Anne; Fogel, Adam; Shic, Frederick

    2015-07-01

    This study examined the extent to which a computer-based social skills intervention called FaceSay was associated with improvements in affect recognition, mentalizing, and social skills of school-aged children with Autism Spectrum Disorder (ASD). FaceSay offers students simulated practice with eye gaze, joint attention, and facial recognition skills. This randomized controlled trial included school-aged children meeting educational criteria for autism (N = 31). Results demonstrated that participants who received the intervention improved their affect recognition and mentalizing skills, as well as their social skills. These findings suggest that, by targeting face-processing skills, computer-based interventions may produce changes in broader cognitive and social-skills domains in a cost- and time-efficient manner.

  9. Does cortisol modulate emotion recognition and empathy?

    Science.gov (United States)

    Duesenberg, Moritz; Weber, Juliane; Schulze, Lars; Schaeuffele, Carmen; Roepke, Stefan; Hellmann-Regen, Julian; Otte, Christian; Wingenfeld, Katja

    2016-04-01

    Emotion recognition and empathy are important aspects in the interaction with and understanding of other people's behaviors and feelings. The human environment comprises stressful situations that impact social interactions on a daily basis. The aim of this study was to examine the effects of the stress hormone cortisol on emotion recognition and empathy. In this placebo-controlled study, 40 healthy men and 40 healthy women (mean age 24.5 years) received either 10 mg of hydrocortisone or placebo. We used the Multifaceted Empathy Test to measure emotional and cognitive empathy. Furthermore, we examined emotion recognition from facial expressions, which contained two emotions (anger and sadness) and two emotion intensities (40% and 80%). We did not find a main effect of treatment or sex on either empathy or emotion recognition, but we did find a sex × emotion interaction on emotion recognition. The main result was a four-way interaction on emotion recognition including treatment, sex, emotion and task difficulty. At 40% task difficulty, women recognized angry faces better than men in the placebo condition. Furthermore, in the placebo condition, men recognized sadness better than anger. At 80% task difficulty, men and women performed equally well in recognizing sad faces, but men performed worse compared to women with regard to angry faces. Apparently, our results did not support the hypothesis that increases in cortisol concentration alone influence empathy and emotion recognition in healthy young individuals. However, sex and task difficulty appear to be important variables in emotion recognition from facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. The hierarchical brain network for face recognition.

    Science.gov (United States)

    Zhen, Zonglei; Fang, Huizhen; Liu, Jia

    2013-01-01

    Numerous functional magnetic resonance imaging (fMRI) studies have identified multiple cortical regions that are involved in face processing in the human brain. However, few studies have characterized the face-processing network as a functioning whole. In this study, we used fMRI to identify face-selective regions in the entire brain and then explore the hierarchical structure of the face-processing network by analyzing functional connectivity among these regions. We identified twenty-five regions mainly in the occipital, temporal and frontal cortex that showed a reliable response selective to faces (versus objects) across participants and across scan sessions. Furthermore, these regions were clustered into three relatively independent sub-networks in a face-recognition task on the basis of the strength of functional connectivity among them. The functionality of the sub-networks likely corresponds to the recognition of individual identity, retrieval of semantic knowledge and representation of emotional information. Interestingly, when the task was switched to object recognition from face recognition, the functional connectivity between the inferior occipital gyrus and the rest of the face-selective regions was significantly reduced, suggesting that this region may serve as an entry node in the face-processing network. In sum, our study provides empirical evidence for cognitive and neural models of face recognition and helps elucidate the neural mechanisms underlying face recognition at the network level.

  11. The influence of combined cognitive plus social-cognitive training on amygdala response during face emotion recognition in schizophrenia.

    Science.gov (United States)

    Hooker, Christine I; Bruce, Lori; Fisher, Melissa; Verosky, Sara C; Miyakawa, Asako; D'Esposito, Mark; Vinogradov, Sophia

    2013-08-30

    Both cognitive and social-cognitive deficits impact functional outcome in schizophrenia. Cognitive remediation studies indicate that targeted cognitive and/or social-cognitive training improves behavioral performance on trained skills. However, the neural effects of training in schizophrenia and their relation to behavioral gains are largely unknown. This study tested whether a 50-h intervention which included both cognitive and social-cognitive training would influence neural mechanisms that support social cognition. Schizophrenia participants completed a computer-based intervention of either auditory-based cognitive training (AT) plus social-cognition training (SCT) (N=11) or non-specific computer games (CG) (N=11). Assessments included a functional magnetic resonance imaging (fMRI) task of facial emotion recognition, and behavioral measures of cognition, social cognition, and functional outcome. The fMRI results showed the predicted group-by-time interaction. Results were strongest for emotion recognition of happy, surprise and fear: relative to CG participants, AT+SCT participants showed a neural activity increase in bilateral amygdala, right putamen and right medial prefrontal cortex. Across all participants, pre-to-post intervention neural activity increase in these regions predicted behavioral improvement on an independent emotion perception measure (MSCEIT: Perceiving Emotions). Among AT+SCT participants alone, neural activity increase in right amygdala predicted behavioral improvement in emotion perception. The findings indicate that combined cognition and social-cognition training improves neural systems that support social-cognition skills.

  12. Adjunctive selective estrogen receptor modulator increases neural activity in the hippocampus and inferior frontal gyrus during emotional face recognition in schizophrenia.

    Science.gov (United States)

    Ji, E; Weickert, C S; Lenroot, R; Kindler, J; Skilleter, A J; Vercammen, A; White, C; Gur, R E; Weickert, T W

    2016-05-03

    Estrogen has been implicated in the development and course of schizophrenia, with most evidence suggesting a neuroprotective effect. Treatment with raloxifene, a selective estrogen receptor modulator, can reduce symptom severity, improve cognition and normalize brain activity during learning in schizophrenia. People with schizophrenia are especially impaired in the identification of negative facial emotions. The present study was designed to determine the extent to which adjunctive raloxifene treatment would alter abnormal neural activity during angry facial emotion recognition in schizophrenia. Twenty people with schizophrenia (12 men, 8 women) participated in a 13-week, randomized, double-blind, placebo-controlled, crossover trial of adjunctive raloxifene treatment (120 mg per day orally) and performed a facial emotion recognition task during functional magnetic resonance imaging after each treatment phase. Two-sample t-tests in regions of interest selected a priori were performed to assess activation differences between raloxifene and placebo conditions during the recognition of angry faces. Adjunctive raloxifene significantly increased activation in the right hippocampus and left inferior frontal gyrus compared with the placebo condition (family-wise error corrected). These findings support the hypothesis that estrogen plays a modifying role in schizophrenia and show that adjunctive raloxifene treatment may reverse abnormal neural activity during facial emotion recognition, which is relevant to impaired social functioning in men and women with schizophrenia.

  13. Adjunctive selective estrogen receptor modulator increases neural activity in the hippocampus and inferior frontal gyrus during emotional face recognition in schizophrenia

    Science.gov (United States)

    Ji, E; Weickert, C S; Lenroot, R; Kindler, J; Skilleter, A J; Vercammen, A; White, C; Gur, R E; Weickert, T W

    2016-01-01

    Estrogen has been implicated in the development and course of schizophrenia, with most evidence suggesting a neuroprotective effect. Treatment with raloxifene, a selective estrogen receptor modulator, can reduce symptom severity, improve cognition and normalize brain activity during learning in schizophrenia. People with schizophrenia are especially impaired in the identification of negative facial emotions. The present study was designed to determine the extent to which adjunctive raloxifene treatment would alter abnormal neural activity during angry facial emotion recognition in schizophrenia. Twenty people with schizophrenia (12 men, 8 women) participated in a 13-week, randomized, double-blind, placebo-controlled, crossover trial of adjunctive raloxifene treatment (120 mg per day orally) and performed a facial emotion recognition task during functional magnetic resonance imaging after each treatment phase. Two-sample t-tests in regions of interest selected a priori were performed to assess activation differences between raloxifene and placebo conditions during the recognition of angry faces. Adjunctive raloxifene significantly increased activation in the right hippocampus and left inferior frontal gyrus compared with the placebo condition (family-wise error corrected). These findings support the hypothesis that estrogen plays a modifying role in schizophrenia and show that adjunctive raloxifene treatment may reverse abnormal neural activity during facial emotion recognition, which is relevant to impaired social functioning in men and women with schizophrenia. PMID:27138794

  14. [Comparative studies of face recognition].

    Science.gov (United States)

    Kawai, Nobuyuki

    2012-07-01

    Every human being is proficient in face recognition. However, the reason for and the manner in which humans have attained such an ability remain unknown. These questions can be best answered through comparative studies of face recognition in non-human animals. Studies in both primates and non-primates show that not only primates, but also non-primates possess the ability to extract information from their conspecifics and from human experimenters. Neural specialization for face recognition is shared with mammals in distant taxa, suggesting that face recognition evolved earlier than the emergence of mammals. A recent study indicated that a social insect, the golden paper wasp, can distinguish the faces of its conspecifics, whereas a closely related species, which has a less complex social lifestyle with just one queen ruling a nest of underlings, did not show strong face recognition for its conspecifics. Social complexity and the need to differentiate between one another likely led humans to evolve their face recognition abilities.

  15. [Face recognition in patients with schizophrenia].

    Science.gov (United States)

    Doi, Hirokazu; Shinohara, Kazuyuki

    2012-07-01

    It is well known that patients with schizophrenia show severe deficiencies in social communication skills. These deficiencies are believed to be partly derived from abnormalities in face recognition. However, the exact nature of these abnormalities exhibited by schizophrenic patients with respect to face recognition has yet to be clarified. In the present paper, we review the main findings on face recognition deficiencies in patients with schizophrenia, particularly focusing on abnormalities in the recognition of facial expression and gaze direction, which are the primary sources of information of others' mental states. The existing studies reveal that the abnormal recognition of facial expression and gaze direction in schizophrenic patients is attributable to impairments in both perceptual processing of visual stimuli, and cognitive-emotional responses to social information. Furthermore, schizophrenic patients show malfunctions in distributed neural regions, ranging from the fusiform gyrus recruited in the structural encoding of facial stimuli, to the amygdala which plays a primary role in the detection of the emotional significance of stimuli. These findings were obtained from research in patient groups with heterogeneous characteristics. Because previous studies have indicated that impairments in face recognition in schizophrenic patients might vary according to the types of symptoms, it is of primary importance to compare the nature of face recognition deficiencies and the impairments of underlying neural functions across sub-groups of patients.

  16. Development of cerebral lateralisation for recognition of emotions in chimeric faces in children aged 5 to 11.

    Science.gov (United States)

    Workman, Lance; Chilvers, Louise; Yeomans, Heather; Taylor, Sandie

    2006-11-01

    In contrast to research into the development of language laterality, there has been relatively little research into the development of lateralisation of emotional processing. If language lateralisation begins in childhood and is complete by puberty (Lenneberg, 1967) it seems reasonable that the lateralisation of the perception of emotions might also occur during this period. In this study a split field chimeric faces test using the six universal facial expressions proposed by Ekman and Friesen (1971), an emotion in the eyes test, and a situational cartoon test were administered to three groups of children aged 5/6, 7/8, and 10/11. No overall hemispace advantage was seen for the 5/6-year-old group, but by the age of 10/11 a clear left hemispace advantage (right hemisphere) was found for all six emotions. Such a pattern is comparable to a previous study that made use of adults on this task (Workman, Peters, & Taylor, 2000b). Moreover, a significant positive correlation between a child's ability to recognise emotions in cartoon situations and their left hemispatial advantage score was uncovered. Finally, a significant positive correlation between a child's ability to recognise emotions in the eyes of others and their left hemispatial advantage score was also uncovered. These findings are taken as evidence that there may be a relationship between the development of emotional processing in the right hemisphere and a child's emerging ability to perceive or attend to the emotional states of others. Results are discussed in relation to the child's development of a theory of mind.

  17. Study of Face Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Sangeeta Kaushik

    2014-12-01

    Full Text Available A study of both face recognition and detection techniques is carried out using algorithms such as Principal Component Analysis (PCA), Kernel Principal Component Analysis (KPCA), Linear Discriminant Analysis (LDA) and Line Edge Map (LEM). These algorithms show different rates of accuracy under different conditions. The automatic recognition of human faces presents a challenge to the pattern recognition community. Typically, human faces share a common structure, differing only subtly in shape from person to person. Furthermore, changes in lighting conditions, facial expressions and pose variations further complicate face recognition, making it one of the difficult problems in pattern analysis.

  18. Genetic specificity of face recognition.

    Science.gov (United States)

    Shakeshaft, Nicholas G; Plomin, Robert

    2015-10-13

    Specific cognitive abilities in diverse domains are typically found to be highly heritable and substantially correlated with general cognitive ability (g), both phenotypically and genetically. Recent twin studies have found the ability to memorize and recognize faces to be an exception, being similarly heritable but phenotypically substantially uncorrelated both with g and with general object recognition. However, the genetic relationships between face recognition and other abilities (the extent to which they share a common genetic etiology) cannot be determined from phenotypic associations. In this, to our knowledge, first study of the genetic associations between face recognition and other domains, 2,000 18- and 19-year-old United Kingdom twins completed tests assessing their face recognition, object recognition, and general cognitive abilities. Results confirmed the substantial heritability of face recognition (61%), and multivariate genetic analyses found that most of this genetic influence is unique and not shared with other cognitive abilities.

  19. Facial emotion recognition in remitted depressed women.

    Science.gov (United States)

    Biyik, Utku; Keskin, Duygu; Oguz, Kaya; Akdeniz, Fisun; Gonul, Ali Saffet

    2015-10-01

    Although major depressive disorder (MDD) is primarily characterized by mood symptoms, depressed patients show impairments in facial emotion recognition for many of the basic emotions (anger, fear, happiness, surprise, disgust and sadness). On the other hand, the data on remitted MDD (rMDD) patients are inconsistent, and it is not clear whether those impairments persist in remission. To extend the current findings, we administered a facial emotion recognition test to a group of remitted depressed women and compared their results to those of controls. Analysis of variance showed a significant emotion and group interaction, and in the post hoc analyses, rMDD patients had a higher accuracy rate for the recognition of sadness compared to controls. There were no differences in reaction time between patients and controls across all the basic emotions. The higher recognition rates for sad faces in rMDD patients might contribute to the impairments in social communication and to the prognosis of the disease.

  20. Forensic Face Recognition: A Survey

    NARCIS (Netherlands)

    Ali, Tauseef; Veldhuis, Raymond; Spreeuwers, Luuk

    2010-01-01

    Besides a few papers that focus on the forensic aspects of automatic face recognition, not much has been published about it, in contrast to the literature on developing new techniques and methodologies for biometric face recognition. In this report, we review forensic facial identification which is t

  1. Side-View Face Recognition

    NARCIS (Netherlands)

    Santemiz, Pinar; Spreeuwers, Luuk J.; Veldhuis, Raymond N.J.; Biggelaar , van den Olivier

    2011-01-01

    As a widely used biometric, face recognition has many advantages, such as being non-intrusive, natural and passive. On the other hand, in real-life scenarios with uncontrolled environments, pose variation up to side-view positions makes face recognition a challenging task. In this paper we discuss th

  2. Comparing Face Detection and Recognition Techniques

    OpenAIRE

    Korra, Jyothi

    2016-01-01

    This paper implements and compares different techniques for face detection and recognition. The first task is finding where the face is located in an image (face detection); the second is identifying the person (face recognition). We study three techniques in this paper: face detection using a self-organizing map (SOM), face recognition by projection and nearest neighbor, and face recognition using an SVM.
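
    Two of the three techniques named above, recognition by projection plus nearest neighbour and recognition with an SVM, can be sketched with scikit-learn (a SOM detector would need more machinery than fits here). The data are random stand-ins, so the scores only illustrate the flow.

```python
# Project flattened face images into a low-dimensional subspace, then classify.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 64 * 64))  # 300 flattened 64x64 face images (stand-ins)
y = rng.integers(0, 30, size=300)    # 30 identities

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
proj = PCA(n_components=40).fit(X_tr)                       # the "projection" step
knn = KNeighborsClassifier(n_neighbors=1).fit(proj.transform(X_tr), y_tr)
svm = SVC().fit(proj.transform(X_tr), y_tr)
print("1-NN:", knn.score(proj.transform(X_te), y_te),
      "SVM:", svm.score(proj.transform(X_te), y_te))
```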

  3. Holistic processing predicts face recognition.

    Science.gov (United States)

    Richler, Jennifer J; Cheung, Olivia S; Gauthier, Isabel

    2011-04-01

    The concept of holistic processing is a cornerstone of face-recognition research. In the study reported here, we demonstrated that holistic processing predicts face-recognition abilities on the Cambridge Face Memory Test and on a perceptual face-identification task. Our findings validate a large body of work that relies on the assumption that holistic processing is related to face recognition. These findings also reconcile the study of face recognition with the perceptual-expertise work it inspired; such work links holistic processing of objects with people's ability to individuate them. Our results differ from those of a recent study showing no link between holistic processing and face recognition. This discrepancy can be attributed to the use in prior research of a popular but flawed measure of holistic processing. Our findings salvage the central role of holistic processing in face recognition and cast doubt on a subset of the face-perception literature that relies on a problematic measure of holistic processing.

  4. Visual perception and processing in children with 22q11.2 deletion syndrome: associations with social cognition measures of face identity and emotion recognition.

    Science.gov (United States)

    McCabe, Kathryn L; Marlin, Stuart; Cooper, Gavin; Morris, Robin; Schall, Ulrich; Murphy, Declan G; Murphy, Kieran C; Campbell, Linda E

    2016-01-01

    People with 22q11.2 deletion syndrome (22q11DS) have difficulty processing social information, including facial identity and emotion. However, difficulties with visual and attentional processes may play a role in the difficulties observed with these social cognitive skills. A cross-sectional study investigated visual perception and processing as well as facial processing abilities in a group of 49 children and adolescents with 22q11DS and 30 age- and socio-economic-status-matched healthy sibling controls, using the Birmingham Object Recognition Battery and face processing sub-tests from the MRC face processing skills battery. The 22q11DS group demonstrated poorer performance on all measures of visual perception and processing, with the greatest impairment on perceptual processes relating to form perception as well as object recognition and memory. In addition, form perception was found to make a significant and unique contribution to higher-order social-perceptual processing (face identity) in the 22q11DS group. The findings provide evidence of impaired visual perception and processing capabilities in 22q11DS. In turn, these were found to influence cognitive skills needed for social processes such as facial identity recognition in children with 22q11DS.

  5. Effective indexing for face recognition

    Science.gov (United States)

    Sochenkov, I.; Sochenkova, A.; Vokhmintsev, A.; Makovetskii, A.; Melnikov, A.

    2016-09-01

    Face recognition is one of the most important tasks in computer vision and pattern recognition. Face recognition is useful for security systems to provide safety. In some situations it is necessary to identify a person among many others. For such cases, this work presents a new approach to data indexing, which provides fast retrieval in big image collections. Data indexing in this research consists of five steps. First, we detect the area containing the face; second, we align the face; then we detect the areas containing the eyes and eyebrows, the nose, and the mouth. After that we find the key points of each area using different descriptors, and finally we index these descriptors with the help of a quantization procedure. An experimental analysis of this method is performed. This paper shows that the method performs at the level of state-of-the-art face recognition methods, while also returning results quickly, which is important for systems that provide safety.
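
    The five indexing steps end in descriptor quantization. One common way to realize that step, sketched below with random vectors standing in for real keypoint descriptors, is a k-means visual vocabulary feeding an inverted index; the paper's actual descriptors and quantizer may differ.

```python
# Quantize local descriptors into "visual words" and build an inverted index.
import numpy as np
from collections import defaultdict
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# 100 face images, each with 50 local descriptors of dimension 32 (stand-ins).
descriptors = {img_id: rng.normal(size=(50, 32)) for img_id in range(100)}

codebook = KMeans(n_clusters=64, n_init=3, random_state=0).fit(
    np.vstack(list(descriptors.values())))

inverted = defaultdict(set)  # visual word -> images containing it
for img_id, desc in descriptors.items():
    for word in codebook.predict(desc):
        inverted[word].add(img_id)

# Retrieval: look up candidate images sharing visual words with a query face.
query_words = codebook.predict(rng.normal(size=(50, 32)))
candidates = set().union(*(inverted[w] for w in query_words))
```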

  6. Oxytocin improves emotion recognition for older males.

    Science.gov (United States)

    Campbell, Anna; Ruffman, Ted; Murray, Janice E; Glue, Paul

    2014-10-01

    Older adults (≥60 years) perform worse than young adults (18-30 years) when recognizing facial expressions of emotion. The hypothesized cause of these changes might be declines in neurotransmitters that could affect information processing within the brain. In the present study, we examined the neuropeptide oxytocin that functions to increase neurotransmission. Research suggests that oxytocin benefits the emotion recognition of less socially able individuals. Men tend to have lower levels of oxytocin and older men tend to have worse emotion recognition than older women; therefore, there is reason to think that older men will be particularly likely to benefit from oxytocin. We examined this idea using a double-blind design, testing 68 older and 68 young adults randomly allocated to receive oxytocin nasal spray (20 international units) or placebo. Forty-five minutes afterward they completed an emotion recognition task assessing labeling accuracy for angry, disgusted, fearful, happy, neutral, and sad faces. Older males receiving oxytocin showed improved emotion recognition relative to those taking placebo. No differences were found for older females or young adults. We hypothesize that oxytocin facilitates emotion recognition by improving neurotransmission in the group with the worst emotion recognition.

  7. Emotion Recognition using Speech Features

    CERN Document Server

    Rao, K Sreenivasa

    2013-01-01

    "Emotion Recognition Using Speech Features" covers emotion-specific features present in speech and a discussion of suitable models for capturing emotion-specific information for distinguishing different emotions. The content of this book is important for designing and developing natural and sophisticated speech systems. Drs. Rao and Koolagudi lead a discussion of how emotion-specific information is embedded in speech and how to acquire emotion-specific knowledge using appropriate statistical models. Additionally, the authors provide information about using evidence derived from various features and models. The acquired emotion-specific knowledge is useful for synthesizing emotions. Discussion includes global and local prosodic features at syllable, word and phrase levels, helpful for capturing emotion-discriminative information; the use of complementary evidence obtained from excitation sources, vocal tract systems and prosodic features in order to enhance emotion recognition performance; and pro...

  8. Optimizing Face Recognition Using PCA

    Directory of Open Access Journals (Sweden)

    Manal Abdullah

    2012-03-01

    Full Text Available Principal Component Analysis (PCA) is a classical feature extraction and data representation technique widely used in pattern recognition. It is one of the most successful techniques in face recognition, but it has the drawback of high computational cost, especially for large databases. This paper conducts a study to optimize the time complexity of PCA (eigenfaces) in a way that does not affect recognition performance. The authors minimize the number of participating eigenvectors, which consequently decreases the computational time. A comparison is made between the recognition time of the original algorithm and that of the enhanced algorithm. The performance of the original and the enhanced algorithm is tested on the face94 face database. Experimental results show that the recognition time is reduced by 35% by applying the proposed enhanced algorithm. DET curves are used to illustrate the experimental results.

  9. Optimizing Face Recognition Using PCA

    Directory of Open Access Journals (Sweden)

    Manal Abdullah

    2012-04-01

    Full Text Available Principal Component Analysis (PCA) is a classical feature extraction and data representation technique widely used in pattern recognition. It is one of the most successful techniques in face recognition, but it has the drawback of high computational cost, especially for large databases. This paper conducts a study to optimize the time complexity of PCA (eigenfaces) in a way that does not affect recognition performance. The authors minimize the number of participating eigenvectors, which consequently decreases the computational time. A comparison is made between the recognition time of the original algorithm and that of the enhanced algorithm. The performance of the original and the enhanced algorithm is tested on the face94 face database. Experimental results show that the recognition time is reduced by 35% by applying the proposed enhanced algorithm. DET curves are used to illustrate the experimental results.
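
    The optimization both records describe, retaining fewer eigenvectors (eigenfaces) to cut recognition time, is easy to demonstrate in outline. The sketch below times nearest-neighbour matching at several subspace sizes on random stand-in data; the 35% reduction and the face94 results are the paper's own and are not reproduced here.

```python
# Time eigenface matching as the number of retained eigenvectors shrinks.
import time
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 64 * 64))  # stand-in gallery of flattened face images
y = rng.integers(0, 20, size=400)    # 20 identities

for k in (100, 40, 10):              # fewer retained eigenvectors -> faster matching
    pca = PCA(n_components=k).fit(X)  # the "eigenfaces" subspace
    clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X), y)
    t0 = time.perf_counter()
    clf.predict(pca.transform(X[:50]))  # time recognition of 50 probe images
    print(k, "eigenvectors:", round(time.perf_counter() - t0, 4), "s")
```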

  10. Similarity measures for face recognition

    CERN Document Server

    Vezzetti, Enrico

    2015-01-01

    Face recognition has several applications, including in security (authentication and identification of device users and criminal suspects) and in medicine (corrective surgery and diagnosis). Facial recognition programs rely on algorithms that can compare and compute the similarity between two sets of images. This eBook explains some of the similarity measures used in facial recognition systems in a single volume. Readers will learn about various measures, including Minkowski distances, Mahalanobis distances, Hausdorff distances and cosine-based distances, among other methods. The book also summarizes errors that may occur in face recognition methods. Computer scientists "facing face" and looking to select and test different methods of computing similarities will benefit from this book. The book is also a useful tool for students undertaking computer vision courses.
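
    Most of the measures the book covers are available directly in SciPy; a minimal sketch over stand-in feature vectors and landmark point sets:

```python
# Common face-matching similarity/distance measures, computed with SciPy.
import numpy as np
from scipy.spatial.distance import cosine, directed_hausdorff, mahalanobis, minkowski

a = np.array([1.0, 2.0, 3.0])
b = np.array([2.0, 0.0, 4.0])
print(minkowski(a, b, p=2))   # Euclidean distance as the p=2 Minkowski case

# Mahalanobis needs an inverse covariance; estimate it from stand-in data.
sample = np.random.default_rng(5).normal(size=(100, 3))
VI = np.linalg.inv(np.cov(sample, rowvar=False))
print(mahalanobis(a, b, VI))  # covariance-aware distance

print(cosine(a, b))           # 1 - cosine similarity (angle-based)

# Hausdorff distance compares point sets, e.g. sets of facial landmarks.
A = np.array([[0.0, 0.0], [1.0, 1.0]])
B = np.array([[0.5, 0.5]])
print(max(directed_hausdorff(A, B)[0], directed_hausdorff(B, A)[0]))
```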

  12. FILTWAM and Voice Emotion Recognition

    NARCIS (Netherlands)

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2014-01-01

    This paper introduces the voice emotion recognition part of our framework for improving learning through webcams and microphones (FILTWAM). This framework enables multimodal emotion recognition of learners during game-based learning. The main goal of this study is to validate the use of microphone

  13. Recognition of emotion in others

    NARCIS (Netherlands)

    Frijda, N.H.; Paglieri, F.

    2012-01-01

    This chapter argues that recognition of emotion has a simple basis and a highly complex edifice above it. Its basis is formed by catching intent from expressive and other emotional behavior, using elementary principles of perceptual integration. In intent recognition, mirror neurons under particular

  14. Pilgrims Face Recognition Dataset -- HUFRD

    OpenAIRE

    Aly, Salah A.

    2012-01-01

    In this work, we define a new pilgrims face recognition dataset, called the HUFRD dataset. The newly developed dataset presents various pilgrims' images taken from outside the Holy Masjid El-Harram in Makkah during the 2011-2012 Hajj and Umrah seasons. This dataset will be used to test our developed facial recognition and detection algorithms, as well as to assist in the missing-and-found recognition system \cite{crowdsensing}.

  15. Facial Emotion Recognition in Bipolar Disorder and Healthy Aging.

    Science.gov (United States)

    Altamura, Mario; Padalino, Flavia A; Stella, Eleonora; Balzotti, Angela; Bellomo, Antonello; Palumbo, Rocco; Di Domenico, Alberto; Mammarella, Nicola; Fairfield, Beth

    2016-03-01

    Emotional face recognition is impaired in bipolar disorder, but it is not clear whether this is specific for the illness. Here, we investigated how aging and bipolar disorder influence dynamic emotional face recognition. Twenty older adults, 16 bipolar patients, and 20 control subjects performed a dynamic affective facial recognition task and a subsequent rating task. Participants pressed a key as soon as they were able to discriminate whether the neutral face was assuming a happy or angry facial expression and then rated the intensity of each facial expression. Results showed that older adults recognized happy expressions faster, whereas bipolar patients recognized angry expressions faster. Furthermore, both groups rated emotional faces more intensely than did the control subjects. This study is one of the first to compare how aging and clinical conditions influence emotional facial recognition and underlines the need to consider the role of specific and common factors in emotional face recognition.

  16. Age-invariant face recognition.

    Science.gov (United States)

    Park, Unsang; Tong, Yiying; Jain, Anil K

    2010-05-01

    One of the challenges in automatic face recognition is to achieve temporal invariance. In other words, the goal is to come up with a representation and matching scheme that is robust to changes due to facial aging. Facial aging is a complex process that affects both the 3D shape of the face and its texture (e.g., wrinkles). These shape and texture changes degrade the performance of automatic face recognition systems. However, facial aging has not received substantial attention compared to other facial variations due to pose, lighting, and expression. We propose a 3D aging modeling technique and show how it can be used to compensate for the age variations to improve the face recognition performance. The aging modeling technique adapts view-invariant 3D face models to the given 2D face aging database. The proposed approach is evaluated on three different databases (i.e., FG-NET, MORPH, and BROWNS) using FaceVACS, a state-of-the-art commercial face recognition engine.

  17. Steroids facing emotions

    NARCIS (Netherlands)

    Putman, P.L.J.

    2006-01-01

    The studies reported in this thesis have been performed to gain a better understanding about motivational mediators of selective attention and memory for emotionally relevant stimuli, and about the roles that some steroid hormones play in regulation of human motivation and emotion. The stimuli used

  18. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. This book also focuses on the theoretical derivation, the system framework and experiments involving kernel-based face recognition. Included within are algorithms of kernel-based face recognition, and also the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new

  19. Covert face recognition relies on affective valence in congenital prosopagnosia.

    Science.gov (United States)

    Bate, Sarah; Haslam, Catherine; Jansari, Ashok; Hodgson, Timothy L

    2009-06-01

    Dominant accounts of covert recognition in prosopagnosia assume subthreshold activation of face representations created prior to onset of the disorder. Yet, such accounts cannot explain covert recognition in congenital prosopagnosia, where the impairment is present from birth. Alternatively, covert recognition may rely on affective valence, yet no study has explored this possibility. The current study addressed this issue in 3 individuals with congenital prosopagnosia, using measures of the scanpath to indicate recognition. Participants were asked to memorize 30 faces paired with descriptions of aggressive, nice, or neutral behaviours. In a later recognition test, eye movements were monitored while participants discriminated studied from novel faces. Sampling was reduced for studied-nice compared to studied-aggressive faces, and performance for studied-neutral and novel faces fell between these two conditions. This pattern of findings suggests that (a) positive emotion can facilitate processing in prosopagnosia, and (b) covert recognition may rely on emotional valence rather than familiarity.

  20. Multibiometrics for face recognition

    NARCIS (Netherlands)

    Veldhuis, Raymond; Deravi, Farzin; Tao, Qian

    2008-01-01

    Fusion is a popular practice to combine multiple sources of biometric information to achieve systems with greater performance and flexibility. In this paper various approaches to fusion within a multibiometrics context are considered and an application to the fusion of 2D and 3D face information is

  2. Brain Structural Correlates of Emotion Recognition in Psychopaths

    Science.gov (United States)

    Batalla, Iolanda; Kosson, David; Menchón, José M; Pifarré, Josep; Bosque, Javier; Cardoner, Narcís; Soriano-Mas, Carles

    2016-01-01

    Individuals with psychopathy present deficits in the recognition of facial emotional expressions. However, the nature and extent of these alterations are not fully understood. Furthermore, available data on the functional neural correlates of emotional face recognition deficits in adult psychopaths have provided mixed results. In this context, emotional face morphing tasks may be suitable for clarifying mild and emotion-specific impairments in psychopaths. Likewise, studies exploring corresponding anatomical correlates may be useful for disentangling available neurofunctional evidence based on the alleged neurodevelopmental roots of psychopathic traits. We used Voxel-Based Morphometry and a morphed emotional face expression recognition task to evaluate the relationship between regional gray matter (GM) volumes and facial emotion recognition deficits in male psychopaths. In comparison to male healthy controls, psychopaths showed deficits in the recognition of sad, happy and fear emotional expressions. In subsequent brain imaging analyses psychopaths with better recognition of facial emotional expressions showed higher volume in the prefrontal cortex (orbitofrontal, inferior frontal and dorsomedial prefrontal cortices), somatosensory cortex, anterior insula, cingulate cortex and the posterior lobe of the cerebellum. Amygdala and temporal lobe volumes contributed to better emotional face recognition in controls only. These findings provide evidence suggesting that variability in brain morphometry plays a role in accounting for psychopaths’ impaired ability to recognize emotional face expressions, and may have implications for comprehensively characterizing the empathy and social cognition dysfunctions typically observed in this population of subjects. PMID:27175777

  4. Automated Face Recognition System

    Science.gov (United States)

    1992-12-01

    /* OCR-recovered code fragment: Euclidean distance between the test face's
       eigenface coefficients and those of each training face */
    for (i = 0; i < num_train_faces; i++) {
        temp = 0;
        for (j = 0; j < num_coefs; j++)
            temp = (atest[0].feature_vec[j] - btrain[i].feature_vec[j]) *
                   (atest[0].feature_vec[j] - btrain[i].feature_vec[j]) + temp;
        btrain[i].distance = sqrt((double) temp);
    }
    /**** Store the k-nearest neighbors rank ****/
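
    The fragment is the distance step of a k-nearest-neighbour eigenface matcher. A compact modern equivalent, as a sketch (array names are assumptions, not taken from the report):

        import numpy as np

        def k_nearest(test_vec, train_vecs, k):
            """Rank training faces by Euclidean distance in coefficient space."""
            d = np.linalg.norm(train_vecs - test_vec, axis=1)
            order = np.argsort(d)          # ascending distance
            return order[:k], d[order[:k]]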

  5. Multithread Face Recognition in Cloud

    Directory of Open Access Journals (Sweden)

    Dakshina Ranjan Kisku

    2016-01-01

    Full Text Available Faces are highly challenging and dynamic objects that are employed as biometric evidence in identity verification. Recently, biometric systems have proven to be essential security tools, in which bulk matching of enrolled people against watch lists is performed every day. To facilitate this process, organizations with large computing facilities need to maintain them. To minimize the burden of maintaining these costly facilities for enrollment and recognition, multinational companies can transfer this responsibility to third-party vendors who maintain cloud computing infrastructures for recognition. In this paper, we showcase cloud computing-enabled face recognition, which utilizes PCA-characterized face instances and reduces the number of invariant SIFT points that are extracted from each face. To achieve high interclass and low intraclass variances, a set of six PCA-characterized face instances is computed on columns of each face image by varying the number of principal components. Extracted SIFT keypoints are fused using sum and max fusion rules. A novel cohort selection technique is applied to increase the total performance. The proposed proto-model is tested on the BioID and FEI face databases, and the efficacy of the system is proven based on the obtained results. We also compare the proposed method with other well-known methods.
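
    The sum and max fusion rules mentioned above combine match scores from the different face instances after bringing them onto a common scale. A minimal sketch, assuming one array of match scores per face instance (names are illustrative):

        import numpy as np

        def min_max_normalize(scores):
            # Map each matcher's scores into [0, 1] so they are comparable.
            s = np.asarray(scores, dtype=float)
            return (s - s.min()) / (s.max() - s.min() + 1e-12)

        def fuse(score_lists, rule="sum"):
            """score_lists: one score array per face instance; returns fused scores."""
            stacked = np.vstack([min_max_normalize(s) for s in score_lists])
            return stacked.sum(axis=0) if rule == "sum" else stacked.max(axis=0)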

  6. Human Emotion Recognition From Speech

    Directory of Open Access Journals (Sweden)

    Miss. Aparna P. Wanare

    2014-07-01

    Full Text Available Speech Emotion Recognition is a recent research topic in the Human Computer Interaction (HCI) field. The need has risen for a more natural communication interface between humans and computers, as computers have become an integral part of our lives. A lot of work is currently going on to improve the interaction between humans and computers. To achieve this goal, a computer would have to be able to distinguish its present situation and respond differently depending on that observation. Part of this process involves understanding a user's emotional state. To make human-computer interaction more natural, the objective is that the computer should be able to recognize emotional states in the same way as a human does. The efficiency of an emotion recognition system depends on the type of features extracted and the classifier used for detection of emotions. The proposed system aims at identification of basic emotional states such as anger, joy, neutral and sadness from human speech. While classifying different emotions, features like MFCC (Mel Frequency Cepstral Coefficients) and energy are used. In this paper, a standard emotional database, i.e., an English database, is used, which gives more satisfactory detection of emotions than recorded samples of emotions. This methodology describes and compares the performances of a Learning Vector Quantization Neural Network (LVQ NN), a Multiclass Support Vector Machine (SVM) and their combination for emotion recognition.
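
    MFCC-plus-energy front ends like the one above are commonly reduced to a fixed-length vector of utterance-level statistics before classification. A minimal sketch using the librosa library (a convenience assumption; the paper does not name its toolchain):

        import librosa
        import numpy as np

        def speech_emotion_features(path, n_mfcc=13):
            y, sr = librosa.load(path, sr=None)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
            energy = librosa.feature.rms(y=y)                       # (1, frames)
            feats = np.vstack([mfcc, energy])
            # Mean and standard deviation over frames give one vector per utterance.
            return np.concatenate([feats.mean(axis=1), feats.std(axis=1)])

    The resulting vector can then be fed to an LVQ network or an SVM, as compared in the paper.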

  7. A Survey: Face Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Muhammad Sharif

    2012-12-01

    Full Text Available In this study, existing face recognition techniques are surveyed along with their pros and cons. The most general methods include Eigenface (Eigenfeatures), Hidden Markov Model (HMM), geometric-based and template matching approaches. This survey analyses these approaches with respect to the face representations they construct, which are discussed below. In the second phase of the survey, factors affecting recognition rates and processes are also discussed, along with the solutions provided by different authors.

  8. Face Recognition using Curvelet Transform

    CERN Document Server

    Cohen, Rami

    2011-01-01

    Face recognition has been studied extensively for more than 20 years now. Since the beginning of the 1990s the subject has become a major issue. This technology is used in many important real-world applications, such as video surveillance, smart cards, database security, and internet and intranet access. This report reviews two recent algorithms for face recognition which take advantage of a relatively new multiscale geometric analysis tool, the Curvelet transform, for facial processing and feature extraction. This transform proves to be efficient, especially due to its good ability to detect the curves and lines which characterize the human face. An algorithm based on the two algorithms mentioned above is proposed, and its performance is evaluated on three databases of faces: AT&T (ORL), Essex Grimace and Georgia-Tech. k-nearest neighbour (k-NN) and Support Vector Machine (SVM) classifiers are used, along with Principal Component Analysis (PCA) for dimensionality reduction. This algorithm shows good results, ...
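
    The curvelet transform itself needs a dedicated implementation, so the sketch below covers only the downstream stage described in the report: PCA for dimensionality reduction followed by k-NN or SVM classification, here with scikit-learn (a substitution for illustration; the report does not specify a library):

        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        def build_classifiers(n_components=50):
            # X would hold curvelet coefficient vectors, y the identity labels.
            knn = make_pipeline(PCA(n_components=n_components),
                                KNeighborsClassifier(n_neighbors=1))
            svm = make_pipeline(PCA(n_components=n_components),
                                SVC(kernel="linear"))
            return knn, svm

        # Usage: knn, svm = build_classifiers()
        #        knn.fit(X_train, y_train); print(knn.score(X_test, y_test))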

  9. Face Recognition in Various Illuminations

    Directory of Open Access Journals (Sweden)

    Saurabh D. Parmar,

    2014-05-01

    Full Text Available Face Recognition (FR) under various illuminations is very challenging. A normalization technique is useful for removing dimness and shadow from the facial image, which reduces the effect of illumination variations while retaining the necessary information of the face. A robust local feature extractor, the grey-scale-invariant texture descriptor called Local Binary Pattern (LBP), is helpful for feature extraction. A K-Nearest Neighbour classifier is utilized for classification and to match face images from the database. Experimental results were based on the Yale-B database with three different sub-categories. The proposed method has been tested for robust face recognition in various illumination conditions. Extensive experiments show that the proposed system can achieve very encouraging performance in various illumination environments.
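
    Grey-scale invariance is what makes LBP attractive under changing illumination: each pixel is encoded only by whether its neighbours are brighter or darker than it. A minimal sketch of the histogram feature, using scikit-image (a convenience assumption):

        import numpy as np
        from skimage.feature import local_binary_pattern

        def lbp_histogram(gray, P=8, R=1):
            """Uniform LBP histogram of a grey-scale face image."""
            codes = local_binary_pattern(gray, P, R, method="uniform")
            n_bins = P + 2          # P+1 uniform codes plus one 'non-uniform' bin
            hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
            return hist             # compared across faces, e.g. by a k-NN classifier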

  10. Covert Face Recognition without Prosopagnosia

    Directory of Open Access Journals (Sweden)

    H. D. Ellis

    1993-01-01

    Full Text Available An experiment is reported where subjects were presented with familiar or unfamiliar faces for supraliminal durations or for durations individually assessed as being below the threshold for recognition. Their electrodermal responses to each stimulus were measured and the results showed higher peak amplitude skin conductance responses for familiar than for unfamiliar faces, regardless of whether they had been displayed supraliminally or subliminally. A parallel is drawn between elevated skin conductance responses to subliminal stimuli and findings of covert recognition of familiar faces in prosopagnosic patients, some of whom show increased electrodermal activity (EDA) to previously familiar faces. The supraliminal presentation data also served to replicate similar work by Tranel et al. (1985). The results are considered alongside other data indicating the relation between non-conscious, “automatic” aspects of normal visual information processing and abilities which can be found to be preserved without awareness after brain injury.

  11. Face recognition using Krawtchouk moment

    Indian Academy of Sciences (India)

    J Sheeba Rani; D Devaraj

    2012-08-01

    Feature extraction is one of the important tasks in face recognition. Moments are widely used feature extractors due to their superior discriminatory power and geometrical invariance. Moments generally capture the global features of an image. This paper proposes the Krawtchouk moment for feature extraction in a face recognition system; it has the ability to extract local features from any region of interest. The Krawtchouk moment is used to extract both local and global features of the face. The extracted features are fused using a summed normalized distance strategy. A nearest neighbour classifier is employed to classify the faces. The proposed method is tested using the ORL and Yale databases. Experimental results show that the proposed method is able to recognize images correctly, even if the images are corrupted with noise or exhibit changes in facial expression and tilt.
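
    The summed normalized distance fusion named above is simple to state: compute a distance per feature type, normalize each so neither dominates, and add them. A sketch under the assumption that local and global moment features are already extracted (names are illustrative):

        import numpy as np

        def summed_normalized_distance(probe, gallery):
            """probe / gallery entries: dicts with 'local' and 'global' vectors."""
            def norm_dists(key):
                d = np.array([np.linalg.norm(probe[key] - g[key]) for g in gallery])
                return d / (d.max() + 1e-12)   # normalize each feature type
            total = norm_dists("local") + norm_dists("global")
            return int(np.argmin(total))       # nearest neighbour after fusion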

  12. Face recognition, a landmarks tale

    NARCIS (Netherlands)

    Beumer, Gerrit Maarten

    2009-01-01

    Face recognition is a technology that appeals to the imagination of many people. This is particularly reflected in the popularity of science-fiction films and forensic detective series such as CSI, CSI New York, CSI Miami, Bones and NCIS. Although these series tend to be set in the present, their a

  14. Towards automatic forensic face recognition

    NARCIS (Netherlands)

    Ali, Tauseef; Spreeuwers, Luuk; Veldhuis, Raymond

    2011-01-01

    In this paper we present a methodology and experimental results for evidence evaluation in the context of forensic face recognition. In forensic applications, the matching score (hereafter referred to as similarity score) from a biometric system must be represented as a Likelihood Ratio (LR). In our

  15. Facial emotion recognition in paranoid schizophrenia and autism spectrum disorder.

    Science.gov (United States)

    Sachse, Michael; Schlitt, Sabine; Hainz, Daniela; Ciaramidaro, Angela; Walter, Henrik; Poustka, Fritz; Bölte, Sven; Freitag, Christine M

    2014-11-01

    Schizophrenia (SZ) and autism spectrum disorder (ASD) share deficits in emotion processing. In order to identify convergent and divergent mechanisms, we investigated facial emotion recognition in SZ, high-functioning ASD (HFASD), and typically developed controls (TD). Different degrees of task difficulty and emotion complexity (face, eyes; basic emotions, complex emotions) were used. Two Benton tests were implemented in order to elicit potentially confounding visuo-perceptual functioning and facial processing. Nineteen participants with paranoid SZ, 22 with HFASD and 20 TD were included, aged between 14 and 33 years. Individuals with SZ were comparable to TD in all obtained emotion recognition measures, but showed reduced basic visuo-perceptual abilities. The HFASD group was impaired in the recognition of basic and complex emotions compared to both SZ and TD. When facial identity recognition was adjusted for, group differences remained for the recognition of complex emotions only. Our results suggest that there is an SZ subgroup with predominantly paranoid symptoms that does not show problems in face processing and emotion recognition, but visuo-perceptual impairments. They also confirm the notion of a general facial and emotion recognition deficit in HFASD. No shared emotion recognition deficit was found for paranoid SZ and HFASD, emphasizing the differential cognitive underpinnings of both disorders.

  16. [Face recognition in patients with autism spectrum disorders].

    Science.gov (United States)

    Kita, Yosuke; Inagaki, Masumi

    2012-07-01

    The present study aimed to review previous research conducted on face recognition in patients with autism spectrum disorders (ASD). Face recognition is a key question in the ASD research field because it can provide clues for elucidating the neural substrates responsible for the social impairment of these patients. Historically, behavioral studies have reported low performance and/or unique strategies of face recognition among ASD patients. However, the performance and strategy of ASD patients is comparable to those of the control group, depending on the experimental situation or developmental stage, suggesting that face recognition of ASD patients is not entirely impaired. Recent brain function studies, including event-related potential and functional magnetic resonance imaging studies, have investigated the cognitive process of face recognition in ASD patients, and revealed impaired function in the brain's neural network comprising the fusiform gyrus and amygdala. This impaired function is potentially involved in the diminished preference for faces, and in the atypical development of face recognition, eliciting symptoms of unstable behavioral characteristics in these patients. Additionally, face recognition in ASD patients is examined from a different perspective, namely self-face recognition, and facial emotion recognition. While the former topic is intimately linked to basic social abilities such as self-other discrimination, the latter is closely associated with mentalizing. Further research on face recognition in ASD patients should investigate the connection between behavioral and neurological specifics in these patients, by considering developmental changes and the spectrum clinical condition of ASD.

  18. Embedded Face Detection and Recognition

    Directory of Open Access Journals (Sweden)

    Göksel Günlü

    2012-10-01

    Full Text Available The need to increase security in open or public spaces has in turn given rise to the requirement to monitor these spaces and analyse those images on-site and on-time. At this point, the use of smart cameras – of which the popularity has been increasing – is one step ahead. With sensors and Digital Signal Processors (DSPs), smart cameras generate ad hoc results by analysing the numeric images transmitted from the sensor by means of a variety of image-processing algorithms. Since the images are not transmitted to a distant processing unit but rather are processed inside the camera, it does not necessitate high-bandwidth networks or high processor powered systems; it can instantaneously decide on the required access. Nonetheless, on account of restricted memory, processing power and overall power, image processing algorithms need to be developed and optimized for embedded processors. Among these algorithms, one of the most important is for face detection and recognition. A number of face detection and recognition methods have been proposed recently and many of these methods have been tested on general-purpose processors. In smart cameras – which are real-life applications of such methods – the widest use is on DSPs. In the present study, the Viola-Jones face detection method – which was reported to run faster on PCs – was optimized for DSPs; the face recognition method was combined with the developed sub-region and mask-based DCT (Discrete Cosine Transform). As the employed DSP is a fixed-point processor, the processes were performed with integers insofar as it was possible. To enable face recognition, the image was divided into sub-regions and from each sub-region the robust coefficients against disruptive elements – like face expression, illumination, etc. – were selected as the features. The discrimination of the selected features was enhanced via LDA (Linear Discriminant Analysis) and then employed for recognition. Thanks to its
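
    The detection-plus-sub-region-DCT pipeline is easy to prototype in floating point on a desktop before porting to a fixed-point DSP. A rough sketch with OpenCV and SciPy (the library choice, grid size and coefficient count are assumptions; the stock Haar cascade stands in for the paper's optimized detector):

        import cv2
        import numpy as np
        from scipy.fft import dctn

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def face_dct_features(gray, grid=4, coefs=10):
            """Detect one face, split it into grid x grid sub-regions and keep a
            few low-order DCT coefficients per sub-region (assumes a face is found)."""
            x, y, w, h = cascade.detectMultiScale(gray, 1.1, 5)[0]
            face = cv2.resize(gray[y:y + h, x:x + w], (64, 64)).astype(float)
            step = 64 // grid
            feats = []
            for i in range(grid):
                for j in range(grid):
                    block = face[i * step:(i + 1) * step, j * step:(j + 1) * step]
                    feats.extend(dctn(block, norm="ortho").ravel()[:coefs])
            return np.asarray(feats)   # LDA would follow, as in the paper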

  19. Acoustic modeling for emotion recognition

    CERN Document Server

    Anne, Koteswara Rao; Vankayalapati, Hima Deepthi

    2015-01-01

    This book presents state-of-the-art research in speech emotion recognition. Readers are first presented with basic research and applications – gradually more advanced information is provided, giving readers comprehensive guidance for classifying emotions through speech. Simulated databases are used and results extensively compared, with the features and the algorithms implemented using MATLAB. Various emotion recognition models like Linear Discriminant Analysis (LDA), Regularized Discriminant Analysis (RDA), Support Vector Machines (SVM) and K-Nearest Neighbor (KNN) are explored in detail using prosody and spectral features, and feature fusion techniques.

  20. Emotion recognition during cocaine intoxication.

    Science.gov (United States)

    Kuypers, K P C; Steenbergen, L; Theunissen, E L; Toennes, S W; Ramaekers, J G

    2015-11-01

    Chronic or repeated cocaine use has been linked to impairments in social skills. It is not clear whether cocaine is responsible for this impairment or whether other factors, like polydrug use, distort the observed relation. We aimed to investigate this relation by means of a placebo-controlled experimental study. Additionally, associations between stressor-related activity (cortisol, cardiovascular parameters) induced by the biological stressor cocaine, and potential cocaine effects on emotion recognition were studied. Twenty-four healthy recreational cocaine users participated in this placebo-controlled within-subject study. Participants were tested between 1 and 2 h after treatment with oral cocaine (300 mg) or placebo. Emotion recognition of low and high intensity expressions of basic emotions (fear, anger, disgust, sadness, and happiness) was tested. Findings show that cocaine impaired recognition of negative emotions; this was mediated by the intensity of the presented emotions. When high intensity expressions of Anger and Disgust were shown, performance under influence of cocaine 'normalized' to placebo-like levels while it made identification of Sadness more difficult. The normalization of performance was most notable for participants with the largest cortisol responses in the cocaine condition compared to placebo. It was demonstrated that cocaine impairs recognition of negative emotions, depending on the intensity of emotion expression and cortisol response.

  1. Risk for Bipolar Disorder is Associated with Face-Processing Deficits across Emotions

    Science.gov (United States)

    Brotman, Melissa A.; Skup, Martha; Rich, Brendan A.; Blair, Karina S.; Pine, Daniel S.; Blair, James R.; Leibenluft, Ellen

    2008-01-01

    The relationship between the risks for face-emotion labeling deficits and bipolar disorder (BD) among youths is examined. Findings show that youths at risk for BD did not show specific face-emotion recognition deficits. The need to provide more intense emotional information for face-emotion labeling of patients and at-risk youths is also discussed.

  3. Face Recognition in Uncontrolled Environment

    Directory of Open Access Journals (Sweden)

    Radhey Shyam

    2016-08-01

    Full Text Available This paper presents a novel method of facial image representation for face recognition in an uncontrolled environment. It is named augmented local binary patterns (A-LBP) and works on both uniform and non-uniform patterns. It replaces a central non-uniform pattern with the majority value of the neighbouring uniform patterns, obtained after processing all neighbouring non-uniform patterns. These patterns are finally combined with the neighbouring uniform patterns in order to extract discriminatory information from the local descriptors. The experimental results indicate the vitality of the proposed method on particular face datasets, where the images are prone to extreme variations of illumination.
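
    One plausible reading of the A-LBP rule is sketched below: compute plain LBP codes, then replace every non-uniform code with the majority vote of the uniform codes around it (the grid scan, vote and helper names are all assumptions based on the abstract alone):

        import numpy as np
        from skimage.feature import local_binary_pattern

        def is_uniform(code, P=8):
            # Uniform = at most two 0/1 transitions in the circular bit string.
            bits = [(int(code) >> i) & 1 for i in range(P)]
            return sum(bits[i] != bits[(i + 1) % P] for i in range(P)) <= 2

        def augmented_lbp(gray, P=8, R=1):
            codes = local_binary_pattern(gray, P, R, method="default").astype(int)
            out = codes.copy()
            h, w = codes.shape
            for i in range(1, h - 1):
                for j in range(1, w - 1):
                    if not is_uniform(codes[i, j], P):
                        neigh = codes[i - 1:i + 2, j - 1:j + 2].ravel()
                        uni = [c for c in neigh if is_uniform(c, P)]
                        if uni:                    # majority vote of uniform codes
                            out[i, j] = np.bincount(uni).argmax()
            return out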

  4. Brain Structural Correlates of Emotion Recognition in Psychopaths

    National Research Council Canada - National Science Library

    Pera-Guardiola, Vanessa; Contreras-Rodríguez, Oren; Batalla, Iolanda; Kosson, David; Menchón, José M; Pifarré, Josep; Bosque, Javier; Cardoner, Narcís; Soriano-Mas, Carles

    2016-01-01

    .... However, the nature and extent of these alterations are not fully understood. Furthermore, available data on the functional neural correlates of emotional face recognition deficits in adult psychopaths have provided mixed results...

  5. Amygdala damage impairs emotion recognition from music.

    Science.gov (United States)

    Gosselin, Nathalie; Peretz, Isabelle; Johnsen, Erica; Adolphs, Ralph

    2007-01-28

    The role of the amygdala in recognition of danger is well established for visual stimuli such as faces. A similar role in another class of emotionally potent stimuli -- music -- has been recently suggested by the study of epileptic patients with unilateral resection of the anteromedian part of the temporal lobe [Gosselin, N., Peretz, I., Noulhiane, M., Hasboun, D., Beckett, C., & Baulac, M., et al. (2005). Impaired recognition of scary music following unilateral temporal lobe excision. Brain, 128(Pt 3), 628-640]. The goal of the present study was to assess the specific role of the amygdala in the recognition of fear from music. To this aim, we investigated a rare subject, S.M., who has complete bilateral damage relatively restricted to the amygdala and not encompassing other sectors of the temporal lobe. In Experiment 1, S.M. and four matched controls were asked to rate the intensity of fear, peacefulness, happiness, and sadness from computer-generated instrumental music purposely created to express those emotions. Subjects also rated the arousal and valence of each musical stimulus. An error detection task assessed basic auditory perceptual function. S.M. performed normally in this perceptual task, but was selectively impaired in the recognition of scary and sad music. In contrast, her recognition of happy music was normal. Furthermore, S.M. judged the scary music to be less arousing and the peaceful music less relaxing than did the controls. Overall, the pattern of impairment in S.M. is similar to that previously reported in patients with unilateral anteromedial temporal lobe damage. S.M.'s impaired emotional judgments occur in the face of otherwise intact processing of musical features that are emotionally determinant. The use of tempo and mode cues in distinguishing happy from sad music was also spared in S.M. Thus, the amygdala appears to be necessary for emotional processing of music rather than the perceptual processing itself.

  6. Age Dependent Face Recognition using Eigenface

    OpenAIRE

    Hlaing Htake Khaung Tin

    2013-01-01

    Face recognition is the most successful form of human surveillance. Face recognition technology, used to improve human efficiency when recognizing faces, is one of the fastest growing fields in the biometric industry. In the first stage, age is classified into eleven categories which distinguish a person's oldness in terms of age. The second stage of the process is face recognition based on the predicted age. Age prediction has considerable potential applications in human comp...

  7. Comparison of face Recognition Algorithms on Dummy Faces

    Directory of Open Access Journals (Sweden)

    Aruni Singh

    2012-09-01

    Full Text Available In an age of rising crime, face recognition is enormously important in the contexts of computer vision, psychology, surveillance, fraud detection, pattern recognition, neural networks, content-based video processing, etc. The face is a non-intrusive, strong biometric for identification, and criminals therefore always try to hide their facial features by artificial means such as plastic surgery, disguises and dummies. The availability of a comprehensive face database is crucial to test the performance of face recognition algorithms. However, while existing publicly available face databases contain face images with a wide variety of poses, illumination, gestures and face occlusions, no dummy face database is available in the public domain. The contributions of this research paper are: (i) preparation of a dummy face database of 110 subjects; (ii) comparison of some texture-based, feature-based and holistic face recognition algorithms on that dummy face database; (iii) critical analysis of these types of algorithms on the dummy face database.

  8. Comparison of emotion recognition from facial expression and music.

    Science.gov (United States)

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined as their presentation is usually very short (micro expressions), whereas the recognition itself does not have to be a conscious process. We assumed that recognition from facial expressions is selected over the recognition of emotions communicated through music. In order to compare the success rate in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey which included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face and 8 pieces of classical music works with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, whereas girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized if presented on human faces than in music, possibly because the understanding of facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition was selected for due to the necessity of communicating with newborns during early development. The proficiency in recognizing emotional content of music and mathematical skills probably share some general cognitive skills like attention, memory and motivation. Music pieces were processed differently in the brain than facial expressions and were consequently probably evaluated differently as relevant emotional clues.

  9. Down Syndrome and Automatic Processing of Familiar and Unfamiliar Emotional Faces

    Science.gov (United States)

    Morales, Guadalupe E.; Lopez, Ernesto O.

    2010-01-01

    Participants with Down syndrome (DS) were required to participate in a face recognition experiment to recognize familiar (DS faces) and unfamiliar emotional faces (non DS faces), by using an affective priming paradigm. Pairs of emotional facial stimuli were presented (one face after another) with a short Stimulus Onset Asynchrony of 300…

  10. Emotions affect the recognition of hand postures

    Directory of Open Access Journals (Sweden)

    Carmelo Mario Vicario

    2013-12-01

    Full Text Available The body is closely tied to the processing of social and emotional information. The purpose of this study was to determine whether a relationship exists between emotions and social attitudes conveyed through gestures. Thus we tested the effect of pro-social (i.e., happy face) and anti-social (i.e., angry face) emotional primes on the ability to detect socially relevant hand postures (i.e., pictures depicting an open/closed hand). In particular, participants were required to establish, as quickly as possible, whether the test stimulus (i.e., a hand posture) was the same as or different from the reference stimulus (i.e., a hand posture previously displayed on the computer screen). Results show that facial primes, displayed between the reference and the test stimuli, influence the recognition of hand postures according to the social attitude implicitly related to the stimulus. We found that perception of pro-social (i.e., happy face) primes resulted in slower RTs in detecting the open hand posture as compared to the closed hand posture. Vice versa, perception of the anti-social (i.e., angry face) prime resulted in slower RTs in detecting the closed hand posture compared to the open hand posture. These results suggest that the social attitude implicitly suggested by the displayed stimuli might represent the conceptual link between emotions and gestures.

  11. Familiarity is not notoriety: Phenomenological accounts of face recognition

    Directory of Open Access Journals (Sweden)

    Davide eLiccione

    2014-09-01

    Full Text Available From a phenomenological perspective, faces are perceived differently from objects as their perception always involves the possibility of a relational engagement (Bredlau, 2011). This is especially true for familiar faces, i.e., faces of people with a history of real relational engagements. Similarly, the valence of emotional expressions assumes a key role, as it defines the sense and direction of this engagement. Following these premises, the aim of the present study is to demonstrate that face recognition is facilitated by at least two variables, familiarity and emotional expression, and that perception of familiar faces is not influenced by orientation. In order to verify this hypothesis, we implemented a 3x3x2 factorial design, showing seventeen healthy subjects three types of faces (unfamiliar, personally familiar, famous) characterized by three different emotional expressions (happy, angry/sad, neutral) and in two different orientations (upright vs. inverted). We showed every subject a total of 180 faces with the instruction to give a familiarity judgment. Reaction times were recorded and we found that the recognition of a face is facilitated by personal familiarity and emotional expression, and that this process is otherwise independent of a cognitive elaboration of stimuli and remains stable despite orientation. These results highlight the need to make a distinction between famous and personally familiar faces when studying face perception, and to consider its historical aspects from a phenomenological point of view.

  12. Textual emotion recognition for enhancing enterprise computing

    Science.gov (United States)

    Quan, Changqin; Ren, Fuji

    2016-05-01

    The growing interest in affective computing (AC) brings a lot of valuable research topics that can meet different application demands in enterprise systems. The present study explores a sub-area of AC techniques: textual emotion recognition for enhancing enterprise computing. Multi-label emotion recognition in text is able to provide a more comprehensive understanding of emotions than single-label emotion recognition. A representation of the 'emotion state in text' is proposed to encompass the multidimensional emotions in text. It ensures a formal description of the configurations of basic emotions as well as of the relations between them. Our method allows recognition of emotions for words bearing indirect emotions, emotion ambiguity and multiple emotions. We further investigate the effect of word order on emotional expression by comparing the performance of a bag-of-words model and a sequence model for multi-label sentence emotion recognition. The experiments show that the classification results under the sequence model are better than under the bag-of-words model, and a homogeneous Markov model showed promising results for multi-label sentence emotion recognition. This emotion recognition system is able to provide a convenient way to acquire valuable emotion information and to improve enterprise competitive ability in many aspects.
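
    As a point of reference for the bag-of-words side of that comparison, a multi-label text classifier can be assembled in a few lines with scikit-learn (a substitution for illustration; the texts and labels below are invented toy data, not the paper's corpus):

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import MultiLabelBinarizer

        texts = ["I can't believe he said that!",
                 "what a lovely surprise",
                 "this is so unfair and disappointing"]
        labels = [{"anger", "surprise"}, {"joy", "surprise"}, {"anger", "sadness"}]

        y = MultiLabelBinarizer().fit_transform(labels)   # one column per emotion
        clf = make_pipeline(CountVectorizer(),
                            OneVsRestClassifier(LogisticRegression(max_iter=1000)))
        clf.fit(texts, y)       # one binary classifier per emotion label

    A sequence model such as the paper's homogeneous Markov model additionally conditions each word's contribution on its predecessors, which is what captures the word-order effects a bag-of-words baseline ignores.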

  13. Face Recognition Based on Facial Features

    Directory of Open Access Journals (Sweden)

    Muhammad Sharif

    2012-08-01

    Full Text Available Over the last decade several different methods have been planned and developed for face recognition, one of the most stimulating areas of image processing. Face recognition processes have various applications in security systems and crime investigation systems. The study is basically comprised of three phases, i.e., face detection, facial feature extraction and face recognition. The first phase is the face detection process, where the region of interest, i.e., the features region, is extracted. The second phase is feature extraction: here face features, i.e., eyes, nose and lips, are extracted from the detected face area. The last module is the face recognition phase, which makes use of the extracted left eye for the recognition purpose by combining Eigenfeatures and Fisherfeatures.

  14. The improved relative entropy for face recognition

    Directory of Open Access Journals (Sweden)

    Zhang Qi Rong

    2016-01-01

    Full Text Available The relative entropy is least sensitive to noise. In this paper, we propose the improved relative entropy (IRE) for face recognition. Experimental results on the CMU PIE and YALE B face databases show that the recognition rate of the IRE method is far higher than that of the LDA and LPP methods.
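
    The abstract does not spell out the improvement, so the sketch below shows only the baseline: matching faces by the relative entropy (Kullback-Leibler divergence) between normalized grey-level histograms (the histogram features and names are assumptions):

        import numpy as np

        def relative_entropy(p, q, eps=1e-12):
            """KL divergence D(p || q) between two normalized histograms."""
            p = np.asarray(p, float) + eps
            q = np.asarray(q, float) + eps
            p, q = p / p.sum(), q / q.sum()
            return float(np.sum(p * np.log(p / q)))

        def match(probe_hist, gallery_hists):
            # Smallest divergence = best match.
            d = [relative_entropy(probe_hist, g) for g in gallery_hists]
            return int(np.argmin(d))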

  15. Facial Emotion Recognition in Child Psychiatry: A Systematic Review

    Science.gov (United States)

    Collin, Lisa; Bindra, Jasmeet; Raju, Monika; Gillberg, Christopher; Minnis, Helen

    2013-01-01

    This review focuses on facial affect (emotion) recognition in children and adolescents with psychiatric disorders other than autism. A systematic search, using PRISMA guidelines, was conducted to identify original articles published prior to October 2011 pertaining to face recognition tasks in case-control studies. Used in the qualitative…

  17. Aging and emotion recognition: not just a losing matter.

    Science.gov (United States)

    Sze, Jocelyn A; Goodkind, Madeleine S; Gyurak, Anett; Levenson, Robert W

    2012-12-01

    Past studies on emotion recognition and aging have found evidence of age-related decline when emotion recognition was assessed by having participants detect single emotions depicted in static images of full or partial (e.g., eye region) faces. These tests afford good experimental control but do not capture the dynamic nature of real-world emotion recognition, which is often characterized by continuous emotional judgments and dynamic multimodal stimuli. Research suggests that older adults often perform better under conditions that better mimic real-world social contexts. We assessed emotion recognition in young, middle-aged, and older adults using two traditional methods (single emotion judgments of static images of faces and eyes) and an additional method in which participants made continuous emotion judgments of dynamic, multimodal stimuli (videotaped interactions between young, middle-aged, and older couples). Results revealed an Age × Test interaction. Largely consistent with prior research, we found some evidence that older adults performed worse than young adults when judging single emotions from images of faces (for sad and disgust faces only) and eyes (for older eyes only), with middle-aged adults falling in between. In contrast, older adults did better than young adults on the test involving continuous emotion judgments of dyadic interactions, with middle-aged adults falling in between. In tests in which target stimuli differed in age, emotion recognition was not facilitated by an age match between participant and target. These findings are discussed in terms of theoretical and methodological implications for the study of aging and emotional processing.

  18. Emotional signals from faces, bodies and scenes influence observers' face expressions, fixations and pupil-size

    NARCIS (Netherlands)

    Kret, M.E.; Roelofs, K.; Stekelenburg, J.J.; de Gelder, B.

    2013-01-01

    We receive emotional signals from different sources, including the face, the whole body, and the natural scene. Previous research has shown the importance of context provided by the whole body and the scene on the recognition of facial expressions. This study measured physiological responses to face

  19. Bayesian Face Recognition and Perceptual Narrowing in Face-Space

    Science.gov (United States)

    Balas, Benjamin

    2012-01-01

    During the first year of life, infants' face recognition abilities are subject to "perceptual narrowing", the end result of which is that observers lose the ability to distinguish previously discriminable faces (e.g. other-race faces) from one another. Perceptual narrowing has been reported for faces of different species and different races, in…

  20. Face Recognition in Real-world Images

    OpenAIRE

    Fontaine, Xavier; Achanta, Radhakrishna; Süsstrunk, Sabine

    2017-01-01

    Face recognition systems are designed to handle well-aligned images captured under controlled situations. However real-world images present varying orientations, expressions, and illumination conditions. Traditional face recognition algorithms perform poorly on such images. In this paper we present a method for face recognition adapted to real-world conditions that can be trained using very few training examples and is computationally efficient. Our method consists of performing a novel align...

  1. Efficient Facial Expression and Face Recognition using Ranking Method

    Directory of Open Access Journals (Sweden)

    Murali Krishna kanala

    2015-06-01

    Full Text Available Expression detection is useful as a non-invasive method of lie detection and behaviour prediction. However, these facial expressions may be difficult to detect by the untrained eye. In this paper we implement facial expression recognition techniques using a ranking method. The human face plays an important role in our social interaction, conveying people's identity. Using the human face as a key to security, biometric face recognition technology has received significant attention in the past several years. Experiments are performed using a standard database; the universally accepted three principal emotions to be recognized are surprise, sadness and happiness, along with neutral.

  2. Recognition of facial and musical emotions in Parkinson's disease.

    Science.gov (United States)

    Saenz, A; Doé de Maindreville, A; Henry, A; de Labbey, S; Bakchine, S; Ehrlé, N

    2013-03-01

    Patients with amygdala lesions were found to be impaired in recognizing the fear emotion both from face and from music. In patients with Parkinson's disease (PD), impairment in recognition of emotions from facial expressions was reported for disgust, fear, sadness and anger, but no studies had yet investigated this population for the recognition of emotions from both face and music. The ability to recognize basic universal emotions (fear, happiness and sadness) from both face and music was investigated in 24 medicated patients with PD and 24 healthy controls. The patient group was tested for language (verbal fluency tasks), memory (digit and spatial span), executive functions (Similarities and Picture Completion subtests of the WAIS III, Brixton and Stroop tests), visual attention (Bells test), and completed self-assessment tests for anxiety and depression. Results showed that the PD group was significantly impaired for recognition of both fear and sadness emotions from facial expressions, whereas their performance in recognition of emotions from musical excerpts was not different from that of the control group. The scores of fear and sadness recognition from faces were neither correlated to scores in tests for executive and cognitive functions, nor to scores in self-assessment scales. We attributed the observed dissociation to the modality (visual vs. auditory) of presentation and to the ecological value of the musical stimuli that we used. We discuss the relevance of our findings for the care of patients with PD. © 2012 The Author(s) European Journal of Neurology © 2012 EFNS.

  3. Traditional facial tattoos disrupt face recognition processes.

    Science.gov (United States)

    Buttle, Heather; East, Julie

    2010-01-01

    Factors that are important to successful face recognition, such as features, configuration, and pigmentation/reflectance, are all subject to change when a face has been engraved with ink markings. Here we show that the application of facial tattoos, in the form of spiral patterns (typically associated with the Maori tradition of a Moko), disrupts face recognition to a similar extent as face inversion, with recognition accuracy little better than chance performance (2AFC). These results indicate that facial tattoos can severely disrupt our ability to recognise a face that previously did not have the pattern.

  4. A new look at emotion perception: Concepts speed and shape facial emotion recognition.

    Science.gov (United States)

    Nook, Erik C; Lindquist, Kristen A; Zaki, Jamil

    2015-10-01

    Decades ago, the "New Look" movement challenged how scientists thought about vision by suggesting that conceptual processes shape visual perceptions. Currently, affective scientists are likewise debating the role of concepts in emotion perception. Here, we utilized a repetition-priming paradigm in conjunction with signal detection and individual difference analyses to examine how providing emotion labels-which correspond to discrete emotion concepts-affects emotion recognition. In Study 1, pairing emotional faces with emotion labels (e.g., "sad") increased individuals' speed and sensitivity in recognizing emotions. Additionally, individuals with alexithymia-who have difficulty labeling their own emotions-struggled to recognize emotions based on visual cues alone, but not when emotion labels were provided. Study 2 replicated these findings and further demonstrated that emotion concepts can shape perceptions of facial expressions. Together, these results suggest that emotion perception involves conceptual processing. We discuss the implications of these findings for affective, social, and clinical psychology.

  5. Emotional context influences micro-expression recognition.

    Directory of Open Access Journals (Sweden)

    Ming Zhang

    Micro-expressions are often embedded in a flow of expressions including both neutral and other facial expressions. However, it remains unclear whether the types of facial expressions appearing before and after the micro-expression, i.e., the emotional context, influence micro-expression recognition. To address this question, the present study used a modified METT (Micro-Expression Training Tool) paradigm that required participants to recognize target micro-expressions presented briefly between two identical emotional faces. The results of Experiments 1 and 2 showed that negative context impaired the recognition of micro-expressions regardless of the duration of the target micro-expression. Experiment 3 controlled for stimulus differences between the context and the target micro-expression; its results showed that the context effect on micro-expression recognition persists even when stimulus similarity is controlled. Therefore, our results not only provide evidence for a context effect on micro-expression recognition but also suggest that this effect may result from both stimulus and valence differences.

  6. Voice Recognition in Face-Blind Patients.

    Science.gov (United States)

    Liu, Ran R; Pancaroglu, Raika; Hills, Charlotte S; Duchaine, Brad; Barton, Jason J S

    2016-04-01

    Right or bilateral anterior temporal damage can impair face recognition, but whether this is an associative variant of prosopagnosia or part of a multimodal disorder of person recognition is an unsettled question, with implications for cognitive and neuroanatomic models of person recognition. We assessed voice perception and short-term recognition of recently heard voices in 10 subjects with impaired face recognition acquired after cerebral lesions. All 4 subjects with apperceptive prosopagnosia due to lesions limited to fusiform cortex had intact voice discrimination and recognition. One subject with bilateral fusiform and anterior temporal lesions had combined apperceptive prosopagnosia and apperceptive phonagnosia, the first such case described. Deficits indicating a multimodal syndrome of person recognition were found only in 2 subjects with bilateral anterior temporal lesions. All 3 subjects with right anterior temporal lesions had normal voice perception and recognition, 2 of whom performed normally on perceptual discrimination of faces. This confirms that such lesions can cause a modality-specific associative prosopagnosia.

  7. Face affect recognition in schizophrenia [Rozpoznawanie emocjonalnej ekspresji mimicznej przez osoby chore na schizofrenię]

    Directory of Open Access Journals (Sweden)

    Prochwicz, Katarzyna

    2012-12-01

    Clinical observations and the results of many experimental studies indicate that individuals suffering from schizophrenia have difficulty recognizing the emotional states experienced by other people; however, the causes and the extent of these problems have not been clearly described. Although early research suggested that difficulties in emotion recognition are related only to negative emotions, the results of studies conducted over the last 30 years indicate that emotion recognition problems are a manifestation of a general cognitive deficit and do not concern specific emotions. The article reviews the research on face affect recognition in schizophrenia. It discusses the causes of these difficulties, differences in the accuracy of recognizing specific emotions, the relationship between the symptoms of schizophrenia and the severity of problems with face perception, and the types of cognitive processes that influence disturbances in face affect recognition. Particular attention is paid to the methodology of research on face affect recognition, including the methods used in control tasks based on the identification of neutral faces, which are designed to assess the extent of the deficit underlying face affect recognition problems. Analysis of the methods used in particular studies revealed some weaknesses. The article also addresses the possibility of improving emotion recognition and briefly discusses the efficiency of emotion recognition training programs designed for patients suffering from schizophrenia.

  8. Categorical Perception of emotional faces is not affected by aging

    Directory of Open Access Journals (Sweden)

    Mandy Rossignol

    2009-11-01

    The effects of normal aging on categorical perception (CP) of facial emotional expressions were investigated. One hundred healthy participants (20 to 70 years old; five age groups) had to identify morphed expressions ranging from neutrality to happiness, sadness and fear. We analysed percentages and latencies of correct recognition for non-morphed emotional expressions, percentages and latencies of emotional recognition for morphed faces, the locus of the boundaries along the different continua, and the number of intrusions. The results showed that unmorphed happy and fearful faces were processed better than unmorphed sad and neutral faces. For morphed faces, CP was confirmed, as latencies increased as a function of the distance between the displayed morph and the original unmorphed photograph. The locus of the categorical boundaries was not affected by age. Aging did not alter the accuracy of recognition for the original pictures, nor did it alter the emotional recognition of morphed faces or the rate of intrusions. However, response latencies increased with age for both unmorphed and morphed pictures. In conclusion, CP of facial expressions appears to be spared in aging.

  9. Facial, vocal and musical emotion recognition is altered in paranoid schizophrenic patients.

    Science.gov (United States)

    Weisgerber, Anne; Vermeulen, Nicolas; Peretz, Isabelle; Samson, Séverine; Philippot, Pierre; Maurage, Pierre; De Graeuwe D'Aoust, Catherine; De Jaegere, Aline; Delatte, Benoît; Gillain, Benoît; De Longueville, Xavier; Constant, Eric

    2015-09-30

    Disturbed processing of emotional faces and voices is typically observed in schizophrenia. This deficit leads to impaired social cognition and interactions. In this study, we investigated whether impaired processing of emotions also affects musical stimuli, which are widely present in daily life and known for their emotional impact. Thirty schizophrenic patients and 30 matched healthy controls evaluated the emotional content of musical, vocal and facial stimuli. Schizophrenic patients were less accurate than healthy controls in recognizing emotion in music, voices and faces. Our results confirm impaired recognition of emotion in voice and face stimuli in schizophrenic patients and extend this observation to the recognition of emotion in musical stimuli.

  10. A Multi-View Face Recognition System

    Institute of Scientific and Technical Information of China (English)

    张永越; 彭振云; et al.

    1997-01-01

    In many automatic face recognition systems, posture constraints are a key factor preventing practical application. In this paper, a series of strategies is described to achieve a system that enables face recognition under varying pose. These approaches include multi-view face modeling, threshold-image-based face feature detection, affine-transformation-based face posture normalization, and template-matching-based face identification. Combining all of these strategies, a pose-invariant face recognition system was designed successfully. Using a 75 MHz Pentium PC with a database of 75 individuals, 15 images per person, and 225 test images with various postures, a very good recognition rate of 96.89% is obtained.
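
    The abstract names an affine-transformation-based posture normalization step without giving details. As a minimal sketch of that idea (not the paper's implementation): estimate a least-squares affine map from detected landmarks to canonical template positions, then warp the image before template matching. The three landmarks (two eyes, nose tip) and all coordinates below are hypothetical.

        import numpy as np

        def affine_from_landmarks(src, dst):
            # Least-squares fit of x' = A @ x + t over landmark pairs:
            # each pair contributes two rows to the linear system M p = b.
            n = src.shape[0]
            M = np.zeros((2 * n, 6))
            b = dst.reshape(-1)
            for i, (x, y) in enumerate(src):
                M[2 * i] = [x, y, 0, 0, 1, 0]
                M[2 * i + 1] = [0, 0, x, y, 0, 1]
            p, *_ = np.linalg.lstsq(M, b, rcond=None)
            A = np.array([[p[0], p[1]], [p[2], p[3]]])
            t = p[4:6]
            return A, t

        # Hypothetical landmarks detected in a rotated face image ...
        detected = np.array([[52.0, 68.0], [98.0, 60.0], [78.0, 95.0]])
        # ... and their canonical positions in the frontal template.
        canonical = np.array([[40.0, 60.0], [88.0, 60.0], [64.0, 92.0]])

        A, t = affine_from_landmarks(detected, canonical)
        print("linear part:\n", A, "\ntranslation:", t)
        # A pixel at position x in the input maps to A @ x + t in the
        # normalized face, which is then compared against stored frontal
        # templates (e.g., by normalized cross-correlation).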

  11. Face aftereffects predict individual differences in face recognition ability.

    Science.gov (United States)

    Dennett, Hugh W; McKone, Elinor; Edwards, Mark; Susilo, Tirta

    2012-01-01

    Face aftereffects are widely studied on the assumption that they provide a useful tool for investigating face-space coding of identity. However, a long-standing issue concerns the extent to which face aftereffects originate in face-level processes as opposed to earlier stages of visual processing. For example, some recent studies failed to find atypical face aftereffects in individuals with clinically poor face recognition. We show that in individuals within the normal range of face recognition abilities, there is an association between face memory ability and a figural face aftereffect that is argued to reflect the steepness of broadband-opponent neural response functions in underlying face-space. We further show that this correlation arises from face-level processing, by reporting results of tests of nonface memory and nonface aftereffects. We conclude that face aftereffects can tap high-level face-space, and that face-space coding differs in quality between individuals and contributes to face recognition ability.

  12. Fusing Facial Features for Face Recognition

    Directory of Open Access Journals (Sweden)

    Jamal Ahmad Dargham

    2012-06-01

    Face recognition is an important biometric method because of its potential applications in many fields, such as access control, surveillance, and human-computer interaction. In this paper, a face recognition system that fuses the outputs of three face recognition systems based on Gabor jets is presented. The first system uses the magnitude, the second uses the phase, and the third uses the phase-weighted magnitude of the jets. The jets are generated from facial landmarks selected using three selection methods. It was found that fusing the facial features gives a better recognition rate than any single feature used individually, regardless of the landmark selection method.
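
    The abstract names the three jet-based similarity measures without formulas; the sketch below shows one common formulation of each (normalized magnitude correlation, mean phase-difference cosine, and phase-weighted magnitude), fused here by simple score averaging. The exact fusion rule, jet parameters and the synthetic toy jets are all assumptions.

        import numpy as np

        def magnitude_sim(j1, j2):
            # Normalized dot product of jet magnitudes.
            a1, a2 = np.abs(j1), np.abs(j2)
            return a1 @ a2 / (np.linalg.norm(a1) * np.linalg.norm(a2))

        def phase_sim(j1, j2):
            # Mean cosine of phase differences, mapped to [0, 1].
            d = np.angle(j1) - np.angle(j2)
            return (np.cos(d).mean() + 1.0) / 2.0

        def phase_weighted_magnitude_sim(j1, j2):
            # Magnitudes weighted by the cosine of the phase difference.
            a1, a2 = np.abs(j1), np.abs(j2)
            d = np.angle(j1) - np.angle(j2)
            return (a1 * a2 * np.cos(d)).sum() / (np.linalg.norm(a1) * np.linalg.norm(a2))

        def fused_score(jets_probe, jets_gallery):
            # Average each similarity over all landmark jets, then average
            # the three system scores (score-level fusion).
            scores = [np.mean([f(p, g) for p, g in zip(jets_probe, jets_gallery)])
                      for f in (magnitude_sim, phase_sim, phase_weighted_magnitude_sim)]
            return np.mean(scores)

        # Toy complex-valued jets: 40 coefficients per landmark, 3 landmarks.
        rng = np.random.default_rng(0)
        probe = [rng.normal(size=40) + 1j * rng.normal(size=40) for _ in range(3)]
        gallery = [j + 0.1 * (rng.normal(size=40) + 1j * rng.normal(size=40)) for j in probe]
        print("fused similarity:", fused_score(probe, gallery))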

  13. Older adults' recognition of bodily and auditory expressions of emotion.

    Science.gov (United States)

    Ruffman, Ted; Sullivan, Susan; Dittrich, Winand

    2009-09-01

    This study compared young and older adults' ability to recognize bodily and auditory expressions of emotion and to match bodily and facial expressions to vocal expressions. Using emotion discrimination and matching techniques, participants assessed emotion in voices (Experiment 1), point-light displays (Experiment 2), and still photos of bodies with faces digitally erased (Experiment 3). Older adults were worse, at least some of the time, at recognizing anger, sadness, fear, and happiness in bodily expressions and anger in vocal expressions. Compared with young adults, older adults also found it more difficult to match auditory expressions to facial expressions (5 of 6 emotions) and bodily expressions (3 of 6 emotions).

  14. Discriminant Phase Component for Face Recognition

    Directory of Open Access Journals (Sweden)

    Naser Zaeri

    2012-01-01

    Numerous face recognition techniques have been developed owing to the growing number of real-world applications. Most current algorithms for face recognition involve a considerable amount of computation and hence cannot be used on devices with limited speed and memory. In this paper, we propose a novel solution for efficient face recognition on systems that have small memory capacities and demand fast performance. The new technique divides the face images into components and finds the discriminant phases of the Fourier transform of these components automatically using the sequential floating forward search method. A thorough study and comprehensive experiments relating time consumption to system performance are carried out on benchmark face image databases. Finally, the proposed technique is compared with other known methods and evaluated through the recognition rate and the computational time, where we achieve a recognition rate of 98.5% with a computational time of 6.4 minutes for a database consisting of 2360 images.
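
    As a rough illustration of the selection idea, the sketch below ranks Fourier-phase components of image blocks by a per-feature Fisher score and keeps the top k, i.e., a plain greedy forward step; the paper's sequential floating forward search additionally allows conditional backward steps, which are omitted here. The data are synthetic and all sizes are assumptions.

        import numpy as np

        def phase_features(img):
            # Phase of the 2-D Fourier transform of a face component, flattened.
            return np.angle(np.fft.fft2(img)).ravel()

        def fisher_score(X, y):
            # Per-feature ratio of between-class to within-class variance.
            overall = X.mean(axis=0)
            between = np.zeros(X.shape[1])
            within = np.zeros(X.shape[1])
            for c in np.unique(y):
                Xc = X[y == c]
                between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
                within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
            return between / (within + 1e-12)

        # Toy data: 8x8 "face components" for two subjects, 5 samples each.
        rng = np.random.default_rng(1)
        base = [rng.normal(size=(8, 8)) for _ in range(2)]
        X = np.array([phase_features(base[c] + 0.2 * rng.normal(size=(8, 8)))
                      for c in (0, 1) for _ in range(5)])
        y = np.repeat([0, 1], 5)

        # Greedy forward step: keep the k most discriminant phase components
        # (the paper's SFFS adds conditional backward steps on top of this).
        k = 10
        selected = np.argsort(fisher_score(X, y))[::-1][:k]
        print("most discriminant phase component indices:", selected)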

  15. Face recognition increases during saccade preparation.

    Science.gov (United States)

    Lin, Hai; Rizak, Joshua D; Ma, Yuan-ye; Yang, Shang-chuan; Chen, Lin; Hu, Xin-tian

    2014-01-01

    Face perception is integral to the human perception system, as it underlies social interactions. Saccadic eye movements are frequently made to bring interesting visual information, such as faces, onto the fovea for detailed processing. Just before eye movement onset, the processing of some basic features of an object, such as its orientation, improves at the saccade landing point. Interestingly, there is also evidence that faces are processed in early visual processing stages in a manner similar to basic features. However, it is not known whether this early enhancement of processing includes face recognition. In this study, three experiments were performed that mapped the timing of face presentation to the beginning of the eye movement in order to evaluate pre-saccadic face recognition. Faces were found to be processed like simple objects immediately prior to saccadic movements. Starting ∼120 ms before a saccade to a target face, independent of whether or not the face was surrounded by other faces, face recognition gradually improved and the critical spacing of crowding decreased as saccade onset approached. These results suggest that an upcoming saccade prepares the visual system for new information about faces at the saccade landing site and may suppress background faces in a crowd to target the intended face. This indicates an important role of pre-saccadic eye movement signals in human face recognition.

  16. Real Time Implementation Of Face Recognition System

    Directory of Open Access Journals (Sweden)

    Megha Manchanda

    2014-10-01

    This paper proposes a face recognition method using PCA for real-time implementation. Security is gaining importance nowadays, as people must memorize passwords and carry cards; such schemes, however, are becoming less secure and practical, leading to increasing interest in biometric techniques. Face recognition is among the most important subjects in biometric systems. It is particularly useful for security and has been widely used and developed in many countries. This study aims to achieve face recognition by detecting the human face in real time, based on the Principal Component Analysis (PCA) algorithm.
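
    A minimal eigenface pipeline in the spirit of this description (not the paper's code) might look as follows; the component count, image size and nearest-neighbour matching rule are illustrative assumptions, and the small Gram-matrix ("snapshot") trick keeps the eigendecomposition cheap.

        import numpy as np

        def train_eigenfaces(X, n_components):
            # X: rows are flattened face images. Eigendecompose the small
            # N x N Gram matrix instead of the huge pixel covariance.
            mean = X.mean(axis=0)
            A = X - mean
            evals, evecs = np.linalg.eigh(A @ A.T)
            order = np.argsort(evals)[::-1][:n_components]
            eigenfaces = A.T @ evecs[:, order]
            eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
            return mean, eigenfaces

        def project(x, mean, eigenfaces):
            # Coordinates of a face in the eigenface subspace.
            return (x - mean) @ eigenfaces

        # Toy gallery: 10 flattened 32x32 "faces", one subject each.
        rng = np.random.default_rng(2)
        gallery = rng.normal(size=(10, 32 * 32))
        labels = np.arange(10)

        mean, eigenfaces = train_eigenfaces(gallery, n_components=8)
        coeffs = project(gallery, mean, eigenfaces)

        probe = gallery[3] + 0.1 * rng.normal(size=32 * 32)  # noisy view of subject 3
        w = project(probe, mean, eigenfaces)
        match = labels[np.argmin(np.linalg.norm(coeffs - w, axis=1))]
        print("identified subject:", match)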

  17. Face Recognition Using Kernel Discriminant Analysis

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Linear Discriminant Analysis (LDA) has demonstrated its success in face recognition. However, LDA has difficulty handling highly nonlinear problems, such as large changes in viewpoint and illumination in face recognition. To overcome these problems, we investigate Kernel Discriminant Analysis (KDA) for face recognition. This approach adopts kernel functions to replace the dot products of the nonlinear mapping in the high-dimensional feature space, so that the nonlinear problem can be solved conveniently in the input space without explicit mapping. Two face databases are used to test the KDA approach. The results show that our approach outperforms the conventional PCA (Eigenface) and LDA (Fisherface) approaches.
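
    In the two-class case, the kernel substitution described here reduces to the kernel Fisher discriminant in Mika et al.'s formulation: solve (N + reg*I) alpha = M1 - M0 over kernel columns instead of computing the nonlinear mapping explicitly. A compact sketch on synthetic XOR-like data (which plain LDA cannot separate) follows; the RBF kernel, its width and the regularizer are assumptions.

        import numpy as np

        def rbf_kernel(X1, X2, gamma=0.5):
            d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def kfd_train(X, y, reg=1e-3):
            # Two-class kernel Fisher discriminant: alpha solves
            # (N + reg*I) alpha = M1 - M0, where M_c is the mean kernel
            # column over class c and N the within-class kernel scatter.
            K = rbf_kernel(X, X)
            m = [K[:, y == c].mean(axis=1) for c in (0, 1)]
            N = np.zeros_like(K)
            for c in (0, 1):
                Kc = K[:, y == c]
                n_c = Kc.shape[1]
                N += Kc @ (np.eye(n_c) - np.full((n_c, n_c), 1.0 / n_c)) @ Kc.T
            return np.linalg.solve(N + reg * np.eye(len(K)), m[1] - m[0])

        def kfd_project(alpha, X_train, X_new):
            return rbf_kernel(X_new, X_train) @ alpha

        # XOR-like two-class problem: class 0 at (0,0),(1,1), class 1 at (0,1),(1,0).
        rng = np.random.default_rng(3)
        X = rng.normal(scale=0.2, size=(40, 2)) + np.repeat(
            [[0, 0], [1, 1], [0, 1], [1, 0]], 10, axis=0)
        y = np.repeat([0, 0, 1, 1], 10)

        alpha = kfd_train(X, y)
        z = kfd_project(alpha, X, X)
        thr = (z[y == 0].mean() + z[y == 1].mean()) / 2  # midpoint criterion
        print("training accuracy:", ((z > thr).astype(int) == y).mean())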

  18. Violent video game play impacts facial emotion recognition.

    Science.gov (United States)

    Kirsh, Steven J; Mounts, Jeffrey R W

    2007-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent video game play. Color photos of calm facial expressions were morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph. Typically, happy faces are identified faster than angry faces (the happy-face advantage). Results indicated that playing a violent video game led to a reduction in the happy-face advantage. Implications of these findings are discussed with respect to current models of aggressive behavior.

  19. Robust video foreground segmentation and face recognition

    Institute of Scientific and Technical Information of China (English)

    GUAN Ye-peng

    2009-01-01

    Face recognition provides a natural visual interface for human-computer interaction (HCI) applications. The process of face recognition, however, is inhibited by variations in the appearance of face images caused by changes in lighting, expression, viewpoint, aging and the introduction of occlusion. Although various algorithms have been presented, face recognition is still a very challenging topic. A novel approach to real-time face recognition for HCI is proposed in this paper. In view of the limits of popular approaches to foreground segmentation, a wavelet multi-scale transform based background subtraction is developed to extract foreground objects. The optimal selection of the threshold is determined automatically and requires no complex supervised training or manual experimental calibration. A robust real-time face recognition algorithm is presented, which combines projection matrices without iteration and kernel Fisher discriminant analysis (KFDA) to overcome some difficulties in real-world face recognition. Superior performance of the proposed algorithm is demonstrated by comparison with other algorithms in experiments. The proposed algorithm can also be applied to video image sequences in natural HCI.

  1. DWT BASED HMM FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A novel Discrete Wavelet Transform (DWT) based Hidden Markov Model (HMM) for face recognition is presented in this letter. To improve the accuracy of the HMM-based face recognition algorithm, DWT is used to replace the Discrete Cosine Transform (DCT) for observation sequence extraction. Extensive experiments were conducted on two public databases, and the results show that the proposed method can improve accuracy significantly, especially when the face database is large and only a few training images are available.
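
    The letter names the substitution (DWT for DCT in the observation-extraction stage) without details. A common arrangement scans the face top to bottom in overlapping horizontal strips and emits one wavelet-subband vector per strip; the sketch below uses a hand-rolled one-level Haar DWT, and the strip sizes are made up. Per-subject Gaussian HMMs (e.g., hmmlearn's GaussianHMM) would then be trained on such sequences.

        import numpy as np

        def haar_dwt2(block):
            # One-level 2-D Haar DWT; returns the approximation subband,
            # a compact observation vector for the HMM.
            h = (block[0::2, :] + block[1::2, :]) / 2.0
            return (h[:, 0::2] + h[:, 1::2]) / 2.0

        def observation_sequence(face, win=8, hop=4):
            # Slide a horizontal strip down the face (the usual top-to-bottom
            # HMM scan over hair, forehead, eyes, nose, mouth, chin) and emit
            # one DWT-based observation vector per strip.
            obs = []
            for top in range(0, face.shape[0] - win + 1, hop):
                strip = face[top:top + win, :]
                obs.append(haar_dwt2(strip).ravel())
            return np.array(obs)

        # Toy 64x64 face image.
        rng = np.random.default_rng(4)
        face = rng.normal(size=(64, 64))
        O = observation_sequence(face)
        print("observation sequence shape (T x d):", O.shape)
        # Each row is one observation; a probe face is assigned to the
        # subject whose trained HMM yields the highest likelihood.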

  2. Age Dependent Face Recognition using Eigenface

    Directory of Open Access Journals (Sweden)

    Hlaing Htake Khaung Tin

    2013-10-01

    Face recognition is the most successful form of human surveillance. Face recognition technology, used to improve human efficiency in recognizing faces, is one of the fastest growing fields in the biometric industry. In the first stage, age is classified into eleven categories that characterize a person's oldness in terms of age. The second stage of the process is face recognition based on the predicted age. Age prediction has considerable potential applications in human-computer interaction and multimedia communication. This paper proposes an eigenface-based algorithm for estimating the age of an image from the database. The eigenface has proven to be a useful and robust cue for age prediction, age simulation, face recognition, localization and tracking. The scheme is based on an information-theoretic approach that decomposes face images into a small set of characteristic feature images called eigenfaces, which may be thought of as the principal components of the initial training set of face images. The eigenface approach used in this scheme has advantages over other face recognition methods in its speed, simplicity, learning capability and robustness to small changes in the face image.

  3. Face recognition system and method using face pattern words and face pattern bytes

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Yufeng

    2014-12-23

    The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognition in identification tasks other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.

  4. Extraversion predicts individual differences in face recognition.

    Science.gov (United States)

    Li, Jingguang; Tian, Moqian; Fang, Huizhen; Xu, Miao; Li, He; Liu, Jia

    2010-07-01

    In daily life, one of the most common social tasks we perform is to recognize faces. However, the relation between face recognition ability and social activities is largely unknown. Here we ask whether individuals with better social skills are also better at recognizing faces. We found that extraverts who have better social skills correctly recognized more faces than introverts. However, this advantage was absent when extraverts were asked to recognize non-social stimuli (e.g., flowers). In particular, the underlying facet that makes extraverts better face recognizers is the gregariousness facet that measures the degree of inter-personal interaction. In addition, the link between extraversion and face recognition ability was independent of general cognitive abilities. These findings provide the first evidence that links face recognition ability to our daily activity in social communication, supporting the hypothesis that extraverts are better at decoding social information than introverts.

  5. Contextual modulation of biases in face recognition.

    Directory of Open Access Journals (Sweden)

    Fatima Maria Felisberti

    BACKGROUND: The ability to recognize the faces of potential cooperators and cheaters is fundamental to social exchanges, given that cooperation for mutual benefit is expected. Studies addressing biases in face recognition have so far proved inconclusive, with reports of biases towards faces of cheaters, biases towards faces of cooperators, or no biases at all. This study attempts to uncover possible causes underlying such discrepancies. METHODOLOGY AND FINDINGS: Four experiments were designed to investigate biases in face recognition during social exchanges when behavioral descriptors (prosocial, antisocial or neutral) embedded in different scenarios were tagged to faces during memorization. Face recognition, measured as accuracy and response latency, was tested with modified yes-no, forced-choice and recall tasks (N = 174). An enhanced recognition of faces tagged with prosocial descriptors was observed when the encoding scenario involved financial transactions and the rules of the social contract were not explicit (experiments 1 and 2). Such bias was eliminated or attenuated by making participants explicitly aware of "cooperative", "cheating" and "neutral/indifferent" behaviors via a pre-test questionnaire and then adding such tags to behavioral descriptors (experiment 3). Further, in a social judgment scenario with descriptors of salient moral behaviors, recognition of antisocial and prosocial faces was similar, but significantly better than neutral faces (experiment 4). CONCLUSION: The results highlight the relevance of descriptors and scenarios of social exchange in face recognition when the frequency of prosocial and antisocial individuals in a group is similar. Recognition biases towards prosocial faces emerged when descriptors did not state the rules of a social contract or the moral status of a behavior, and they point to the existence of broad and flexible cognitive abilities finely tuned to minor changes in social context.

  6. The Moving Window Technique: A Window into Developmental Changes in Attention during Facial Emotion Recognition

    Science.gov (United States)

    Birmingham, Elina; Meixner, Tamara; Iarocci, Grace; Kanan, Christopher; Smilek, Daniel; Tanaka, James W.

    2013-01-01

    The strategies children employ to selectively attend to different parts of the face may reflect important developmental changes in facial emotion recognition. Using the Moving Window Technique (MWT), children aged 5-12 years and adults (N = 129) explored faces with a mouse-controlled window in an emotion recognition task. An…

  7. Emotion Recognition and Visual-Scan Paths in Fragile X Syndrome

    Science.gov (United States)

    Shaw, Tracey A.; Porter, Melanie A.

    2013-01-01

    This study investigated emotion recognition abilities and visual scanning of emotional faces in 16 Fragile X syndrome (FXS) individuals compared to 16 chronological-age and 16 mental-age matched controls. The relationships between emotion recognition, visual scan-paths and symptoms of social anxiety, schizotypy and autism were also explored.…

  8. Recovery from Emotion Recognition Impairment after Temporal Lobectomy

    Science.gov (United States)

    Benuzzi, Francesca; Zamboni, Giovanna; Meletti, Stefano; Serafini, Marco; Lui, Fausta; Baraldi, Patrizia; Duzzi, Davide; Rubboli, Guido; Tassinari, Carlo Alberto; Nichelli, Paolo Frigio

    2014-01-01

    Mesial temporal lobe epilepsy (MTLE) can be associated with emotion recognition impairment that can be particularly severe in patients with early-onset seizures (1–3). Whereas there is growing evidence that memory and language can improve in seizure-free patients after anterior temporal lobectomy (ATL) (4), the effects of surgery on emotional processing are still unknown. We used functional magnetic resonance imaging (fMRI) to investigate short-term reorganization of networks engaged in facial emotion recognition in MTLE patients. Behavioral and fMRI data were collected from six patients before and after ATL. During the fMRI scan, patients were asked to make a gender decision on fearful and neutral faces. Behavioral data demonstrated that two patients with early-onset right MTLE were impaired in fear recognition, while fMRI results showed they lacked specific activations for fearful faces. Post-ATL behavioral data showed improved emotion recognition ability, while fMRI demonstrated the recruitment of a functional network for fearful face processing. Our results suggest that ATL elicited brain plasticity mechanisms allowing behavioral and fMRI improvement in emotion recognition. PMID:24936197

  9. Changes in social emotion recognition following traumatic frontal lobe injury

    Institute of Scientific and Technical Information of China (English)

    Ana Teresa Martins; Luis Faísca; Francisco Esteves; Cláudia Simão; Mariline Gomes Justo; Angélica Muresan; Alexandra Reis

    2012-01-01

    Changes in social and emotional behaviour have been consistently observed in patients with traumatic brain injury. These changes are associated with emotion recognition deficits, which represent one of the major barriers to successful familial and social reintegration. In the present study, 32 patients with traumatic brain injury involving the frontal lobe and 41 age- and education-matched healthy controls were analyzed. A Go/No-Go task was designed in which each participant had to recognize faces representing three social emotions (arrogance, guilt and jealousy). Results suggested that the ability to recognize two social emotions (arrogance and jealousy) was significantly reduced in patients with traumatic brain injury, indicating that frontal lesions can reduce emotion recognition ability. In addition, analysis of the results by hemispheric lesion location (right, left or bilateral) suggested that the bilateral-lesion subgroup showed lower accuracy on all three social emotions.

  10. Degraded Impairment of Emotion Recognition in Parkinson's Disease Extends from Negative to Positive Emotions.

    Science.gov (United States)

    Lin, Chia-Yao; Tien, Yi-Min; Huang, Jong-Tsun; Tsai, Chon-Haw; Hsu, Li-Chuan

    2016-01-01

    Because of dopaminergic neurodegeneration, patients with Parkinson's disease (PD) show impairment in the recognition of negative facial expressions. In the present study, we aimed to determine whether PD patients with more advanced motor problems would show a much greater deficit in recognition of emotional facial expressions than a control group and whether impairment of emotion recognition would extend to positive emotions. Twenty-nine PD patients and 29 age-matched healthy controls were recruited. Participants were asked to discriminate emotions in Experiment 1 and identify gender in Experiment 2. In Experiment 1, PD patients demonstrated a recognition deficit for negative (sadness and anger) and positive faces. Further analysis showed that only PD patients with high motor dysfunction performed poorly in recognition of happy faces. In Experiment 2, PD patients showed an intact ability for gender identification, and the results eliminated possible abilities in the functions measured in Experiment 2 as alternative explanations for the results of Experiment 1. We concluded that patients' ability to recognize emotions deteriorated as the disease progressed. Recognition of negative emotions was impaired first, and then the impairment extended to positive emotions.

  11. Utterance independent bimodal emotion recognition in spontaneous communication

    Science.gov (United States)

    Tao, Jianhua; Pan, Shifeng; Yang, Minghao; Li, Ya; Mu, Kaihui; Che, Jianfeng

    2011-12-01

    Emotion expressions are sometimes mixed with utterance expressions in spontaneous face-to-face communication, which creates difficulties for emotion recognition. This article introduces methods for reducing utterance influences on visual parameters in audio-visual emotion recognition. The audio and visual channels are first combined under a Multistream Hidden Markov Model (MHMM). The utterance reduction is then achieved by finding the residual between the real visual parameters and the outputs of the utterance-related visual parameters. The article introduces the Fused Hidden Markov Model Inversion method, trained on a neutrally expressed audio-visual corpus, to solve this problem. To reduce computational complexity, the inversion model is further simplified to a Gaussian Mixture Model (GMM) mapping. Compared with traditional bimodal emotion recognition methods (e.g., SVM, CART, Boosting), the utterance reduction method gives better emotion recognition results. The experiments also show the effectiveness of our emotion recognition system when used in a live environment.

  12. Real-time, face recognition technology

    Energy Technology Data Exchange (ETDEWEB)

    Brady, S.

    1995-11-01

    The Institute for Scientific Computing Research (ISCR) at Lawrence Livermore National Laboratory recently developed the real-time, face recognition technology KEN. KEN uses novel imaging devices such as silicon retinas developed at Caltech or off-the-shelf CCD cameras to acquire images of a face and to compare them to a database of known faces in a robust fashion. The KEN-Online project makes that recognition technology accessible through the World Wide Web (WWW), an internet service that has recently seen explosive growth. A WWW client can submit face images, add them to the database of known faces and submit other pictures that the system tries to recognize. KEN-Online serves to evaluate the recognition technology and grow a large face database. KEN-Online includes the use of public domain tools such as mSQL for its name-database and perl scripts to assist the uploading of images.

  13. Automatic, Dimensional and Continuous Emotion Recognition

    NARCIS (Netherlands)

    Gunes, Hatice; Pantic, Maja; Vallverdú, J.

    2010-01-01

    Recognition and analysis of human emotions have attracted a lot of interest in the past two decades and have been researched extensively in neuroscience, psychology, cognitive sciences, and computer sciences. Most of the past research in machine analysis of human emotion has focused on recognition of…

  14. An Introduction to Face Recognition Technology

    Directory of Open Access Journals (Sweden)

    Shang-Hung Lin

    2000-01-01

    Face recognition has recently attracted much attention in the network multimedia information access community. Areas such as network security, content indexing and retrieval, and video compression benefit from face recognition technology because "people" are the center of attention in a great deal of video. Network access control via face recognition not only makes it virtually impossible for hackers to steal one's "password", but also increases user-friendliness in human-computer interaction. Indexing and/or retrieving video data based on the appearances of particular persons will be useful for users such as news reporters, political scientists, and moviegoers. For videophone and teleconferencing applications, face recognition also enables a more efficient coding scheme. In this paper, we give an introduction to this information processing technology. The paper presents the generic framework for a face recognition system and the variants that are frequently encountered by the face recognizer. Several well-known face recognition algorithms, such as eigenfaces and neural networks, are also explained.

  15. Is facial emotion recognition impairment in schizophrenia identical for different emotions? A signal detection analysis.

    Science.gov (United States)

    Tsoi, Daniel T; Lee, Kwang-Hyuk; Khokhar, Waqqas A; Mir, Nusrat U; Swalli, Jaspal S; Gee, Kate A; Pluck, Graham; Woodruff, Peter W R

    2008-02-01

    Patients with schizophrenia have difficulty recognising the emotion that corresponds to a given facial expression. According to signal detection theory, two separate processes are involved in facial emotion perception: a sensory process (measured by sensitivity, the ability to distinguish one facial emotion from another) and a cognitive decision process (measured by the response criterion, the tendency to judge a facial emotion as a particular emotion). It is uncertain whether facial emotion recognition deficits in schizophrenia are primarily due to impaired sensitivity or to response bias. In this study, we hypothesised that individuals with schizophrenia would have both diminished sensitivity and different response criteria in facial emotion recognition across different emotions compared with healthy controls. Twenty-five individuals with a DSM-IV diagnosis of schizophrenia were compared with age- and IQ-matched healthy controls. Participants performed a "yes-no" task, indicating whether each of 88 briefly shown Ekman faces expressed one of the target emotions, in three randomly ordered runs (happy, sad and fear). Sensitivity and response criteria for facial emotion recognition were calculated as d-prime and ln(beta), respectively, using signal detection theory. Patients with schizophrenia showed diminished sensitivity (d-prime) in recognising happy faces, but not faces that expressed fear or sadness. By contrast, patients exhibited a significantly less strict response criterion (ln(beta)) in recognising fearful and sad faces. Our results suggest that patients with schizophrenia have a specific deficit in recognising happy faces, whereas they are more inclined to judge any facial expression as fearful or sad.
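
    The two signal detection quantities used here follow directly from hit and false-alarm counts: d' = z(H) - z(F) and ln(beta) = (z(F)^2 - z(H)^2) / 2. A small sketch with hypothetical counts follows; the loglinear correction is an assumption, since the paper does not state how hit or false-alarm rates of 0 or 1 were handled.

        from scipy.stats import norm

        def dprime_lnbeta(hits, misses, false_alarms, correct_rejections):
            # Loglinear correction keeps z-scores finite at rates of 0 or 1.
            h = (hits + 0.5) / (hits + misses + 1.0)
            f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            zh, zf = norm.ppf(h), norm.ppf(f)
            return zh - zf, (zf ** 2 - zh ** 2) / 2.0  # d', ln(beta)

        # Hypothetical counts for one run (e.g., "is this face happy?").
        d, lb = dprime_lnbeta(hits=35, misses=9,
                              false_alarms=12, correct_rejections=32)
        print(f"d' = {d:.2f}, ln(beta) = {lb:.2f}")
        # ln(beta) < 0 indicates a liberal (less strict) response criterion,
        # i.e., a bias towards answering "yes".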

  16. Exemplar-based Face Recognition from Video

    DEFF Research Database (Denmark)

    Krüger, Volker; Zhou, Shaohua; Chellappa, Rama

    2005-01-01

    …temporal relations: this allows the system to use dynamics as well as to generate warnings when 'implausible' situations occur, or to circumvent these altogether. We have studied the effectiveness of temporal integration for recognition purposes by using face recognition as an example problem. Face recognition is a prominent problem and has been studied more extensively than almost any other recognition problem. An observation is that face recognition works well in ideal conditions; if those conditions are not met, however, all present algorithms break down disgracefully. This problem appears to be general to all vision techniques that intend to extract visual information out of a low-SNR image. It is exactly a strength of cognitive systems that they are able to cope with non-ideal situations. In this chapter we present a technique that allows visual information to be integrated over time…

  17. How fast is famous face recognition?

    Directory of Open Access Journals (Sweden)

    Gladys Barragan-Jason

    2012-10-01

    The rapid recognition of familiar faces is crucial for social interactions. However, the actual speed with which recognition can be achieved remains largely unknown, as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, its speed has not been directly compared with that of fast visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks: a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones) and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the speed constraints, subjects were slow when they had to categorize famous faces: the minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail.

  1. Emotional signals from faces, bodies and scenes influence observers' face expressions, fixations and pupil-size

    NARCIS (Netherlands)

    Kret, M.E.; Roelofs, K.; Stekelenburg, J.J.; de Gelder, B.

    2013-01-01

    We receive emotional signals from different sources, including the face, the whole body, and the natural scene. Previous research has shown the importance of context provided by the whole body and the scene on the recognition of facial expressions. This study measured physiological responses to

  2. Emotion recognition and regulation in anorexia nervosa.

    Science.gov (United States)

    Harrison, Amy; Sullivan, Sarah; Tchanturia, Kate; Treasure, Janet

    2009-01-01

    It is recognized that emotional problems lie at the core of eating disorders (EDs) but scant attention has been paid to specific aspects such as emotional recognition, regulation and expression. This study aimed to investigate emotion recognition using the Reading the Mind in the Eyes (RME) task and emotion regulation using the Difficulties in Emotion Regulation Scale (DERS) in 20 women with anorexia nervosa (AN) and 20 female healthy controls (HCs). Women with AN had significantly lower scores on RME and reported significantly more difficulties with emotion regulation than HCs. There was a significant negative correlation between total DERS score and correct answers from the RME. These results suggest that women with AN have difficulties with emotional recognition and regulation. It is uncertain whether these deficits result from starvation and to what extent they might be reversed by weight gain alone. These deficits may need to be targeted in treatment.

  3. Novel acoustic features for speech emotion recognition

    Institute of Scientific and Technical Information of China (English)

    ROH Yong-Wan; KIM Dong-Ju; LEE Woo-Seok; HONG Kwang-Seok

    2009-01-01

    This paper focuses on acoustic features that effectively improve the recognition of emotion in human speech. The novel features in this paper are based on spectral entropy parameters: fast Fourier transform (FFT) spectral entropy, delta FFT spectral entropy, Mel-frequency filter bank (MFB) spectral entropy, and delta MFB spectral entropy. Spectral entropy features are simple; they reflect the frequency characteristics of speech and how those characteristics change. We implement an emotion rejection module using the probability distributions of recognized scores and rejected scores, which reduces the false recognition rate and improves overall performance. Recognized scores and rejected scores refer to the probabilities of recognized and rejected emotion recognition results, respectively; these scores are first obtained from a pattern recognition procedure. The pattern recognition phase uses a Gaussian mixture model (GMM). We classify four emotional states: anger, sadness, happiness and neutrality. The proposed method is evaluated using 45 sentences per emotion for 30 subjects, 15 male and 15 female. Experimental results show that the proposed method is superior to existing GMM-based emotion recognition methods using energy, zero crossing rate (ZCR), linear prediction coefficient (LPC), and pitch parameters. One of the proposed features, the combined MFB and delta MFB spectral entropy, improves performance by approximately 10% compared with existing feature parameters for speech emotion recognition. Applying emotion rejection to low-confidence scores yields a further 4% performance improvement.
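
    The entropy features can be sketched directly from their definitions: treat each frame's (optionally filter-bank-pooled) power spectrum as a probability distribution, take its Shannon entropy, and difference across frames for the delta features. In the sketch below the filter bank is a simplified linearly spaced triangular bank rather than a true Mel bank, and the frame sizes are made up.

        import numpy as np

        def triangular_filterbank(n_fft_bins, n_bands):
            # Linearly spaced triangular filters; a true Mel bank would place
            # the band edges on the Mel scale (detail assumed, not given here).
            edges = np.linspace(0, n_fft_bins - 1, n_bands + 2).astype(int)
            fb = np.zeros((n_bands, n_fft_bins))
            for b in range(n_bands):
                l, c, r = edges[b], edges[b + 1], edges[b + 2]
                fb[b, l:c + 1] = np.linspace(0, 1, c - l + 1)
                fb[b, c:r + 1] = np.linspace(1, 0, r - c + 1)
            return fb

        def spectral_entropy(power):
            # Shannon entropy of the normalized power spectrum.
            p = power / (power.sum() + 1e-12)
            return -(p * np.log2(p + 1e-12)).sum()

        def entropy_features(frames, n_bands=20):
            spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
            fft_ent = np.array([spectral_entropy(s) for s in spec])
            fb = triangular_filterbank(spec.shape[1], n_bands)
            mfb_ent = np.array([spectral_entropy(fb @ s) for s in spec])
            # Delta features: frame-to-frame change of each entropy track.
            return fft_ent, np.gradient(fft_ent), mfb_ent, np.gradient(mfb_ent)

        # Toy utterance: 100 frames of 400 samples each.
        rng = np.random.default_rng(5)
        frames = rng.normal(size=(100, 400))
        fft_e, d_fft_e, mfb_e, d_mfb_e = entropy_features(frames)
        print("FFT spectral entropy of first frame:", round(fft_e[0], 3))
        # These four per-frame tracks would feed a GMM classifier over the
        # four emotion classes (anger, sadness, happiness, neutrality).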

  4. Face Recognition using Eigenfaces and Neural Networks

    Directory of Open Access Journals (Sweden)

    Mohamed Rizon

    2006-01-01

    In this study, we develop a computational model to identify the face of an unknown person by applying eigenfaces. Eigenfaces are used to extract the basis faces from human face images; face images are then projected onto the eigenfaces to obtain distinctive feature vectors. These feature vectors can be used to identify an unknown face using a backpropagation neural network that utilizes Euclidean distance for classification and recognition. The ORL database used for this investigation consists of 400 face images of 40 people. Jacobi's method was implemented to compute the eigenvalues and eigenvectors for the eigenface decomposition. Classification and recognition using the backpropagation neural network showed impressive results in classifying face images.
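
    The described pipeline (eigenface projection followed by a backpropagation classifier) can be approximated in a few lines. The sketch below is not the paper's implementation: scikit-learn's PCA and MLPClassifier stand in for the Jacobi eigensolver and the hand-rolled network, the Euclidean-distance output rule is replaced by the MLP's own decision, and synthetic data stand in for ORL.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(6)

        # Toy stand-in for ORL: 40 subjects x 10 images of 64x64 pixels.
        subjects = rng.normal(size=(40, 64 * 64))
        X = np.vstack([s + 0.3 * rng.normal(size=64 * 64)
                       for s in subjects for _ in range(10)])
        y = np.repeat(np.arange(40), 10)

        # Eigenface projection: PCA keeps the leading face-space components.
        pca = PCA(n_components=50).fit(X)
        W = pca.transform(X)

        # Backpropagation network trained on the eigenface coefficients.
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                            random_state=0).fit(W, y)

        probe = subjects[7] + 0.3 * rng.normal(size=64 * 64)
        print("predicted subject:", clf.predict(pca.transform(probe[None, :]))[0])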

  5. Impaired facial emotion recognition in a ketamine model of psychosis.

    Science.gov (United States)

    Ebert, Andreas; Haussleiter, Ida Sibylle; Juckel, Georg; Brüne, Martin; Roser, Patrik

    2012-12-30

    Social cognitive disabilities are a common feature in schizophrenia. Given the role of glutamatergic neurotransmission in schizophrenia-related cognitive impairments, we investigated the effects of the glutamatergic NMDA receptor antagonist ketamine on facial emotion recognition. Eighteen healthy male subjects were tested on two occasions, one without medication and one after administration of subanesthetic doses of intravenous ketamine. Emotion recognition was examined using the Ekman 60 Faces Test. In addition, attention was measured by the Continuous Performance Test (CPT), and psychopathology was rated using the Psychotomimetic States Inventory (PSI). Ketamine produced a non-significant deterioration of global emotion recognition abilities. Specifically, the ability to correctly identify the facial expression of sadness was significantly reduced in the ketamine condition. These results were independent of psychotic symptoms and selective attention. Our results point to the involvement of the glutamatergic system in the ability to recognize facial emotions.

  6. EMOTION RECOGNITION OF VIRTUAL AGENTS FACIAL EXPRESSIONS: THE EFFECTS OF AGE AND EMOTION INTENSITY

    Science.gov (United States)

    Beer, Jenay M.; Fisk, Arthur D.; Rogers, Wendy A.

    2014-01-01

    People make determinations about the social characteristics of an agent (e.g., robot or virtual agent) by interpreting social cues displayed by the agent, such as facial expressions. Although a considerable amount of research has been conducted investigating age-related differences in emotion recognition of human faces (e.g., Sullivan & Ruffman, 2004), the effect of age on emotion identification of virtual agent facial expressions has been largely unexplored. Age-related differences in emotion recognition of facial expressions are an important factor to consider in the design of agents that may assist older adults in a recreational or healthcare setting. The purpose of the current research was to investigate whether age-related differences in facial emotion recognition can extend to emotion-expressive virtual agents. Younger and older adults performed a recognition task with a virtual agent expressing six basic emotions. Larger age-related differences were expected for virtual agents displaying negative emotions, such as anger, sadness, and fear. In fact, the results indicated that older adults showed a decrease in emotion recognition accuracy for a virtual agent's emotions of anger, fear, and happiness. PMID:25552896

  7. A novel thermal face recognition approach using face pattern words

    Science.gov (United States)

    Zheng, Yufeng

    2010-04-01

    A reliable thermal face recognition system can enhance national security applications such as the prevention of terrorism, surveillance, monitoring and tracking, especially at nighttime. The system can be applied at airports, customs or high-alert facilities (e.g., nuclear power plants) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long-wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processing is employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern word (FPW) creation, and face identification by a similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to all FPWs being compared (no further transforms). A high identification rate (97.44% with Top-1 match) has been achieved with the proposed approach on our preliminary face dataset of 39 subjects, regardless of operating time and glasses-wearing condition.
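
    The matching stage can be illustrated with a much-reduced sketch: binarize the signs of the real and imaginary parts of complex Gabor responses into one bit string per face (a stand-in for the paper's full multi-scale face pattern words) and compare bit strings by fractional Hamming distance, with an optional mask that drops eyeglass-covered bits. All filter parameters and sizes below are assumptions.

        import numpy as np

        def gabor_kernel(size, theta, freq=0.25, sigma=3.0):
            # Complex Gabor filter at orientation theta (single scale only,
            # far simpler than the paper's full Gabor wavelet transform).
            r = np.arange(size) - size // 2
            xx, yy = np.meshgrid(r, r)
            xr = xx * np.cos(theta) + yy * np.sin(theta)
            env = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
            return env * np.exp(2j * np.pi * freq * xr)

        def face_pattern_word(face, n_orient=4, ksize=9):
            # Binarize signs of real/imaginary Gabor responses into one bit string.
            bits = []
            F = np.fft.fft2(face)
            for k in range(n_orient):
                g = gabor_kernel(ksize, theta=np.pi * k / n_orient)
                G = np.fft.fft2(g, s=face.shape)
                resp = np.fft.ifft2(F * G)  # convolution via the FFT
                bits += [np.real(resp).ravel() > 0, np.imag(resp).ravel() > 0]
            return np.concatenate(bits)

        def hamming_distance(c1, c2, mask=None):
            # Fractional Hamming distance; an eyeglasses mask removes bits.
            keep = np.ones_like(c1, bool) if mask is None else ~mask
            return np.mean(c1[keep] != c2[keep])

        rng = np.random.default_rng(7)
        gallery = rng.normal(size=(32, 32))
        probe = gallery + 0.2 * rng.normal(size=(32, 32))  # same subject, noisy
        stranger = rng.normal(size=(32, 32))

        g, p, s = (face_pattern_word(f) for f in (gallery, probe, stranger))
        print("same subject distance:", round(hamming_distance(g, p), 3))
        print("different subject distance:", round(hamming_distance(g, s), 3))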

  8. Facial emotional recognition in schizophrenia: preliminary results of the virtual reality program for facial emotional recognition

    Directory of Open Access Journals (Sweden)

    Teresa Souto

    2013-01-01

    BACKGROUND: Significant deficits in emotional recognition and social perception characterize patients with schizophrenia and have a direct negative impact both on interpersonal relationships and on social functioning. Virtual reality, as a methodological resource, may have high potential for assessing and training skills in people suffering from mental illness. OBJECTIVES: To present preliminary results of a facial emotion recognition assessment designed for patients with schizophrenia, using 3D avatars and virtual reality. METHODS: 3D avatars reproducing images developed with the FaceGen® software were presented within a three-dimensional virtual environment. Each avatar was presented to a group of 12 patients with schizophrenia and a reference group of 12 subjects without psychiatric pathology. RESULTS: The results show that the facial emotions of happiness and anger are better recognized by both groups and that the major difficulties arise in recognizing fear and disgust. Frontal alpha electroencephalography variations were found during the presentation of anger and disgust stimuli among patients with schizophrenia. DISCUSSION: The program's evaluation module can add value for both patient and therapist, allowing task execution in a non-anxiogenic environment that nevertheless resembles real experience.

  9. WCTFR : WRAPPING CURVELET TRANSFORM BASED FACE RECOGNITION

    Directory of Open Access Journals (Sweden)

    Arunalatha J S

    2015-03-01

    Recognition of a person based on biological features is efficient compared with traditional knowledge-based recognition systems. In this paper we propose Wrapping Curvelet Transform based Face Recognition (WCTFR). The Wrapping Curvelet Transform (WCT) is applied to the face images of the database and to test images to derive coefficients. The obtained coefficient matrix is rearranged to form the WCT features of each image. The WCT features of a test image are compared with those of the database images using Euclidean Distance (ED) to compute the Equal Error Rate (EER) and True Success Rate (TSR). The proposed algorithm with WCT performs better than the Curvelet Transform algorithms used in [1], [10] and [11].

  10. A change in strategy: Static emotion recognition in Malaysian Chinese

    Directory of Open Access Journals (Sweden)

    Chrystalle B.Y. Tan

    2015-12-01

    Studies have shown that while East Asians focus on the center of the face to recognize identities, they adapt their strategy by focusing more on the eyes to identify emotions, suggesting that in Eastern cultures the eyes may contain salient information about emotional state. Western Caucasians, by contrast, employ the same strategy for both tasks, moving between the eyes and mouth to identify both identities and emotions. Malaysian Chinese have been shown to focus on the eyes and nose more than the mouth during face recognition tasks, which represents an intermediate between Eastern and Western looking strategies. The current study examined whether Malaysian Chinese continue to employ an intermediate strategy or shift towards an Eastern or Western pattern (by fixating more on the eyes or mouth, respectively) during an emotion recognition task. Participants focused most on the eyes, followed by the nose and then the mouth. Directing attention towards the eye region resulted in better recognition of certain own-race than other-race emotions. Although the fixation patterns appear similar for both tasks, further analyses showed that fixations on the eyes were reduced whereas fixations on the nose and mouth were increased during emotion recognition, indicating that participants adapt their looking strategies to the aims of the task.

  11. Face recognition using facial expression: a novel approach

    Science.gov (United States)

    Singh, Deepak Kumar; Gupta, Priya; Tiwary, U. S.

    2008-04-01

    Facial expressions are undoubtedly the most effective form of nonverbal communication. The face has always been the equation of a person's identity; it draws the demarcation line between identity and extinction, and each line on the face adds an attribute to that identity. These lines become prominent when we experience an emotion, and they do not change completely with age. In this paper we propose a new technique for face recognition that focuses on the facial expressions of the subject to identify his face. This is a grey area on which not much light has been thrown earlier. According to earlier research it is difficult to alter one's natural expression, so our technique will be beneficial for identifying occluded or intentionally disguised faces. The results of the experiments conducted suggest that this technique can give a new direction to the field of face recognition, provide a strong base for the area, and serve as a core method for critical defense and security related issues.

  12. Face Detection and Modeling for Recognition

    Science.gov (United States)

    2002-01-01

    Excerpts from the report's figure list: facial components show the important role of hair and face outlines in human face recognition; caricatures of (a) Vincent Van Gogh; (b) Jim Carrey; (c) Arnold Schwarzenegger; (d) Einstein; (e) G. W. Bush; and (f) Bill Gates.

  13. Robust Face Recognition through Local Graph Matching

    Directory of Open Access Journals (Sweden)

    Ehsan Fazl-Ersi

    2007-09-01

    Full Text Available A novel face recognition method is proposed, in which face images are represented by a set of local labeled graphs, each containing information about the appearance and geometry of a 3-tuple of face feature points, extracted using the Local Feature Analysis (LFA) technique. Our method automatically learns a model set and builds a graph space for each individual. A two-stage method for optimal matching between the graphs extracted from a probe image and the trained model graphs is proposed. The recognition of each probe face image is performed by assigning it to the trained individual with the maximum number of references. Our approach achieves a perfect result on the ORL face set and an accuracy rate of 98.4% on the FERET face set, which shows the superiority of our method over all considered state-of-the-art methods.

  14. Face Recognition Based on Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ali Javed

    2013-02-01

    Full Text Available The purpose of the proposed research work is to develop a computer system that can recognize a person by comparing the characteristics of the face to those of known individuals. The main focus is on frontal two-dimensional images taken in a controlled environment, i.e. with constant illumination and background. Other methods of identification and verification, such as iris or fingerprint scans, require high-quality and costly equipment, but face recognition requires only a normal camera giving a 2-D frontal image of the person to be recognized. The Principal Component Analysis technique has been used in the proposed face recognition system. The purpose is to compare the results of the technique under different conditions and to find the most efficient approach for developing a facial recognition system.
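
    A minimal eigenface-style sketch of the pipeline described above, assuming aligned, same-size frontal grayscale images; the synthetic data, component count, and 1-nearest-neighbor matcher are illustrative assumptions, not the paper's exact configuration.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        # Stand-in data: 20 "images" of 32x32 pixels, 5 subjects with 4 images each.
        X = rng.normal(size=(20, 32 * 32))
        y = np.repeat(np.arange(5), 4)

        pca = PCA(n_components=10, whiten=True)    # project faces into eigenface space
        X_pca = pca.fit_transform(X)

        clf = KNeighborsClassifier(n_neighbors=1)  # match by nearest projected face
        clf.fit(X_pca, y)

        probe = X[0] + rng.normal(scale=0.1, size=32 * 32)
        print("predicted subject:", clf.predict(pca.transform(probe[None]))[0])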

  15. Face or body? Oxytocin improves perception of emotions from facial expressions in incongruent emotional body context.

    Science.gov (United States)

    Perry, Anat; Aviezer, Hillel; Goldstein, Pavel; Palgi, Sharon; Klein, Ehud; Shamay-Tsoory, Simone G

    2013-11-01

    The neuropeptide oxytocin (OT) has been repeatedly reported to play an essential role in the regulation of social cognition in humans in general, and specifically in enhancing the recognition of emotions from facial expressions. The latter was assessed in different paradigms that rely primarily on isolated and decontextualized emotional faces. However, recent evidence has indicated that the perception of basic facial expressions is not context invariant and can be categorically altered by context, especially body context, at early perceptual levels. Body context has a strong effect on our perception of emotional expressions, especially when the actual target face and the contextually expected face are perceptually similar. To examine whether and how OT affects emotion recognition, we investigated the role of OT in categorizing facial expressions in incongruent body contexts. Our results show that in the combined process of deciphering emotions from facial expressions and from context, OT gives an advantage to the face. This advantage is most evident when the target face and the contextually expected face are perceptually similar.

  16. RGB-D-T based Face Recognition

    DEFF Research Database (Denmark)

    Nikisins, Olegs; Nasrollahi, Kamal; Greitans, Modris

    2014-01-01

    Facial images are of critical importance in many real-world applications from gaming to surveillance. The current literature on facial image analysis, from face detection to face and facial expression recognition, is mainly concerned with either RGB, Depth (D), or both of these modalities. But... such analyses have rarely included the Thermal (T) modality. This paper paves the way for performing such facial analyses using synchronized RGB-D-T facial images by introducing a database of 51 persons including facial images of different rotations, illuminations, and expressions. Furthermore, a face recognition...

  17. Direct Neighborhood Discriminant Analysis for Face Recognition

    Directory of Open Access Journals (Sweden)

    Miao Cheng

    2008-01-01

    Full Text Available Face recognition is a challenging problem in computer vision and pattern recognition. Recently, many local geometrical structure-based techniques have been presented to obtain low-dimensional representations of face images with enhanced discriminatory power. However, these methods suffer from the small sample size (SSS) problem or the high computational complexity of high-dimensional data. To overcome these problems, we propose a novel local manifold structure learning method for face recognition, named direct neighborhood discriminant analysis (DNDA), which separates nearby samples of different classes and preserves the local within-class geometry in two separate steps. In addition, PCA preprocessing to greatly reduce the dimension is not needed in DNDA, avoiding loss of discriminative information. Experiments conducted on the ORL, Yale, and UMIST face databases show the effectiveness of the proposed method.

  18. Ventromedial prefrontal cortex mediates visual attention during facial emotion recognition.

    Science.gov (United States)

    Wolf, Richard C; Philippi, Carissa L; Motzkin, Julian C; Baskaya, Mustafa K; Koenigs, Michael

    2014-06-01

    The ventromedial prefrontal cortex is known to play a crucial role in regulating human social and emotional behaviour, yet the precise mechanisms by which it subserves this broad function remain unclear. Whereas previous neuropsychological studies have largely focused on the role of the ventromedial prefrontal cortex in higher-order deliberative processes related to valuation and decision-making, here we test whether ventromedial prefrontal cortex may also be critical for more basic aspects of orienting attention to socially and emotionally meaningful stimuli. Using eye tracking during a test of facial emotion recognition in a sample of lesion patients, we show that bilateral ventromedial prefrontal cortex damage impairs visual attention to the eye regions of faces, particularly for fearful faces. This finding demonstrates a heretofore unrecognized function of the ventromedial prefrontal cortex: the basic attentional process of controlling eye movements to faces expressing emotion.

  19. Face-space: A unifying concept in face recognition research.

    Science.gov (United States)

    Valentine, Tim; Lewis, Michael B; Hills, Peter J

    2016-10-01

    The concept of a multidimensional psychological space, in which faces can be represented according to their perceived properties, is fundamental to the modern theorist in face processing. Yet the idea was not clearly expressed until 1991. The background that led to the development of face-space is explained, and its continuing influence on theories of face processing is discussed. Research that has explored the properties of the face-space and sought to understand caricature, including facial adaptation paradigms, is reviewed. Face-space as a theoretical framework for understanding the effect of ethnicity and the development of face recognition is evaluated. Finally, two applications of face-space in the forensic setting are discussed. From initially being presented as a model to explain distinctiveness, inversion, and the effect of ethnicity, face-space has become a central pillar in many aspects of face processing. It is currently being developed to help us understand adaptation effects with faces. While in principle a simple concept, face-space has shaped, and continues to shape, our understanding of face perception.

  20. Processing of emotional faces in social phobia

    Directory of Open Access Journals (Sweden)

    Nicole Kristjansen Rosenberg

    2011-02-01

    Full Text Available Previous research has found that individuals with social phobia differ from controls in their processing of emotional faces. For instance, people with social phobia show increased attention to briefly presented threatening faces. However, when exposure times are increased, the direction of this attentional bias is more unclear. Studies investigating eye movements have found both increased as well as decreased attention to threatening faces in socially anxious participants. The current study investigated eye movements to emotional faces in eight patients with social phobia and 34 controls. Three different tasks with different exposure durations were used, which allowed for an investigation of the time course of attention. At the early time interval, patients showed a complex pattern of both vigilance and avoidance of threatening faces. At the longest time interval, patients avoided the eyes of sad, disgust, and neutral faces more than controls, whereas there were no group differences for angry faces.

  1. Aging and attentional biases for emotional faces.

    Science.gov (United States)

    Mather, Mara; Carstensen, Laura L

    2003-09-01

    We examined age differences in attention to and memory for faces expressing sadness, anger, and happiness. Participants saw a pair of faces, one emotional and one neutral, and then a dot probe that appeared in the location of one of the faces. In two experiments, older adults responded faster to the dot if it was presented on the same side as a neutral face than if it was presented on the same side as a negative face. Younger adults did not exhibit this attentional bias. Interactions of age and valence were also found for memory for the faces, with older adults remembering positive better than negative faces. These findings reveal that in their initial attention, older adults avoid negative information. This attentional bias is consistent with older adults' generally better emotional well-being and their tendency to remember negative less well than positive information.

  2. Robust multi-camera view face recognition

    CERN Document Server

    Kisku, Dakshina Ranjan; Gupta, Phalguni; Sing, Jamuna Kanta

    2010-01-01

    This paper presents a multi-appearance fusion of Principal Component Analysis (PCA) and a generalization of Linear Discriminant Analysis (LDA) for a multi-camera view offline face recognition (verification) system. The generalization of LDA has been extended to establish correlations between the face classes in the transformed representation; this is called the canonical covariate. The proposed system uses Gabor filter banks to characterize facial features by spatial frequency, spatial locality and orientation, in order to compensate for the variations in face instances that occur due to illumination, pose and facial expression changes. Convolving the Gabor filter bank with face images produces Gabor face representations with high-dimensional feature vectors. PCA and the canonical covariate are then applied to the Gabor face representations to reduce the high-dimensional feature spaces into low-dimensional Gabor eigenfaces and Gabor canonical faces. The reduced eigenface vector and canonical face vector are fused together usi...

  3. Face Recognition With Neural Networks

    Science.gov (United States)

    1992-12-01

    The approach is also supported by the work of J. C. Meadows and A. R. Damasio in their studies of prosopagnosia, i.e. of individuals who have lost the ability to recognize faces.

  4. Assessment of Emotional Experience and Emotional Recognition in Complicated Grief

    Science.gov (United States)

    Fernández-Alcántara, Manuel; Cruz-Quintana, Francisco; Pérez-Marfil, M. N.; Catena-Martínez, Andrés; Pérez-García, Miguel; Turnbull, Oliver H.

    2016-01-01

    There is substantial evidence of bias in the processing of emotion in people with complicated grief (CG). Previous studies have tended to assess the expression of emotion in CG, but other aspects of emotion (mainly emotion recognition, and the subjective aspects of emotion) have not been addressed, despite their importance for practicing clinicians. A quasi-experimental design with two matched groups (Complicated Grief, N = 24 and Non-Complicated Grief, N = 20) was carried out. The Facial Expression of Emotion Test (emotion recognition), a set of pictures from the International Affective Picture System (subjective experience of emotion) and the Symptom Checklist 90 Revised (psychopathology) were employed. The CG group showed lower scores on the dimension of valence for specific conditions on the IAPS, related to the subjective experience of emotion. In addition, they presented higher values of psychopathology. In contrast, statistically significant results were not found for the recognition of emotion. In conclusion, from a neuropsychological point of view, the subjective aspects of emotion and psychopathology seem central in explaining the experience of those with CG. These results are clinically significant for psychotherapists and psychoanalysts working in the field of grief and loss. PMID:26903928

  5. Morphed emotional faces: Emotion detection and misinterpretation in social anxiety

    NARCIS (Netherlands)

    Heuer, K.; Lange, W.G.; Isaac, L.; Rinck, M.; Becker, E.S.

    2010-01-01

    The current study investigated detection and interpretation of emotional facial expressions in high socially anxious (HSA) individuals compared to non-anxious controls (NAC). A version of the morphed faces task was implemented to assess emotion onset perception, decoding accuracy and interpretation,

  6. Self-face recognition in social context.

    Science.gov (United States)

    Sugiura, Motoaki; Sassa, Yuko; Jeong, Hyeonjeong; Wakusawa, Keisuke; Horie, Kaoru; Sato, Shigeru; Kawashima, Ryuta

    2012-06-01

    The concept of "social self" is often described as a representation of the self-reflected in the eyes or minds of others. Although the appearance of one's own face has substantial social significance for humans, neuroimaging studies have failed to link self-face recognition and the likely neural substrate of the social self, the medial prefrontal cortex (MPFC). We assumed that the social self is recruited during self-face recognition under a rich social context where multiple other faces are available for comparison of social values. Using functional magnetic resonance imaging (fMRI), we examined the modulation of neural responses to the faces of the self and of a close friend in a social context. We identified an enhanced response in the ventral MPFC and right occipitoparietal sulcus in the social context specifically for the self-face. Neural response in the right lateral parietal and inferior temporal cortices, previously claimed as self-face-specific, was unaffected for the self-face but unexpectedly enhanced for the friend's face in the social context. Self-face-specific activation in the pars triangularis of the inferior frontal gyrus, and self-face-specific reduction of activation in the left middle temporal gyrus and the right supramarginal gyrus, replicating a previous finding, were not subject to such modulation. Our results thus demonstrated the recruitment of a social self during self-face recognition in the social context. At least three brain networks for self-face-specific activation may be dissociated by different patterns of response-modulation in the social context, suggesting multiple dynamic self-other representations in the human brain.

  7. Functional architecture of visual emotion recognition ability: A latent variable approach.

    Science.gov (United States)

    Lewis, Gary J; Lefevre, Carmen E; Young, Andrew W

    2016-05-01

    Emotion recognition has been a focus of considerable attention for several decades. However, despite this interest, the underlying structure of individual differences in emotion recognition ability has been largely overlooked and thus is poorly understood. For example, limited knowledge exists concerning whether recognition ability for one emotion (e.g., disgust) generalizes to other emotions (e.g., anger, fear). Furthermore, it is unclear whether emotion recognition ability generalizes across modalities, such that those who are good at recognizing emotions from the face, for example, are also good at identifying emotions from nonfacial cues (such as cues conveyed via the body). The primary goal of the current set of studies was to address these questions through establishing the structure of individual differences in visual emotion recognition ability. In three independent samples (Study 1: n = 640; Study 2: n = 389; Study 3: n = 303), we observed that the ability to recognize visually presented emotions is based on different sources of variation: a supramodal emotion-general factor, supramodal emotion-specific factors, and face- and within-modality emotion-specific factors. In addition, we found evidence that general intelligence and alexithymia were associated with supramodal emotion recognition ability. Autism-like traits, empathic concern, and alexithymia were independently associated with face-specific emotion recognition ability. These results (a) provide a platform for further individual differences research on emotion recognition ability, (b) indicate that differentiating levels within the architecture of emotion recognition ability is of high importance, and (c) show that the capacity to understand expressions of emotion in others is linked to broader affective and cognitive processes.

  8. Orienting to face expression during encoding improves men's recognition of own gender faces.

    Science.gov (United States)

    Fulton, Erika K; Bulluck, Megan; Hertzog, Christopher

    2015-10-01

    It is unclear why women have superior episodic memory of faces, but the benefit may be partially the result of women engaging in superior processing of facial expressions. Therefore, we hypothesized that orienting instructions to attend to facial expression at encoding would significantly improve men's memory of faces and possibly reduce gender differences. We directed 203 college students (122 women) to study 120 faces under instructions to orient to either the person's gender or their emotional expression. They later took a recognition test of these faces by either judging whether they had previously studied the same person or that person with the exact same expression; the latter test evaluated recollection of specific facial details. Orienting to facial expressions during encoding significantly improved men's recognition of own-gender faces and eliminated the advantage that women had for male faces under gender orienting instructions. Although gender differences in spontaneous strategy use when orienting to faces cannot fully account for gender differences in face recognition, orienting men to facial expression during encoding is one way to significantly improve their episodic memory for male faces.

  9. AN ILLUMINATION INVARIANT TEXTURE BASED FACE RECOGNITION

    Directory of Open Access Journals (Sweden)

    K. Meena

    2013-11-01

    Full Text Available Automatic face recognition remains an interesting but challenging open problem in computer vision. Poor illumination is considered one of the major issues, since illumination changes cause large variations in facial features. To resolve this, illumination normalization preprocessing techniques are employed in this paper to enhance the face recognition rate. Methods such as Histogram Equalization (HE), Gamma Intensity Correction (GIC), the normalization chain and Modified Homomorphic Filtering (MHF) are used for preprocessing. Owing to their great success, texture features are commonly used for face recognition, but these features are severely affected by lighting changes. Hence the texture-based models Local Binary Pattern (LBP), Local Derivative Pattern (LDP), Local Texture Pattern (LTP) and Local Tetra Patterns (LTrPs) are tested under different lighting conditions. In this paper, an illumination-invariant face recognition technique is developed based on the fusion of illumination preprocessing with local texture descriptors. The performance has been evaluated using the YALE B and CMU-PIE databases containing more than 1500 images. The results demonstrate that MHF-based normalization gives a significant improvement in recognition rate for face images under large illumination variations.
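
    A short sketch of the texture-descriptor stage, assuming scikit-image is available; the synthetic face crop, the histogram-equalization preprocessing, and the LBP parameters are illustrative assumptions.

        import numpy as np
        from skimage import exposure
        from skimage.feature import local_binary_pattern

        rng = np.random.default_rng(0)
        face = rng.random((64, 64))               # stand-in for a grayscale face crop

        face_eq = exposure.equalize_hist(face)    # HE-style illumination normalization
        P, R = 8, 1                               # 8 neighbors at radius 1
        lbp = local_binary_pattern(face_eq, P, R, method="uniform")

        # Uniform LBP with P=8 yields P+2 = 10 pattern labels; the histogram over
        # these labels is the texture feature vector.
        hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
        print("LBP feature vector:", np.round(hist, 3))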

  10. Image Pixel Fusion for Human Face Recognition

    CERN Document Server

    Bhowmik, Mrinal Kanti; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    In this paper we present a technique for the fusion of optical and thermal face images based on an image pixel fusion approach. Among the several factors that affect face recognition performance in the case of visual images, illumination changes are a significant factor that needs to be addressed. Thermal images are better at handling illumination conditions but not very consistent in capturing the texture details of faces. Other factors like sunglasses, beard, moustache etc. also play an active role in adding complications to the recognition process. Fusion of thermal and visual images is a solution to overcome the drawbacks present in the individual thermal and visual face images. Here fused images are projected into an eigenspace and the projected images are classified using a radial basis function (RBF) neural network and also by a multi-layer perceptron (MLP). In the experiments, the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) database benchmark for thermal and visual face images has been used. Compar...
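
    A minimal sketch of pixel-level fusion of registered visible and thermal face images by weighted averaging, one simple form of pixel fusion; the equal weighting and the synthetic images are assumptions, and the eigenspace projection with RBF/MLP classification described above is not shown.

        import numpy as np

        rng = np.random.default_rng(0)
        visible = rng.random((64, 64))   # stand-in: normalized visible-light face
        thermal = rng.random((64, 64))   # stand-in: registered thermal face

        alpha = 0.5                      # fusion weight; tune per application
        fused = alpha * visible + (1 - alpha) * thermal

        # The fused image would then be flattened and projected into an eigenspace
        # before classification.
        print(fused.shape, float(fused.min()), float(fused.max()))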

  11. Emotional signals from faces, bodies and scenes influence observers' face expressions, fixations and pupil-size.

    Science.gov (United States)

    Kret, Mariska E; Roelofs, Karin; Stekelenburg, Jeroen J; de Gelder, Beatrice

    2013-01-01

    We receive emotional signals from different sources, including the face, the whole body, and the natural scene. Previous research has shown the importance of context provided by the whole body and the scene on the recognition of facial expressions. This study measured physiological responses to face-body-scene combinations. Participants freely viewed emotionally congruent and incongruent face-body and body-scene pairs whilst eye fixations, pupil-size, and electromyography (EMG) responses were recorded. Participants attended more to angry and fearful vs. happy or neutral cues, independently of the source and relatively independently of whether the face-body and body-scene combinations were emotionally congruent or not. Moreover, angry faces combined with angry bodies, and angry bodies viewed in aggressive social scenes, elicited the greatest pupil dilation. Participants' face expressions matched the valence of the stimuli, but when face-body compounds were shown, the observed facial expression influenced EMG responses more than the posture did. Together, our results show that the perception of emotional signals from faces, bodies and scenes depends on the natural context, but when threatening cues are presented, these threats attract attention, induce arousal, and evoke congruent facial reactions.

  12. 3D face modeling, analysis and recognition

    CERN Document Server

    Daoudi, Mohamed; Veltkamp, Remco

    2013-01-01

    3D Face Modeling, Analysis and Recognition presents methodologies for analyzing shapes of facial surfaces, develops computational tools for analyzing 3D face data, and illustrates them using state-of-the-art applications. The methodologies chosen are based on efficient representations, metrics, comparisons, and classifications of features that are especially relevant in the context of 3D measurements of human faces. These frameworks have a long-term utility in face analysis, taking into account the anticipated improvements in data collection, data storage, processing speeds, and applications

  13. FaceID: A face detection and recognition system

    Energy Technology Data Exchange (ETDEWEB)

    Shah, M.B.; Rao, N.S.V.; Olman, V.; Uberbacher, E.C.; Mann, R.C.

    1996-12-31

    A face detection system that automatically locates faces in gray-level images is described, along with a system that matches a given face image with faces in a database. Face detection in an image is performed by template matching using templates derived from a selected set of normalized faces. Instead of using the original gray-level images, vertical gradient images were calculated and used to make the system more robust against variations in lighting conditions and skin color. Faces of different sizes are detected by processing the image at several scales. Further, a coarse-to-fine strategy is used to speed up the processing, and a combination of whole-face and face-component templates is used to ensure low false detection rates. The input to the face recognition system is a normalized vertical gradient image of a face, which is compared against a database using a set of pretrained feedforward neural networks with a winner-take-all fuser. The training is performed using an adaptation of the backpropagation algorithm. This system has been developed and tested using images from the FERET database and a set of images obtained from Rowley et al. and from Sung and Poggio.
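
    A sketch of the detection idea, assuming scipy and scikit-image: match a template against the vertical gradient of an image, which reduces sensitivity to lighting and skin color. The synthetic scene and the single-scale normalized cross-correlation stand in for the paper's multi-scale, coarse-to-fine template search.

        import numpy as np
        from scipy.ndimage import sobel
        from skimage.feature import match_template

        rng = np.random.default_rng(0)
        scene = rng.random((120, 120))
        template_src = scene[40:72, 50:82]        # pretend this region is a face

        grad_scene = sobel(scene, axis=0)         # vertical gradient image
        grad_tmpl = sobel(template_src, axis=0)

        corr = match_template(grad_scene, grad_tmpl)  # normalized cross-correlation
        y, x = np.unravel_index(np.argmax(corr), corr.shape)
        print("best match at:", (y, x))           # expect approximately (40, 50)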

  14. Parallel Architecture for Face Recognition using MPI

    Directory of Open Access Journals (Sweden)

    Dalia Shouman Ibrahim

    2017-01-01

    Full Text Available Face recognition applications are widely used in different fields like security and computer vision. The recognition process should be done in real time to allow fast decisions. Principal Component Analysis (PCA) is considered a feature extraction technique and is widely used in facial recognition applications, projecting images into a new face space. PCA can reduce the dimensionality of the image. However, PCA consumes a lot of processing time due to its computationally intensive nature. Hence, this paper proposes two different parallel architectures to accelerate the training and testing phases of the PCA algorithm by exploiting the benefits of a distributed-memory architecture. The experimental results show that the proposed architectures achieve linear speed-up and system scalability on different data sizes from the Facial Recognition Technology (FERET) database.
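
    A sketch of the data-parallel idea using mpi4py, assuming the PCA projection has already been applied: the root scatters chunks of projected gallery vectors, each rank computes distances to a broadcast probe, and the root gathers the partial results. The script name in the comment, the array sizes, and the chunking are illustrative.

        # Run with, e.g.: mpiexec -n 4 python parallel_match.py
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        if rank == 0:
            gallery = np.random.default_rng(0).normal(size=(400, 50))
            chunks = np.array_split(gallery, size)   # one chunk per rank
            probe = gallery[7] + 0.1                 # noisy copy of entry 7
        else:
            chunks, probe = None, None

        chunk = comm.scatter(chunks, root=0)         # distribute gallery chunks
        probe = comm.bcast(probe, root=0)            # everyone gets the probe

        local = np.linalg.norm(chunk - probe, axis=1)   # local distance computation
        dists = comm.gather(local, root=0)

        if rank == 0:
            print("best match index:", int(np.argmin(np.concatenate(dists))))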

  15. Emotion recognition from speech: tools and challenges

    Science.gov (United States)

    Al-Talabani, Abdulbasit; Sellahewa, Harin; Jassim, Sabah A.

    2015-05-01

    Human emotion recognition from speech is studied frequently for its importance in many applications, e.g. human-computer interaction. There is wide diversity and little agreement about the basic emotions or emotion-related states on the one hand, and about where the emotion-related information lies in the speech signal on the other. These diversities motivate our investigation into extracting meta-features using the PCA approach, or using a non-adaptive random projection (RP), which significantly reduce the large-dimensional speech feature vectors that may contain a wide range of emotion-related information. Subsets of meta-features are fused to increase the performance of the recognition model, which adopts a score-based LDC classifier. We demonstrate that our scheme outperforms state-of-the-art results when tested on non-prompted databases or on acted databases (i.e. where subjects act specific emotions while uttering a sentence). However, the huge gap between accuracy rates achieved on the different types of speech datasets raises questions about the way emotions modulate speech. In particular, we argue that emotion recognition from speech should not be dealt with as a classification problem. We demonstrate the presence of a spectrum of different emotions in the same speech portion, especially in the non-prompted datasets, which tend to be more "natural" than the acted datasets, where subjects attempt to suppress all but one emotion.
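
    A sketch contrasting the two dimensionality-reduction routes mentioned in the abstract, assuming scikit-learn; the feature matrix and the target dimensionality are synthetic stand-ins.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.random_projection import GaussianRandomProjection

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 1500))   # 200 utterances x 1500 raw speech features

        rp = GaussianRandomProjection(n_components=60, random_state=0)
        X_rp = rp.fit_transform(X)         # non-adaptive, data-independent projection

        pca = PCA(n_components=60)
        X_pca = pca.fit_transform(X)       # data-adaptive meta-features

        print(X_rp.shape, X_pca.shape)     # both reduced to (200, 60)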

  16. Mapping correspondence between facial mimicry and emotion recognition in healthy subjects.

    Science.gov (United States)

    Ponari, Marta; Conson, Massimiliano; D'Amico, Nunzia Pina; Grossi, Dario; Trojano, Luigi

    2012-12-01

    We aimed at verifying the hypothesis that facial mimicry is causally and selectively involved in emotion recognition. For this purpose, in Experiment 1, we explored the effect of tonic contraction of muscles in upper or lower half of participants' face on their ability to recognize emotional facial expressions. We found that the "lower" manipulation specifically impaired recognition of happiness and disgust, the "upper" manipulation impaired recognition of anger, while both manipulations affected recognition of fear; recognition of surprise and sadness were not affected by either blocking manipulations. In Experiment 2, we verified whether emotion recognition is hampered by stimuli in which an upper or lower half-face showing an emotional expression is combined with a neutral half-face. We found that the neutral lower half-face interfered with recognition of happiness and disgust, whereas the neutral upper half impaired recognition of anger; recognition of fear and sadness was impaired by both manipulations, whereas recognition of surprise was not affected by either manipulation. Taken together, the present findings support simulation models of emotion recognition and provide insight into the role of mimicry in comprehension of others' emotional facial expressions.

  17. Biased recognition of positive faces in aging and amnestic mild cognitive impairment.

    Science.gov (United States)

    Werheid, Katja; Gruno, Maria; Kathmann, Norbert; Fischer, Håkan; Almkvist, Ove; Winblad, Bengt

    2010-03-01

    We investigated age differences in biased recognition of happy, neutral, or angry faces in 4 experiments. Experiment 1 revealed increased true and false recognition for happy faces in older adults, which persisted even when changing each face's emotional expression from study to test in Experiment 2. In Experiment 3, we examined the influence of reduced memory capacity on the positivity-induced recognition bias, which showed the absence of emotion-induced memory enhancement but a preserved recognition bias for positive faces in patients with amnestic mild cognitive impairment compared with older adults with normal memory performance. In Experiment 4, we used semantic differentials to measure the connotations of happy and angry faces. Younger and older participants regarded happy faces as more familiar than angry faces, but the older group showed a larger recognition bias for happy faces. This finding indicates that older adults use a gist-based memory strategy based on a semantic association between positive emotion and familiarity. Moreover, older adults' judgments of valence were more positive for both angry and happy faces, supporting the hypothesis of socioemotional selectivity. We propose that the positivity-induced recognition bias might be based on fluency, which in turn is based on both positivity-oriented emotional goals and on preexisting semantic associations.

  18. Enhancing face recognition by image warping

    OpenAIRE

    García Bueno, Jorge

    2009-01-01

    This project has been developed as an improvement that could be added to current computer vision algorithms. It is based on the original idea proposed and published by Rob Jenkins and Mike Burton about the power of face averages in artificial recognition. The present project aims to create a new automated procedure for face recognition working with average images. Up to now, this algorithm has been used manually. With this study, the averaging and warping process will be done b...

  19. Instrumental music influences recognition of emotional body language.

    Science.gov (United States)

    Van den Stock, Jan; Peretz, Isabelle; Grèzes, Julie; de Gelder, Beatrice

    2009-05-01

    In everyday life, emotional events are perceived by multiple sensory systems. Research has shown that recognition of emotions in one modality is biased towards the emotion expressed in a simultaneously presented but task irrelevant modality. In the present study, we combine visual and auditory stimuli that convey similar affective meaning but have a low probability of co-occurrence in everyday life. Dynamic face-blurred whole body expressions of a person grasping an object while expressing happiness or sadness are presented in combination with fragments of happy or sad instrumental classical music. Participants were instructed to categorize the emotion expressed by the visual stimulus. The results show that recognition of body language is influenced by the auditory stimuli. These findings indicate that crossmodal influences as previously observed for audiovisual speech can also be obtained from the ignored auditory to the attended visual modality in audiovisual stimuli that consist of whole bodies and music.

  20. Bimodal Emotion Recognition from Speech and Text

    Directory of Open Access Journals (Sweden)

    Weilin Ye

    2014-01-01

    Full Text Available This paper presents an approach to emotion recognition from speech signals and textual content. In the analysis of speech signals, thirty-seven acoustic features are extracted from the speech input. Two different classifiers, Support Vector Machines (SVMs) and a BP neural network, are adopted to classify the emotional states. In text analysis, we use a two-step classification method to recognize the emotional states. The final emotional state is determined based on the emotion outputs from the acoustic and textual analyses. In this paper we have two parallel classifiers for acoustic information and two serial classifiers for textual information, and a final decision is made by combining these classifiers in decision-level fusion. Experimental results show that the emotion recognition accuracy of the integrated system is better than that of either of the two individual approaches.

  1. Emotion Recognition following Pediatric Traumatic Brain Injury: Longitudinal Analysis of Emotional Prosody and Facial Emotion Recognition

    Science.gov (United States)

    Schmidt, Adam T.; Hanten, Gerri R.; Li, Xiaoqi; Orsten, Kimberley D.; Levin, Harvey S.

    2010-01-01

    Children with closed head injuries often experience significant and persistent disruptions in their social and behavioral functioning. Studies with adults sustaining a traumatic brain injury (TBI) indicate deficits in emotion recognition and suggest that these difficulties may underlie some of the social deficits. The goal of the current study was…

  2. Music Education Intervention Improves Vocal Emotion Recognition

    Science.gov (United States)

    Mualem, Orit; Lavidor, Michal

    2015-01-01

    The current study is an interdisciplinary examination of the interplay among music, language, and emotions. It consisted of two experiments designed to investigate the relationship between musical abilities and vocal emotional recognition. In experiment 1 (N = 24), we compared the influence of two short-term intervention programs--music and…

  4. Influence of motion on face recognition.

    Science.gov (United States)

    Bonfiglio, Natale S; Manfredi, Valentina; Pessa, Eliano

    2012-02-01

    The influence of motion information and temporal associations on the recognition of non-familiar faces was investigated using two groups, each of which performed a face recognition task. One group was presented with regular temporal sequences of face views designed to produce the impression of the face rotating in depth, the other group with random sequences of the same views. In one condition, participants viewed the sequences of views in rapid succession with a negligible interstimulus interval (ISI); this condition was characterized by three different presentation times. In another condition, participants were presented with a sequence with a 1-sec. ISI between the views. It was hypothesized that regular sequences of views with a negligible ISI and a shorter presentation time would give rise to better recognition, related to a stronger impression of face rotation. Analysis of data from 45 participants showed that a shorter presentation time was associated with significantly better accuracy on the recognition task; however, differences between performances associated with regular and random sequences were not significant.

  5. Probabilistic recognition of human faces from video

    DEFF Research Database (Denmark)

    Zhou, Saohua; Krüger, Volker; Chellappa, Rama

    2003-01-01

    Recognition of human faces using a gallery of still or video images and a probe set of videos is systematically investigated using a probabilistic framework. In still-to-video recognition, where the gallery consists of still images, a time series state space model is proposed to fuse temporal...... of the identity variable produces the recognition result. The model formulation is very general and it allows a variety of image representations and transformations. Experimental results using videos collected by NIST/USF and CMU illustrate the effectiveness of this approach for both still-to-video and video...... demonstrate that, due to the propagation of the identity variable over time, a degeneracy in posterior probability of the identity variable is achieved to give improved recognition. The gallery is generalized to videos in order to realize video-to-video recognition. An exemplar-based learning strategy...

  6. How is this child feeling? Preschool-aged children’s ability to recognize emotion in faces and body poses

    OpenAIRE

    Parker, Alison E.; Mathis, Erin T.; Kupersmidt, Janis B.

    2013-01-01

    The study examined children’s recognition of emotion from faces and body poses, as well as gender differences in these recognition abilities. Preschool-aged children (N = 55) and their parents and teachers participated in the study. Preschool-aged children completed a web-based measure of emotion recognition skills, which included five tasks (three with faces and two with bodies). Parents and teachers reported on children’s aggressive behaviors and social skills. Children’s emotion accuracy o...

  7. Face Behavior Recognition Through Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Haval A. Ahmed

    2016-01-01

    Full Text Available Communication between computers and humans has grown to be a major field of research. Facial behavior recognition through computer algorithms is a motivating and difficult field of research for establishing emotional interactions between humans and computers. Although researchers have suggested numerous methods of emotion recognition in the literature of this field, these research works have as yet mainly relied on a single facial database for assessing their systems, which may limit generalization and shrink the range of possible comparisons. A technique is proposed for recognizing emotional expressions conveyed by the facial aspects of still images, using Support Vector Machines (SVM) as the classifier of emotions. Substantive problems are considered, such as the diversity of facial databases, the samples included in each database, the number of facial expressions covered, accurate methods of extracting facial features, and the variety of structural models. After many experiments and comparison of the results of different models, it is determined that this approach produces high recognition rates.
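
    A minimal sketch of the classification stage, assuming scikit-learn and precomputed facial feature vectors with one label per expression; the synthetic features, the RBF kernel, and the cross-validation setup are illustrative assumptions (accuracy on random data will of course be near chance).

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 40))                 # 300 faces x 40 features
        y = rng.integers(0, 6, size=300)               # six expression classes

        clf = SVC(kernel="rbf", C=1.0, gamma="scale")  # a common SVM configuration
        scores = cross_val_score(clf, X, y, cv=5)      # estimate generalization
        print("cross-validated accuracy:", round(scores.mean(), 3))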

  8. Novel acoustic features for speech emotion recognition

    Institute of Scientific and Technical Information of China (English)

    ROH; Yong-Wan; KIM; Dong-Ju; LEE; Woo-Seok; HONG; Kwang-Seok

    2009-01-01

    This paper focuses on acoustic features that effectively improve the recognition of emotion in human speech. The novel features in this paper are based on spectral entropy parameters such as fast Fourier transform (FFT) spectral entropy, delta FFT spectral entropy, Mel-frequency filter bank (MFB) spectral entropy, and delta MFB spectral entropy. Spectral-based entropy features are simple; they reflect the frequency characteristics of speech and how those characteristics change. We implement an emotion rejection module using the probability distributions of recognized scores and rejected scores, which reduces the false recognition rate and improves overall performance. Recognized scores and rejected scores refer to the probabilities of recognized and rejected emotion recognition results, respectively. These scores are first obtained from a pattern recognition procedure, which uses the Gaussian mixture model (GMM). We classify four emotional states: anger, sadness, happiness and neutrality. The proposed method is evaluated using 45 sentences per emotion for 30 subjects, 15 males and 15 females. Experimental results show that the proposed method is superior to existing emotion recognition methods based on GMM using energy, Zero Crossing Rate (ZCR), linear prediction coefficients (LPC), and pitch parameters. One of the proposed features, the combined MFB and delta MFB spectral entropy, improves performance by approximately 10% compared to the existing feature parameters for speech emotion recognition. We also demonstrate a 4% performance improvement from applying emotion rejection to results with low confidence scores.
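
    A sketch of one of the described features: the FFT spectral entropy of a single speech frame, i.e. the Shannon entropy of its normalized power spectrum (the delta variant would be the frame-to-frame difference). The synthetic tone and noise frames merely illustrate that concentrated spectra give low entropy and flat spectra give high entropy.

        import numpy as np

        def fft_spectral_entropy(frame):
            # Shannon entropy of the normalized FFT power spectrum.
            power = np.abs(np.fft.rfft(frame)) ** 2
            p = power / power.sum()
            p = p[p > 0]                               # avoid log(0)
            return -(p * np.log2(p)).sum()

        rng = np.random.default_rng(0)
        t = np.linspace(0, 0.025, 400)                 # one 25 ms frame at 16 kHz
        tone = np.sin(2 * np.pi * 200 * t)             # concentrated spectrum
        noise = rng.normal(size=t.size)                # roughly flat spectrum

        print("tone entropy :", round(fft_spectral_entropy(tone), 2))   # low
        print("noise entropy:", round(fft_spectral_entropy(noise), 2))  # high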

  9. Quest Hierarchy for Hyperspectral Face Recognition

    Science.gov (United States)

    2011-03-01

  10. Emotion recognition and oxytocin in patients with schizophrenia

    Science.gov (United States)

    Averbeck, B. B.; Bobin, T.; Evans, S.; Shergill, S. S.

    2012-01-01

    Background Studies have suggested that patients with schizophrenia are impaired at recognizing emotions. Recently, it has been shown that the neuropeptide oxytocin can have beneficial effects on social behaviors. Method To examine emotion recognition deficits in patients and see whether oxytocin could improve these deficits, we carried out two experiments. In the first experiment we recruited 30 patients with schizophrenia and 29 age- and IQ-matched control subjects, and gave them an emotion recognition task. Following this, we carried out a second experiment in which we recruited 21 patients with schizophrenia for a double-blind, placebo-controlled cross-over study of the effects of oxytocin on the same emotion recognition task. Results In the first experiment we found that patients with schizophrenia had a deficit relative to controls in recognizing emotions. In the second experiment we found that administration of oxytocin improved the ability of patients to recognize emotions. The improvement was consistent and occurred for most emotions, and was present whether patients were identifying morphed or non-morphed faces. Conclusions These data add to a growing literature showing beneficial effects of oxytocin on social–behavioral tasks, as well as clinical symptoms. PMID:21835090

  11. What the Face and Body Reveal: In-Group Emotion Effects and Stereotyping of Emotion in African American and European American Children

    Science.gov (United States)

    Tuminello, Elizabeth R.; Davidson, Denise

    2011-01-01

    This study examined whether 3- to 7-year-old African American and European American children's assessment of emotion in face-only, face + body, and body-only photographic stimuli was affected by in-group emotion recognition effects and racial or gender stereotyping of emotion. Evidence for racial in-group effects was found, with European American…

  12. Face Detection and Face Recognition in Android Mobile Applications

    Directory of Open Access Journals (Sweden)

    Octavian DOSPINESCU

    2016-01-01

    Full Text Available The quality of the smartphone’s camera enables us to capture high quality pictures at a high resolution, so we can perform different types of recognition on these images. Face detection is one of these types of recognition that is very common in our society. We use it every day on Facebook to tag friends in our pictures. It is also used in video games alongside the Kinect concept, or in security to allow access to private places only to authorized persons. These are just some examples of using facial recognition, because in modern society, detection and facial recognition tend to surround us everywhere. The aim of this article is to create an application for smartphones that can recognize human faces. The main goal of this application is to grant access to certain areas or rooms only to certain authorized persons. For example, we can speak here of hospitals or educational institutions where there are rooms that only certain employees can enter. Of course, this type of application can cover a wide range of uses, such as helping people suffering from Alzheimer's to recognize the people they love, helping persons who cannot remember the names of their relatives, or automatically capturing the faces of our own children when they smile.

  13. Face Recognition Using Local and Global Features

    Directory of Open Access Journals (Sweden)

    Jian Huang

    2004-04-01

    Full Text Available The combining classifier approach has proved to be a proper way of improving recognition performance in the last two decades. This paper proposes to combine local and global facial features for face recognition. In particular, this paper addresses three issues in combining classifiers, namely, the normalization of the classifier output, the selection of classifier(s) for recognition, and the weighting of each classifier. For the first issue, as the scales of each classifier's output are different, this paper proposes two methods, namely, a linear-exponential normalization method and a distribution-weighted Gaussian normalization method, for normalizing the outputs. Second, although combining different classifiers can improve the performance, we found that some classifiers are redundant and may even degrade the recognition performance. Along this direction, we develop a simple but effective algorithm for classifier selection. Finally, the existing methods assume that each classifier is equally weighted. This paper suggests a weighted combination of classifiers based on Kittler's combining classifier framework. Four popular face recognition methods, namely, eigenface, spectroface, independent component analysis (ICA), and Gabor jet, are selected for combination, and three popular face databases, namely, the Yale database, the Olivetti Research Laboratory (ORL) database, and the FERET database, are selected for evaluation. The experimental results show that the proposed method achieves a 5–7% accuracy improvement.
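
    A sketch of the combining-classifier idea, with z-score normalization standing in for the paper's linear-exponential and distribution-weighted Gaussian normalizations; the raw scores and the weights are illustrative assumptions.

        import numpy as np

        def zscore(s):
            # Map raw classifier outputs onto a comparable scale.
            return (s - s.mean()) / s.std()

        rng = np.random.default_rng(0)
        n_classes = 10
        scores_local = rng.normal(0, 5, n_classes)      # e.g. Gabor-jet similarities
        scores_global = rng.normal(100, 30, n_classes)  # e.g. eigenface similarities

        w_local, w_global = 0.6, 0.4                    # weights, e.g. from validation
        fused = w_local * zscore(scores_local) + w_global * zscore(scores_global)
        print("predicted class:", int(np.argmax(fused)))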

  14. Incremental Supervised Subspace Learning for Face Recognition

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Subspace learning algorithms have been well studied in face recognition. Among them, linear discriminant analysis (LDA) is one of the most widely used supervised subspace learning methods. Due to the difficulty of designing an incremental solution for the eigendecomposition of the product of matrices, there is little work on computing LDA incrementally. To avoid this limitation, an incremental supervised subspace learning (ISSL) algorithm was proposed, which incrementally learns an adaptive subspace by optimizing the maximum margin criterion (MMC). With dynamically added face images, ISSL can effectively constrain the computational cost. The feasibility of the new algorithm has been successfully tested on different face data sets.

  15. Wavelet-based multispectral face recognition

    Institute of Scientific and Technical Information of China (English)

    LIU Dian-ting; ZHOU Xiao-dan; WANG Cheng-wen

    2008-01-01

    This paper proposes a novel wavelet-based face recognition method using thermal infrared (IR) and visible-light face images. The method applies the combination of Gabor and the Fisherfaces method to the reconstructed IR and visible images derived from wavelet frequency subbands. Our objective is to search for the subbands that are insensitive to variation in expression and in illumination. The classification performance is improved by combining the multispectral information coming from the subbands that individually attain a low equal error rate. Experimental results on the Notre Dame face database show that the proposed wavelet-based algorithm outperforms a previous multispectral image fusion method as well as the monospectral method.
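
    A sketch of the subband-decomposition step, assuming the PyWavelets package; the synthetic image, the 'db2' wavelet, and the decomposition level are illustrative choices, and the Gabor plus Fisherfaces stage applied to reconstructed subbands is not shown.

        import numpy as np
        import pywt

        rng = np.random.default_rng(0)
        face = rng.random((64, 64))              # stand-in for an IR or visible face

        coeffs = pywt.wavedec2(face, wavelet="db2", level=2)
        approx = coeffs[0]                       # low-frequency approximation subband
        (h2, v2, d2), (h1, v1, d1) = coeffs[1], coeffs[2]

        # Each subband could be reconstructed and evaluated separately, keeping only
        # those whose individual equal error rate is low.
        print("approximation:", approx.shape, "level-2 detail:", h2.shape)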

  16. Facing mixed emotions: Analytic and holistic perception of facial emotion expressions engages separate brain networks.

    Science.gov (United States)

    Meaux, Emilie; Vuilleumier, Patrik

    2016-11-01

    The ability to decode facial emotions is of primary importance for human social interactions; yet, it is still debated how we analyze faces to determine their expression. Here we compared the processing of emotional face expressions through holistic integration and/or local analysis of visual features, and determined which brain systems mediate these distinct processes. Behavioral, physiological, and brain responses to happy and angry faces were assessed by presenting congruent global configurations of expressions (e.g., happy top+happy bottom), incongruent composite configurations (e.g., angry top+happy bottom), and isolated features (e.g. happy top only). Top and bottom parts were always from the same individual. Twenty-six healthy volunteers were scanned using fMRI while they classified the expression in either the top or the bottom face part but ignored information in the other non-target part. Results indicate that the recognition of happy and anger expressions is neither strictly holistic nor analytic. Both routes were involved, but with a different role for analytic and holistic information depending on the emotion type, and different weights of local features between happy and anger expressions. Dissociable neural pathways were engaged depending on emotional face configurations. In particular, regions within the face processing network differed in their sensitivity to holistic expression information, which predominantly activated fusiform, inferior occipital areas and amygdala when internal features were congruent (i.e. template matching), whereas more local analysis of independent features preferentially engaged STS and prefrontal areas (IFG/OFC) in the context of full face configurations, but early visual areas and pulvinar when seen in isolated parts. Collectively, these findings suggest that facial emotion recognition recruits separate, but interactive dorsal and ventral routes within the face processing networks, whose engagement may be shaped by

  17. Aging and emotional expressions: is there a positivity bias during dynamic emotion recognition?

    Directory of Open Access Journals (Sweden)

    Alberto eDi Domenico

    2015-08-01

    Full Text Available In this study, we investigated whether age-related differences in emotion regulation priorities influence online dynamic emotional facial discrimination. A group of 40 younger and a group of 40 older adults were invited to recognize a positive or negative expression as soon as the expression slowly emerged and subsequently rate it in terms of intensity. Our findings show that older adults recognized happy expressions faster than angry ones, while the direction of emotional expression does not seem to affect younger adults’ performance. Furthermore, older adults rated both negative and positive emotional faces as more intense compared to younger controls. This study detects age-related differences with a dynamic online paradigm and suggests that different regulation strategies may shape emotional face recognition.

  18. Sleep Deprivation Impairs the Accurate Recognition of Human Emotions

    Science.gov (United States)

    van der Helm, Els; Gujar, Ninad; Walker, Matthew P.

    2010-01-01

    Study Objectives: Investigate the impact of sleep deprivation on the ability to recognize the intensity of human facial emotions. Design: Randomized total sleep-deprivation or sleep-rested conditions, involving between-group and within-group repeated measures analysis. Setting: Experimental laboratory study. Participants: Thirty-seven healthy participants (21 females), aged 18–25 y, were randomly assigned to the sleep control (SC: n = 17) or total sleep deprivation group (TSD: n = 20). Interventions: Participants performed an emotional face recognition task, in which they evaluated 3 different affective face categories: Sad, Happy, and Angry, each ranging in a gradient from neutral to increasingly emotional. In the TSD group, the task was performed once under conditions of sleep deprivation, and twice under sleep-rested conditions following different durations of sleep recovery. In the SC group, the task was performed twice under sleep-rested conditions, controlling for repeatability. Measurements and Results: In the TSD group, when sleep-deprived, there was a marked and significant blunting in the recognition of Angry and Happy affective expressions in the moderate (but not extreme) emotional intensity range; differences that were most reliable and significant in female participants. No change in the recognition of Sad expressions was observed. These recognition deficits were, however, ameliorated following one night of recovery sleep. No changes in task performance were observed in the SC group. Conclusions: Sleep deprivation selectively impairs the accurate judgment of human facial emotions, especially threat relevant (Anger) and reward relevant (Happy) categories, an effect observed most significantly in females. Such findings suggest that sleep loss impairs discrete affective neural systems, disrupting the identification of salient affective social cues. Citation: van der Helm E; Gujar N; Walker MP. Sleep deprivation impairs the accurate recognition of human

  19. How is This Child Feeling? Preschool-Aged Children's Ability to Recognize Emotion in Faces and Body Poses

    Science.gov (United States)

    Parker, Alison E.; Mathis, Erin T.; Kupersmidt, Janis B.

    2013-01-01

    Research Findings: The study examined children's recognition of emotion from faces and body poses, as well as gender differences in these recognition abilities. Preschool-aged children ("N" = 55) and their parents and teachers participated in the study. Preschool-aged children completed a web-based measure of emotion recognition skills that…

  20. SPECTRAL METHODS IN POLISH EMOTIONAL SPEECH RECOGNITION

    Directory of Open Access Journals (Sweden)

    Paweł Powroźnik

    2016-12-01

    Full Text Available In this article the issue of emotion recognition based on Polish emotional speech signal analysis is presented. The Polish database of emotional speech, prepared and shared by the Medical Electronics Division of the Lodz University of Technology, has been used for the research. The speech signal has been processed by Artificial Neural Networks (ANN). The inputs to the ANN were obtained from the signal spectrogram. Experiments were conducted for three different spectrogram divisions. The ANN consists of four layers, but the number of neurons in each layer depends on the spectrogram division. The research focused on six emotional states: a neutral state, sadness, joy, anger, fear and boredom. The average effectiveness of emotion recognition was about 80%.
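
    A rough sketch of the described pipeline, assuming scipy and scikit-learn: spectrogram-derived features from synthetic speech fed to a small feed-forward network. The eight-band pooling and the layer sizes are illustrative assumptions, not the paper's actual spectrogram divisions or four-layer topology.

        import numpy as np
        from scipy.signal import spectrogram
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        fs = 16000
        signals = rng.normal(size=(60, fs))      # 60 synthetic one-second "utterances"
        labels = rng.integers(0, 6, size=60)     # six emotional states

        def features(sig):
            # Average the power of the 128 lowest spectrogram bins over 8 bands.
            f, t, S = spectrogram(sig, fs=fs, nperseg=512)
            return S[:128].reshape(8, -1).mean(axis=1)

        X = np.array([features(s) for s in signals])
        clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
        clf.fit(X, labels)
        print("training accuracy:", round(clf.score(X, labels), 2))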

  1. A connectionist computational method for face recognition

    Directory of Open Access Journals (Sweden)

    Pujol Francisco A.

    2016-06-01

    In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected by using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the result of an edge detector. After that, the position of the nodes, their relation with their neighbors and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is shown afterwards. Thus, the calculation of the winning neuron and the recognition process are performed by using a similarity function that takes into account both the geometric and texture information of the facial graph. The set of experiments carried out for our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.

  2. Face recognition with L1-norm subspaces

    Science.gov (United States)

    Maritato, Federica; Liu, Ying; Colonnese, Stefania; Pados, Dimitris A.

    2016-05-01

    We consider the problem of representing individual faces by maximum L1-norm projection subspaces calculated from available face-image ensembles. In contrast to conventional L2-norm subspaces, L1-norm subspaces are seen to offer significant robustness to image variations, disturbances, and rank selection. Face recognition then becomes the problem of associating a new unknown face image to the "closest," in some sense, L1 subspace in the database. In this work, we also introduce the concept of adaptively allocating the available number of principal components to different face image classes, subject to a given total number/budget of principal components. Experimental studies included in this paper illustrate and support the theoretical developments.
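
    The association step amounts to nearest-subspace classification by reconstruction residual. In the sketch below the per-class basis comes from an ordinary SVD (an L2 stand-in): the paper's contribution is to replace it with a maximum-L1-norm projection subspace, for which no standard one-line library routine is assumed here. The gallery dictionary is hypothetical.

        import numpy as np

        def class_subspace(face_matrix, k):
            """Mean and an orthonormal k-dimensional basis for one person's
            ensemble (columns of face_matrix are vectorized face images)."""
            mean = face_matrix.mean(axis=1, keepdims=True)
            u, _, _ = np.linalg.svd(face_matrix - mean, full_matrices=False)
            return mean.ravel(), u[:, :k]

        def residual(x, mean, basis):
            """Distance from x to the affine subspace mean + span(basis)."""
            d = x - mean
            return np.linalg.norm(d - basis @ (basis.T @ d))

        def classify(x, subspaces):
            """Assign the probe to the class whose subspace fits it best."""
            return min(subspaces, key=lambda c: residual(x, *subspaces[c]))

        # subspaces = {name: class_subspace(ens, k=5) for name, ens in gallery.items()}
        # identity = classify(probe_vector, subspaces)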

  3. FACE RECOGNITION USING TWO DIMENSIONAL LAPLACIAN EIGENMAP

    Institute of Scientific and Technical Information of China (English)

    Chen Jiangfeng; Yuan Baozong; Pei Bingnan

    2008-01-01

    Recently, some research efforts have shown that face images possibly reside on a nonlinear sub-manifold. Though the Laplacianfaces method considers the manifold structure of the face images, it has limitations in solving the face recognition problem. This paper proposes a new feature extraction method, Two-Dimensional Laplacian EigenMap (2DLEM), which especially considers the manifold structure of the face images and extracts the proper features from the face image matrix directly by using a linear transformation. As opposed to Laplacianfaces, 2DLEM extracts features directly from 2D images without a vectorization preprocessing. To test 2DLEM and evaluate its performance, a series of experiments are performed on the ORL database and the Yale database. Moreover, several experiments are performed to compare the performance of three 2D methods. The experiments show that 2DLEM achieves the best performance.
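
    For orientation, the classical vectorized Laplacian eigenmap that 2DLEM generalizes is available off the shelf; the sketch below embeds flattened face vectors, whereas 2DLEM itself works on the 2D image matrices directly. The data are placeholders.

        import numpy as np
        from sklearn.manifold import SpectralEmbedding

        faces = np.random.rand(100, 32 * 32)        # placeholder: 100 flattened faces
        # SpectralEmbedding implements Laplacian eigenmaps over a k-NN graph.
        embedder = SpectralEmbedding(n_components=10, n_neighbors=8)
        features = embedder.fit_transform(faces)    # manifold coordinates, 100 x 10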

  4. Speech Emotion Recognition Using Fuzzy Logic Classifier

    Directory of Open Access Journals (Sweden)

    Daniar aghsanavard

    2016-01-01

    Over the last two decades, emotions, speech recognition and signal processing have been among the most significant topics in the development of detection techniques, and each method has advantages and disadvantages. This paper proposes a fuzzy approach to speech emotion recognition based on classifying speech signals, aiming at better recognition along with higher speed. The system is a five-layer fuzzy logic system combining a progressive neural network with firefly-algorithm optimization: speech samples are first given to the fuzzy input stage, and the signals are then analyzed and given a preliminary classification within a fuzzy framework. In this model, a pattern is created for each class of signals, which reduces the dimension of the signal data and makes speech recognition easier. The experimental results show that the proposed method (optimized by the firefly algorithm) improves the recognition of utterances.

  5. Emotion Recognition in Animated Compared to Human Stimuli in Adolescents with Autism Spectrum Disorder

    Science.gov (United States)

    Brosnan, Mark; Johnson, Hilary; Grawmeyer, Beate; Chapman, Emma; Benton, Laura

    2015-01-01

    There is equivocal evidence as to whether there is a deficit in recognising emotional expressions in Autism spectrum disorder (ASD). This study compared emotion recognition in ASD in three types of emotion expression media (still image, dynamic image, auditory) across human stimuli (e.g. photo of a human face) and animated stimuli (e.g. cartoon…

  6. Emotional cues during simultaneous face and voice processing: electrophysiological insights.

    Directory of Open Access Journals (Sweden)

    Taosheng Liu

    Both facial expression and tone of voice represent key signals of emotional communication but their brain processing correlates remain unclear. Accordingly, we constructed a novel implicit emotion recognition task consisting of simultaneously presented human faces and voices with neutral, happy, and angry valence, within the context of recognizing monkey faces and voices task. To investigate the temporal unfolding of the processing of affective information from human face-voice pairings, we recorded event-related potentials (ERPs) to these audiovisual test stimuli in 18 normal healthy subjects; N100, P200, N250, P300 components were observed at electrodes in the frontal-central region, while P100, N170, P270 were observed at electrodes in the parietal-occipital region. Results indicated a significant audiovisual stimulus effect on the amplitudes and latencies of components in frontal-central (P200, P300, and N250) but not the parietal occipital region (P100, N170 and P270). Specifically, P200 and P300 amplitudes were more positive for emotional relative to neutral audiovisual stimuli, irrespective of valence, whereas N250 amplitude was more negative for neutral relative to emotional stimuli. No differentiation was observed between angry and happy conditions. The results suggest that the general effect of emotion on audiovisual processing can emerge as early as 200 msec (P200 peak latency) post stimulus onset, in spite of implicit affective processing task demands, and that such effect is mainly distributed in the frontal-central region.

  7. Emotional cues during simultaneous face and voice processing: electrophysiological insights.

    Science.gov (United States)

    Liu, Taosheng; Pinheiro, Ana; Zhao, Zhongxin; Nestor, Paul G; McCarley, Robert W; Niznikiewicz, Margaret A

    2012-01-01

    Both facial expression and tone of voice represent key signals of emotional communication but their brain processing correlates remain unclear. Accordingly, we constructed a novel implicit emotion recognition task consisting of simultaneously presented human faces and voices with neutral, happy, and angry valence, within the context of recognizing monkey faces and voices task. To investigate the temporal unfolding of the processing of affective information from human face-voice pairings, we recorded event-related potentials (ERPs) to these audiovisual test stimuli in 18 normal healthy subjects; N100, P200, N250, P300 components were observed at electrodes in the frontal-central region, while P100, N170, P270 were observed at electrodes in the parietal-occipital region. Results indicated a significant audiovisual stimulus effect on the amplitudes and latencies of components in frontal-central (P200, P300, and N250) but not the parietal occipital region (P100, N170 and P270). Specifically, P200 and P300 amplitudes were more positive for emotional relative to neutral audiovisual stimuli, irrespective of valence, whereas N250 amplitude was more negative for neutral relative to emotional stimuli. No differentiation was observed between angry and happy conditions. The results suggest that the general effect of emotion on audiovisual processing can emerge as early as 200 msec (P200 peak latency) post stimulus onset, in spite of implicit affective processing task demands, and that such effect is mainly distributed in the frontal-central region.

  8. Effects of exposure to facial expression variation in face learning and recognition.

    Science.gov (United States)

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  9. Facial expression recognition and emotional regulation in narcolepsy with cataplexy.

    Science.gov (United States)

    Bayard, Sophie; Croisier Langenier, Muriel; Dauvilliers, Yves

    2013-04-01

    Cataplexy is pathognomonic of narcolepsy with cataplexy, and is defined by a transient loss of muscle tone triggered by strong emotions. Recent research suggests abnormal amygdala function in narcolepsy with cataplexy. Emotion processing and emotional regulation strategies are complex functions involving cortical and limbic structures, like the amygdala. As the amygdala has been shown to play a role in facial emotion recognition, we tested the hypothesis that patients with narcolepsy with cataplexy would have impaired recognition of facial emotional expressions compared with patients affected with central hypersomnia without cataplexy and healthy controls. We also aimed to determine whether cataplexy modulates emotional regulation strategies. Emotional intensity, arousal and valence ratings on Ekman faces displaying happiness, surprise, fear, anger, disgust, sadness and neutral expressions of 21 drug-free patients with narcolepsy with cataplexy were compared with 23 drug-free sex-, age- and intellectual level-matched adult patients with hypersomnia without cataplexy and 21 healthy controls. All participants underwent polysomnography recording and multiple sleep latency tests, and completed depression, anxiety and emotional regulation questionnaires. The performance of patients with narcolepsy with cataplexy did not differ from that of patients with hypersomnia without cataplexy or healthy controls, either on intensity ratings of each emotion on its prototypical label or on mean ratings for valence and arousal. Moreover, patients with narcolepsy with cataplexy did not use different emotional regulation strategies. The level of depressive and anxious symptoms in narcolepsy with cataplexy did not differ from the other groups. Our results demonstrate that patients with narcolepsy with cataplexy accurately perceive and discriminate facial emotions, and regulate emotions normally. The absence of alteration of perceived affective valence remains of major clinical interest in narcolepsy with cataplexy.

  10. Sex-Related Differences in Emotion Recognition in Multi-concussed Athletes.

    Science.gov (United States)

    Léveillé, Edith; Guay, Samuel; Blais, Caroline; Scherzer, Peter; De Beaumont, Louis

    2017-01-01

    Concussion is defined as a complex pathophysiological process affecting the brain. Although the cumulative and long-term effects of multiple concussions are now well documented for cognitive and motor function, little is known about their effects on emotion recognition. Recent studies have suggested that concussion can result in emotional sequelae, particularly in females and multi-concussed athletes. The objective of this study was to investigate sex-related differences in emotion recognition in asymptomatic male and female multi-concussed athletes. We tested 28 control athletes (15 males) and 22 multi-concussed athletes (10 males) more than a year after their last concussion. Participants completed the Post-Concussion Symptom Scale, the Beck Depression Inventory-II, the Beck Anxiety Inventory, a neuropsychological test battery and a morphed emotion recognition task. Pictures of a male face expressing basic emotions (anger, disgust, fear, happiness, sadness, surprise) morphed with another emotion were randomly presented. After each face presentation, participants were asked to indicate the emotion expressed by the face. Results revealed significant sex-by-group interactions in accuracy and intensity threshold for negative emotions, together with significant main effects of emotion and group. Male concussed athletes were significantly impaired in recognizing negative emotions and needed more emotional intensity to correctly identify these emotions, compared to same-sex controls. In contrast, female concussed athletes performed similarly to same-sex controls. These findings suggest that sex significantly modulates concussion effects on emotional facial expression recognition. (JINS, 2017, 23, 65-77).
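
    The morphed-stimulus construction can be pictured as a pixel-wise blend between two expression images; studies like this one typically use dedicated landmark-based morphing software, so treat this linear blend as a schematic stand-in.

        import numpy as np

        def morph(expression_a, expression_b, level):
            """Blend two aligned face images (float arrays); level in [0, 1]
            moves the stimulus from expression_a toward expression_b."""
            return (1.0 - level) * expression_a + level * expression_b

        # Hypothetical graded continuum, e.g. anger morphed with disgust:
        # continuum = [morph(angry_img, disgust_img, t) for t in np.linspace(0, 1, 11)]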

  11. The development of emotion recognition from facial expressions and non-linguistic vocalizations during childhood.

    Science.gov (United States)

    Chronaki, Georgia; Hadwin, Julie A; Garner, Matthew; Maurage, Pierre; Sonuga-Barke, Edmund J S

    2015-06-01

    Sensitivity to facial and vocal emotion is fundamental to children's social competence. Previous research has focused on children's facial emotion recognition, and few studies have investigated non-linguistic vocal emotion processing in childhood. We compared facial and vocal emotion recognition and processing biases in 4- to 11-year-olds and adults. Eighty-eight 4- to 11-year-olds and 21 adults participated. Participants viewed/listened to faces and voices (angry, happy, and sad) at three intensity levels (50%, 75%, and 100%). Non-linguistic tones were used. For each modality, participants completed an emotion identification task. Accuracy and bias for each emotion and modality were compared across 4- to 5-, 6- to 9- and 10- to 11-year-olds and adults. The results showed that children's emotion recognition improved with age; preschoolers were less accurate than other groups. Facial emotion recognition reached adult levels by 11 years, whereas vocal emotion recognition continued to develop in late childhood. Response bias decreased with age. For both modalities, sadness recognition was delayed across development relative to anger and happiness. The results demonstrate that developmental trajectories of emotion processing differ as a function of emotion type and stimulus modality. In addition, vocal emotion processing showed a more protracted developmental trajectory, compared to facial emotion processing. The results have important implications for programmes aiming to improve children's socio-emotional competence.

  12. Multimodal approaches for emotion recognition: a survey

    Science.gov (United States)

    Sebe, Nicu; Cohen, Ira; Gevers, Theo; Huang, Thomas S.

    2005-01-01

    Recent technological advances have enabled human users to interact with computers in ways previously unimaginable. Beyond the confines of the keyboard and mouse, new modalities for human-computer interaction such as voice, gesture, and force-feedback are emerging. Despite important advances, one necessary ingredient for natural interaction is still missing: emotions. Emotions play an important role in human-to-human communication and interaction, allowing people to express themselves beyond the verbal domain. The ability to understand human emotions is desirable for the computer in several applications. This paper explores new ways of human-computer interaction that enable the computer to be more aware of the user's emotional and attentional expressions. We present the basic research in the field and the recent advances in emotion recognition from facial, voice, and physiological signals, where the different modalities are treated independently. We then describe the challenging problem of multimodal emotion recognition and we advocate the use of probabilistic graphical models when fusing the different modalities. We also discuss the difficult issues of obtaining reliable affective data, obtaining ground truth for emotion recognition, and the use of unlabeled data.

  13. Adaptive Face Recognition via Structured Representation

    Institute of Scientific and Technical Information of China (English)

    ZHANG Yu-hua; ZENG Xiao-ming

    2014-01-01

    In this paper, we propose a face recognition approach, Structured Sparse Representation-based classification, for the case when the measurement of the test sample is less than the number of training samples of each subject. When this condition is not satisfied, we exploit the Nearest Subspace approach to classify the test sample. In order to cover all cases, we combine the two approaches into an adaptive classification method, the Adaptive approach. The Adaptive approach yields greater recognition accuracy than the SRC approach and the CRC_RLS approach at a low sample rate on the Extended Yale B dataset, and it is more efficient than the other two approaches.

  14. Autism and emotional face-viewing.

    Science.gov (United States)

    Åsberg Johnels, Jakob; Hovey, Daniel; Zürcher, Nicole; Hippolyte, Loyse; Lemonnier, Eric; Gillberg, Christopher; Hadjikhani, Nouchine

    2016-11-28

    Atypical patterns of face-scanning in individuals with autism spectrum disorder (ASD) may contribute to difficulties in social interactions, but there is little agreement regarding what exactly characterizes face-viewing in ASD. In addition, little research has examined how face-viewing is modulated by the emotional expression of the stimuli, in individuals with or without ASD. We used eye-tracking to explore viewing patterns during perception of dynamic emotional facial expressions in relatively large groups of individuals with (n = 57) and without ASD (n = 58) and examined diagnostic- and age-related effects, after subgrouping children and adolescents (≤18 years), on the one hand, and adults (>18 years), on the other. Results showed that children/adolescents with ASD fixated the mouth of happy and angry faces less than their typically developing (TD) peers, and conversely looked more to the eyes of happy faces. Moreover, while all groups fixated the mouth in happy faces more than in other expressions, children/adolescents with ASD did relatively less so. Correlation analysis showed a similar lack of relative orientation toward the mouth of smiling faces in TD children/adolescents with high autistic traits, as measured by the Autism-Spectrum Quotient (AQ). Among adults, participants with ASD attended less to the eyes only for neutral faces. Our study shows that the emotional content of a face influences gaze behavior, and that this effect is not fully developed in children/adolescents with ASD. Interestingly, this lack of differentiation observed in the younger ASD group was also seen in younger TD individuals with higher AQ scores.

  15. Biologically inspired emotion recognition from speech

    Science.gov (United States)

    Caponetti, Laura; Buscicchio, Cosimo Alessandro; Castellano, Giovanna

    2011-12-01

    Emotion recognition has become a fundamental task in human-computer interaction systems. In this article, we propose an emotion recognition approach based on biologically inspired methods. Specifically, emotion classification is performed using a long short-term memory (LSTM) recurrent neural network which is able to recognize long-range dependencies between successive temporal patterns. We propose to represent data using features derived from two different models: mel-frequency cepstral coefficients (MFCC) and the Lyon cochlear model. In the experimental phase, results obtained from the LSTM network and the two different feature sets are compared, showing that features derived from the Lyon cochlear model give better recognition results in comparison with those obtained with the traditional MFCC representation.
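
    A minimal sketch of the MFCC-plus-LSTM branch, assuming librosa and PyTorch; the file name is hypothetical, and the Lyon cochlear-model features (the record's better-performing alternative) have no standard library call assumed here.

        import librosa
        import torch
        import torch.nn as nn

        class EmotionLSTM(nn.Module):
            """LSTM over per-frame acoustic features; the final hidden state
            feeds a linear layer producing one logit per emotion class."""
            def __init__(self, n_features=13, hidden=64, n_classes=6):
                super().__init__()
                self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
                self.head = nn.Linear(hidden, n_classes)

            def forward(self, x):                  # x: (batch, frames, n_features)
                _, (h, _) = self.lstm(x)
                return self.head(h[-1])

        def mfcc_sequence(wav_path):
            """Frame-level MFCCs shaped (1, frames, 13) for the network."""
            y, sr = librosa.load(wav_path, sr=None)
            mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, frames)
            return torch.tensor(mfcc.T, dtype=torch.float32).unsqueeze(0)

        # logits = EmotionLSTM()(mfcc_sequence("sample.wav"))    # hypothetical file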

  16. Biologically inspired emotion recognition from speech

    Directory of Open Access Journals (Sweden)

    Buscicchio Cosimo

    2011-01-01

    Emotion recognition has become a fundamental task in human-computer interaction systems. In this article, we propose an emotion recognition approach based on biologically inspired methods. Specifically, emotion classification is performed using a long short-term memory (LSTM) recurrent neural network which is able to recognize long-range dependencies between successive temporal patterns. We propose to represent data using features derived from two different models: mel-frequency cepstral coefficients (MFCC) and the Lyon cochlear model. In the experimental phase, results obtained from the LSTM network and the two different feature sets are compared, showing that features derived from the Lyon cochlear model give better recognition results in comparison with those obtained with the traditional MFCC representation.

  17. Sad and happy facial emotion recognition impairment in progressive supranuclear palsy in comparison with Parkinson's disease.

    Science.gov (United States)

    Pontieri, Francesco E; Assogna, Francesca; Stefani, Alessandro; Pierantozzi, Mariangela; Meco, Giuseppe; Benincasa, Dario; Colosimo, Carlo; Caltagirone, Carlo; Spalletta, Gianfranco

    2012-08-01

    The severity of motor and non-motor symptoms of progressive supranuclear palsy (PSP) has a profound impact on the social interactions of affected individuals and may, consequently, contribute to altered emotion recognition. Here we investigated facial emotion recognition impairment in PSP with respect to Parkinson's disease (PD), with the primary aim of outlining the differences between the two disorders. Moreover, we applied an intensity-dependent paradigm to examine the different thresholds for encoding emotional faces in PSP and PD. The Penn emotion recognition test (PERT) was used to assess facial emotion recognition ability in PSP and PD patients. The 2 groups were matched for age, disease duration, global cognition, depression, anxiety, and daily L-Dopa intake. PSP patients displayed significantly lower recognition of sad and happy emotional faces than PD patients. This applied to global recognition, as well as to low-intensity and high-intensity facial emotion recognition. These results indicate a specific impairment in the recognition of sad and happy facial emotions in PSP relative to PD patients. The differences may depend upon diverse involvement of the cortical-subcortical loops integrating emotional states and cognition in the two conditions, and might represent a neuropsychological correlate of the apathetic syndrome frequently encountered in PSP.

  18. Face Recognition using Optimal Representation Ensemble

    CERN Document Server

    Li, Hanxi; Gao, Yongsheng

    2011-01-01

    Recently, the face recognizers based on linear representations have been shown to deliver state-of-the-art performance. In real-world applications, however, face images usually suffer from expressions, disguises and random occlusions. The problematic facial parts undermine the validity of the linear-subspace assumption and thus the recognition performance deteriorates significantly. In this work, we address the problem in a learning-inference-mixed fashion. By observing that the linear-subspace assumption is more reliable on certain face patches rather than on the holistic face, some Bayesian Patch Representations (BPRs) are randomly generated and interpreted according to the Bayes' theory. We then train an ensemble model over the patch-representations by minimizing the empirical risk w.r.t the "leave-one-out margins". The obtained model is termed Optimal Representation Ensemble (ORE), since it guarantees the optimality from the perspective of Empirical Risk Minimization. To handle the unknown patterns in tes...

  19. AN EVEN COMPONENT BASED FACE RECOGNITION METHOD

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    This paper presents a novel face recognition algorithm. To provide additional variations to the training data set, even-odd decomposition is adopted, and only the even components (half-even face images) are used for further processing. To tackle the shift-variance problem, the Fourier transform is applied to the half-even face images. To reduce the dimension of an image, PCA (Principal Component Analysis) features are extracted from the amplitude spectrum of the half-even face images. Finally, a nearest neighbor classifier is employed for the task of classification. Experimental results on the ORL database show that the proposed method outperforms, in terms of accuracy, the conventional eigenface method which applies PCA on the original images, as well as the eigenface method which uses both the original images and their mirror images as the training set.
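
    A minimal sketch of the feature chain, under one simple convention for the even component (averaging the image with its point reflection); the gallery, labels and component count are placeholders, and the train/test split is elided.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier

        def half_even_amplitude(img):
            """Even component of a grayscale image, then the shift-invariant
            Fourier amplitude spectrum, flattened to a feature vector."""
            even = 0.5 * (img + img[::-1, ::-1])   # average with point reflection
            return np.abs(np.fft.fft2(even)).ravel()

        # Hypothetical gallery of 2D grayscale arrays with identity labels:
        # X = np.stack([half_even_amplitude(im) for im in images])
        # X = PCA(n_components=40).fit_transform(X)
        # clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)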

  20. Human Face Recognition Using Convolutional Neural Networks

    Directory of Open Access Journals (Sweden)

    Răzvan-Daniel Albu

    2009-10-01

    In this paper, I present a novel hybrid face recognition approach based on a convolutional neural architecture, designed to robustly detect highly variable face patterns. The convolutional network extracts successively larger features in a hierarchical set of layers. The weights of the trained neural networks are used to create kernel windows for feature extraction in a 3-stage algorithm. I present experimental results illustrating the efficiency of the proposed approach, using a database of 796 images of 159 individuals from Reims University which contains quite a high degree of variability in expression, pose, and facial details.
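
    The hierarchical feature extraction can be sketched with a small convolutional stack; the layer sizes and identity count below are illustrative assumptions, not the cited architecture.

        import torch
        import torch.nn as nn

        class FaceCNN(nn.Module):
            """Three convolution/pooling stages extracting successively larger
            features, followed by a linear classifier over the identities."""
            def __init__(self, n_identities=159):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 8, 5), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(8, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
                )
                self.classifier = nn.LazyLinear(n_identities)

            def forward(self, x):                  # x: (batch, 1, H, W) grayscale
                return self.classifier(self.features(x).flatten(1))

        # logits = FaceCNN()(torch.randn(4, 1, 64, 64))   # four dummy 64x64 faces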

  1. Face Recognition (Patterns Matching & Bio-Metrics

    Directory of Open Access Journals (Sweden)

    Jignesh Dhirubhai Hirapara

    2012-08-01

    Government agencies are investing a considerable amount of resources into improving security systems as a result of recent terrorist events that dangerously exposed flaws and weaknesses in today’s safety mechanisms. Badge- or password-based authentication procedures are too easy to hack. Biometrics represents a valid alternative, but biometric methods suffer from drawbacks as well. Iris scanning, for example, is very reliable but too intrusive; fingerprints are socially accepted, but not applicable to non-consenting people. On the other hand, face recognition represents a good compromise between what is socially acceptable and what is reliable, even when operating under controlled conditions. In the last decade, many algorithms based on linear/nonlinear methods, neural networks, wavelets, etc. have been proposed. Nevertheless, the Face Recognition Vendor Test 2002 showed that most of these approaches encountered problems in outdoor conditions, which lowered their reliability compared to state-of-the-art biometrics.

  2. Enhanced Face Recognition using Data Fusion

    Directory of Open Access Journals (Sweden)

    Alaa Eleyan

    2012-12-01

    In this paper we scrutinize the influence of fusion on face recognition performance. In a pattern recognition task, benefiting from different uncorrelated observations and performing fusion at the feature and/or decision levels improves the overall performance. In the feature fusion approach, we fuse (concatenate) the feature vectors obtained using different feature extractors for the same image, and classification is then performed using different similarity measures. In the decision fusion approach, fusion is performed at the decision level, where decisions from different algorithms are fused using majority voting. The proposed method was tested using face images having different facial expressions and conditions obtained from the ORL and FRAV2D databases. Simulation results show that both the feature and decision fusion approaches significantly outperform the single performances of the fused algorithms.
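
    Both fusion levels reduce to a few lines; the feature blocks and per-classifier predictions below are hypothetical arrays (integer class labels are assumed for the vote).

        import numpy as np

        def fuse_features(*feature_blocks):
            """Feature-level fusion: concatenate per-image vectors coming
            from different feature extractors."""
            return np.concatenate(feature_blocks, axis=1)

        def fuse_decisions(*label_votes):
            """Decision-level fusion: per-sample majority vote over the
            integer label predictions of several classifiers."""
            votes = np.stack(label_votes)          # (n_classifiers, n_samples)
            return np.array([np.bincount(col).argmax() for col in votes.T])

        # fused_X = fuse_features(pca_feats, dct_feats)            # hypothetical
        # fused_y = fuse_decisions(pred_pca, pred_dct, pred_lbp)   # hypothetical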

  3. I feel your fear: shared touch between faces facilitates recognition of fearful facial expressions.

    Science.gov (United States)

    Maister, Lara; Tsiakkas, Eleni; Tsakiris, Manos

    2013-02-01

    Embodied simulation accounts of emotion recognition claim that we vicariously activate somatosensory representations to simulate, and eventually understand, how others feel. Interestingly, mirror-touch synesthetes, who experience touch when observing others being touched, show both enhanced somatosensory simulation and superior recognition of emotional facial expressions. We employed synchronous visuotactile stimulation to experimentally induce a similar experience of "mirror touch" in nonsynesthetic participants. Seeing someone else's face being touched at the same time as one's own face results in the "enfacement illusion," which has been previously shown to blur self-other boundaries. We demonstrate that the enfacement illusion also facilitates emotion recognition, and, importantly, this facilitatory effect is specific to fearful facial expressions. Shared synchronous multisensory experiences may experimentally facilitate somatosensory simulation mechanisms involved in the recognition of fearful emotional expressions.

  4. Face Recognition Based on Nonlinear Feature Approach

    Directory of Open Access Journals (Sweden)

    Eimad E.A. Abusham

    2008-01-01

    Feature extraction techniques are widely used to reduce the complexity of high-dimensional data. Nonlinear feature extraction via Locally Linear Embedding (LLE) has attracted much attention due to its high performance. In this paper, we propose a novel approach for face recognition that integrates the nonlinear dimensionality reduction of Locally Linear Embedding with Local Fisher Discriminant Analysis (LFDA) to improve the discriminating power of the extracted features, maximizing between-class separation while preserving within-class local structure. Extensive experimentation performed on the CMU-PIE database indicates that the proposed methodology outperforms benchmark methods such as Principal Component Analysis (PCA) and Fisher Discriminant Analysis (FDA). The results showed that a recognition rate of 95% could be obtained using our proposed method.
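
    The shape of such a pipeline, with plain LDA standing in for LFDA (scikit-learn ships no local Fisher variant) and a nearest-neighbor classifier on top, might look as follows; the component counts are illustrative.

        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.manifold import LocallyLinearEmbedding
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline

        pipeline = make_pipeline(
            LocallyLinearEmbedding(n_components=50, n_neighbors=10),
            LinearDiscriminantAnalysis(),        # stand-in for the LFDA step
            KNeighborsClassifier(n_neighbors=1),
        )
        # pipeline.fit(train_faces, train_labels)
        # accuracy = pipeline.score(test_faces, test_labels)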

  5. Gender-Based Prototype Formation in Face Recognition

    Science.gov (United States)

    Baudouin, Jean-Yves; Brochard, Renaud

    2011-01-01

    The role of gender categories in prototype formation during face recognition was investigated in 2 experiments. The participants were asked to learn individual faces and then to recognize them. During recognition, individual faces were mixed with faces, which were blended faces of same or different genders. The results of the 2 experiments showed…

  6. Face Expression Recognition and Analysis: The State of the Art

    CERN Document Server

    Bettadapura, Vinay

    2012-01-01

    The automatic recognition of facial expressions has been an active research topic since the early nineties. There have been several advances in the past few years in terms of face detection and tracking, feature extraction mechanisms and the techniques used for expression classification. This paper surveys some of the published work from 2001 to date. The paper presents a time-line view of the advances made in this field, the applications of automatic face expression recognizers, the characteristics of an ideal system, the databases that have been used and the advances made in terms of their standardization and a detailed summary of the state of the art. The paper also discusses facial parameterization using FACS Action Units (AUs) and MPEG-4 Facial Animation Parameters (FAPs) and the recent advances in face detection, tracking and feature extraction methods. Notes have also been presented on emotions, expressions and facial features, discussion on the six prototypic expressions and the recent studies on e...

  7. Applying Artificial Neural Networks for Face Recognition

    Directory of Open Access Journals (Sweden)

    Thai Hoang Le

    2011-01-01

    This paper introduces some novel models for all steps of a face recognition system. In the face detection step, we propose a hybrid model combining AdaBoost and an Artificial Neural Network (ABANN) to solve the process efficiently. In the next step, labeled faces detected by ABANN are aligned by an Active Shape Model and a Multi-Layer Perceptron. In this alignment step, we propose a new 2D local texture model based on a Multi-Layer Perceptron; the classifier of the model significantly improves the accuracy and the robustness of local searching on faces with expression variation and ambiguous contours. In the feature extraction step, we describe a methodology for improving the efficiency by the association of two methods: a geometric-feature-based method and the Independent Component Analysis method. In the face matching step, we apply a model combining many Neural Networks for matching geometric features of the human face; the model links many Neural Networks together, so we call it a Multi Artificial Neural Network. The MIT + CMU database is used for evaluating our proposed methods for face detection and alignment. Finally, the experimental results of all steps on the CalTech database show the feasibility of our proposed model.

  8. Is Emotion Recognition Related to Core Symptoms of Childhood ADHD?

    Science.gov (United States)

    Tehrani-Doost, Mehdi; Noorazar, Gholamreza; Shahrivar, Zahra; Banaraki, Anahita Khorrami; Beigi, Parvane Farhad; Noorian, Nahid

    2017-01-01

    Objective: Children with attention deficit/hyperactivity disorder (ADHD) have some problems in social relationships which may be related to their deficit in recognizing emotional expressions. It is not clear whether the deficit in emotion recognition is secondary to core symptoms of ADHD or can be considered an independent symptom. This study aimed to evaluate the ability to detect emotional faces and its relation to inattention and hyperactivity-impulsivity in children with ADHD compared to a typically developing (TD) group. Methods: Twenty-eight boys diagnosed as having ADHD, aged seven to 12 years, were compared to 27 TD boys using a computerized Facial Emotion Recognition Task (FERT). Conners’ Parent Rating Scale (CPRS) and Continuous Performance Test II (CPT II) were also administered to assess the severity of inattention and impulsivity. Results: The percentages of angry, happy and sad faces detected by children with ADHD were significantly lower than those of the TD group, and this difference remained when hyperactivity-impulsivity was added to the model. Conclusion: It can be concluded that children with ADHD suffer from some impairments in recognizing angry, happy and sad faces. This deficit may be related to inattention and hyperactivity-impulsivity. PMID:28331501

  9. A reciprocal model of face recognition and autistic traits: evidence from an individual differences perspective.

    Science.gov (United States)

    Halliday, Drew W R; MacDonald, Stuart W S; Scherf, K Suzanne; Tanaka, James W

    2014-01-01

    Although not a core symptom of the disorder, individuals with autism often exhibit selective impairments in their face processing abilities. Importantly, the reciprocal connection between autistic traits and face perception has rarely been examined within the typically developing population. In this study, university participants from the social sciences, physical sciences, and humanities completed a battery of measures that assessed face, object and emotion recognition abilities, general perceptual-cognitive style, and sub-clinical autistic traits (the Autism Quotient (AQ)). We employed separate hierarchical multiple regression analyses to evaluate which factors could predict face recognition scores and AQ scores. Gender, object recognition performance, and AQ scores predicted face recognition behaviour. Specifically, males, individuals with more autistic traits, and those with lower object recognition scores performed more poorly on the face recognition test. Conversely, university major, gender and face recognition performance reliably predicted AQ scores. Science majors, males, and individuals with poor face recognition skills showed more autistic-like traits. These results suggest that the broader autism phenotype is associated with lower face recognition abilities, even among typically developing individuals.

  10. Face and body recognition show similar improvement during childhood.

    Science.gov (United States)

    Bank, Samantha; Rhodes, Gillian; Read, Ainsley; Jeffery, Linda

    2015-09-01

    Adults are proficient in extracting identity cues from faces. This proficiency develops slowly during childhood, with performance not reaching adult levels until adolescence. Bodies are similar to faces in that they convey identity cues and rely on specialized perceptual mechanisms. However, it is currently unclear whether body recognition mirrors the slow development of face recognition during childhood. Recent evidence suggests that body recognition develops faster than face recognition. Here we measured body and face recognition in 6- and 10-year-old children and adults to determine whether these two skills show different amounts of improvement during childhood. We found no evidence that they do. Face and body recognition showed similar improvement with age, and children, like adults, were better at recognizing faces than bodies. These results suggest that the mechanisms of face and body memory mature at a similar rate or that improvement of more general cognitive and perceptual skills underlies improvement of both face and body recognition.

  11. Semantic information can facilitate covert face recognition in congenital prosopagnosia.

    Science.gov (United States)

    Rivolta, Davide; Schmalzl, Laura; Coltheart, Max; Palermo, Romina

    2010-11-01

    People with congenital prosopagnosia have never developed the ability to accurately recognize faces. This single-case investigation systematically examines covert and overt face recognition in "C.," a 69-year-old woman with congenital prosopagnosia. Specifically, we: (a) describe the first assessment of covert face recognition in congenital prosopagnosia using multiple tasks; (b) show that semantic information can contribute to covert recognition; and (c) provide a theoretical explanation for the mechanisms underlying covert face recognition.

  12. Varying face occlusion detection and iterative recovery for face recognition

    Science.gov (United States)

    Wang, Meng; Hu, Zhengping; Sun, Zhe; Zhao, Shuhuan; Sun, Mei

    2017-05-01

    In most sparse representation methods for face recognition (FR), occlusion problems are usually solved by removing the occluded parts of both query samples and training samples before performing recognition. This practice ignores the global features of the facial image and may lead to unsatisfactory results due to the limitation of local features. Considering this drawback, we propose a method called varying occlusion detection and iterative recovery for FR. The main contributions of our method are as follows: (1) to detect an accurate occlusion area of facial images, a combination of image processing and intersection-based clustering is used; (2) according to the accurate occlusion map, new integrated facial images are recovered iteratively and put into the recognition process; and (3) the effectiveness of our method on recognition accuracy is verified by comparing it with three typical occlusion map detection methods. Experiments show that the proposed method has highly accurate detection and recovery performance and that it outperforms several similar state-of-the-art methods under partial contiguous occlusion.

  13. Incremental Nonnegative Matrix Factorization for Face Recognition

    Directory of Open Access Journals (Sweden)

    Wen-Sheng Chen

    2008-01-01

    Nonnegative matrix factorization (NMF) is a promising approach for local feature extraction in face recognition tasks. However, there are two major drawbacks in almost all existing NMF-based methods. One shortcoming is that the computational cost is expensive for large matrix decomposition. The other is that repetitive learning must be conducted whenever the training samples or classes are updated. To overcome these two limitations, this paper proposes a novel incremental nonnegative matrix factorization (INMF) for face representation and recognition. The proposed INMF approach is based on a novel constraint criterion and our previous block strategy. It thus has some good properties, such as low computational complexity and a sparse coefficient matrix; also, the coefficient column vectors between different classes are orthogonal. In particular, it can be applied to incremental learning. Two face databases, namely the FERET and CMU PIE face databases, are selected for evaluation. Compared with PCA and some state-of-the-art NMF-based methods, our INMF approach gives the best performance.
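
    Batch NMF (the non-incremental baseline; scikit-learn offers no incremental variant) already shows the representation side: nonnegative pixel data factor as X ≈ WH, and the rows of W serve as part-based codes for classification. The data below are placeholders.

        import numpy as np
        from sklearn.decomposition import NMF
        from sklearn.neighbors import KNeighborsClassifier

        faces = np.random.rand(200, 64 * 64)            # placeholder gallery
        labels = np.random.randint(0, 20, size=200)     # placeholder identities
        nmf = NMF(n_components=49, init="nndsvda", max_iter=400)
        codes = nmf.fit_transform(faces)                # (200, 49) coefficients
        clf = KNeighborsClassifier(n_neighbors=1).fit(codes, labels)
        # probe_code = nmf.transform(probe.reshape(1, -1)); clf.predict(probe_code)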

  14. Human Face Recognition using Line Features

    CERN Document Server

    Bhowmik, Mrinal Kanti; Nasipuri, Mita; Basu, Dipak Kumar; Kundu, Mahantapas

    2010-01-01

    In this work we investigate a novel approach to handle the challenges of face recognition, which include rotation, scale, occlusion, illumination, etc. Here, we have used thermal face images, as they can minimize the effect of illumination changes and occlusion due to moustaches, beards, adornments, etc. The proposed approach registers the training and testing thermal face images in polar coordinates, which can handle the complications introduced by scaling and rotation. Line features are extracted from the thermal polar images, and feature vectors are constructed from these lines. The feature vectors thus obtained pass through principal component analysis (PCA) for dimensionality reduction. Finally, the images projected into eigenspace are classified using a multi-layer perceptron. In the experiments we have used the Object Tracking and Classification Beyond Visible Spectrum (OTCBVS) database. Experimental results show that the proposed approach significantly improves the verificatio...

  15. Weighted Attribute Fusion Model for Face Recognition

    CERN Document Server

    Sakthivel, S

    2010-01-01

    Recognizing a face based on its attributes is an easy task for a human to perform, as it is a cognitive process. In recent years, face recognition has been achieved with different kinds of facial features, used separately or in a combined manner. Currently, feature fusion methods and parallel methods integrate multiple feature sets at different levels. However, this integration and the combinational methods do not guarantee better results. Hence, to achieve better results, a feature fusion model with multiple weighted facial attribute sets is selected. For this feature model, face images from the predefined Olivetti Research Laboratory (ORL) data set were taken and applied to different methods: a Principal Component Analysis (PCA)-based Eigen feature extraction technique, a Discrete Cosine Transformation (DCT)-based feature extraction technique, a histogram-based feature extraction technique, and simple intensity-based features. The extracted feature set obt...

  16. Eigenvector Weighting Function in Face Recognition

    Directory of Open Access Journals (Sweden)

    Pang Ying Han

    2011-01-01

    Graph-based subspace learning is a class of dimensionality reduction techniques in face recognition. The technique reveals the local manifold structure of face data that is hidden in the image space via a linear projection. However, real-world face data may be too complex to measure due to both external imaging noises and the intra-class variations of the face images. Hence, features extracted by the graph-based technique could be noisy. An appropriate weight should be imposed on the data features for better data discrimination. In this paper, a piecewise weighting function, known as the Eigenvector Weighting Function (EWF), is proposed and implemented in two graph-based subspace learning techniques, namely Locality Preserving Projection and Neighbourhood Preserving Embedding. Specifically, the computed projection subspace of the learning approach is decomposed into three partitions: a subspace due to intra-class variations, an intrinsic face subspace, and a subspace which is attributed to imaging noises. Projected data features are weighted differently in these subspaces to emphasize the intrinsic face subspace while penalizing the other two subspaces. Experiments on the FERET and FRGC databases are conducted to show the promising performance of the proposed technique.

  17. Emotion recognition a pattern analysis approach

    CERN Document Server

    Konar, Amit

    2014-01-01

    Offers both foundations and advances on emotion recognition in a single volume; provides a thorough and insightful introduction to the subject by utilizing computational tools of diverse domains; inspires young researchers to prepare themselves for their own research; demonstrates directions of future research through new technologies, such as Microsoft Kinect, EEG systems, etc.

  18. Faces and bodies: perception and mimicry of emotionally congruent and incongruent facial and bodily expressions

    Directory of Open Access Journals (Sweden)

    Mariska eKret

    2013-02-01

    Traditional emotion theories stress the importance of the face in the expression of emotions, but bodily expressions are becoming increasingly important. Here we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals, and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and from emotionally congruent and incongruent face-body compounds. Participants’ fixations were measured and their pupil size recorded with eye-tracking equipment, and their facial reactions measured with electromyography (EMG). The behavioral results support our prediction that the recognition of a facial expression is improved in the context of a matching posture and, importantly, also vice versa. From their facial expressions, it appeared that observers reacted with signs of negative emotionality (increased corrugator activity) to angry and fearful facial expressions and with positive emotionality (increased zygomaticus activity) to happy facial expressions. As we predicted and found, angry and fearful cues from the face or the body attracted more attention than happy cues. We further observed that responses evoked by angry cues were amplified in individuals with high anxiety scores. In sum, we show that people process bodily expressions of emotion in a similar fashion as facial expressions, and that congruency between the emotional signals from the face and body improves recognition of the emotion.

  19. Optimization Methods in Emotion Recognition System

    Directory of Open Access Journals (Sweden)

    L. Povoda

    2016-09-01

    Emotions play a big role in our everyday communication and contain important information. This work describes a novel method of automatic emotion recognition from textual data. The method is based on well-known data mining techniques and a novel approach based on a parallel run of SVM (Support Vector Machine) classifiers, text preprocessing, and three optimization methods: sequential elimination of attributes, parameter optimization based on token groups, and a method of extending the training data sets during practical testing and final tuning before production release. We outperformed current state-of-the-art methods, and the results were validated on bigger data sets (3346 manually labelled samples), which are less prone to overfitting compared to related works. The accuracy achieved in this work is 86.89% for recognition of 5 emotional classes. The experiments were performed in a real-world helpdesk environment processing the Czech language, but the proposed methodology is general and can be applied to many different languages.
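
    One plausible minimal skeleton for the SVM stage, assuming scikit-learn: bag-of-words features and one linear SVM per emotion class run in parallel. The record's token-group and attribute-elimination optimizations are not reproduced here, and the corpus is hypothetical.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        model = make_pipeline(
            TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word + bigram features
            OneVsRestClassifier(LinearSVC(), n_jobs=-1),    # parallel per-class SVMs
        )
        # model.fit(texts, emotion_labels)                  # hypothetical corpus
        # model.predict(["I am very disappointed with the service"])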

  20. Speech emotion recognition with unsupervised feature learning

    Institute of Scientific and Technical Information of China (English)

    Zheng-wei HUANG; Wen-tao XUE; Qi-rong MAO

    2015-01-01

    Emotion-based features are critical for achieving high performance in a speech emotion recognition (SER) system. In general, it is difficult to develop these features due to the ambiguity of the ground-truth. In this paper, we apply several unsupervised feature learning algorithms (including K-means clustering, the sparse auto-encoder, and sparse restricted Boltzmann machines), which have promise for learning task-related features by using unlabeled data, to speech emotion recognition. We then evaluate the performance of the proposed approach and present a detailed analysis of the effect of two important factors in the model setup, the content window size and the number of hidden layer nodes. Experimental results show that larger content windows and more hidden nodes contribute to higher performance. We also show that the two-layer network cannot explicitly improve performance compared to a single-layer network.
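
    A common minimal instance of the idea, with K-means as the unsupervised learner: cluster unlabeled frame-level features into a codebook, then encode each utterance as a histogram of cluster assignments for a downstream classifier. The histogram encoding is one standard variant, not necessarily the paper's exact setup.

        import numpy as np
        from sklearn.cluster import KMeans

        def learn_codebook(unlabeled_frames, k=128):
            """Unsupervised stage: cluster frame-level feature vectors
            (e.g. spectral frames) into k 'audio words'."""
            return KMeans(n_clusters=k, n_init=4).fit(unlabeled_frames)

        def encode(utterance_frames, codebook):
            """Represent one utterance as a normalized histogram of its
            frames' nearest clusters."""
            hist = np.bincount(codebook.predict(utterance_frames),
                               minlength=codebook.n_clusters)
            return hist / max(hist.sum(), 1)

        # codebook = learn_codebook(np.vstack(all_unlabeled_frames))
        # X = np.stack([encode(f, codebook) for f in utterances])  # then fit an SVM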

  1. ECG Signal Feature Selection for Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Lichen Xun

    2013-01-01

    This paper studies the selection of ECG-based features for emotion recognition. In the feature selection process, we start from existing feature selection algorithms and also pay special attention to intuitively meaningful values on the ECG waveform. Through the use of ANOVA and heuristic search, we picked out features that distinguish the two emotions joy and pleasure; we then combine this with a pathological analysis of ECG signals from the point of view of medical experts to discuss the logical correspondence between ECG waveforms and emotion discrimination. In experiments, the method picked out only five features and reached a 92% accuracy rate in the recognition of joy and pleasure.
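
    The ANOVA stage maps directly onto an F-test feature selector; everything below (the feature table, labels, the choice of five features and an SVM) is a placeholder standing in for the paper's ECG measurements.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.svm import SVC

        X = np.random.rand(120, 30)              # placeholder: 120 trials, 30 ECG features
        y = np.random.randint(0, 2, size=120)    # placeholder: joy vs. pleasure labels
        selector = SelectKBest(score_func=f_classif, k=5)   # ANOVA F-test, keep 5
        X_small = selector.fit_transform(X, y)
        clf = SVC().fit(X_small, y)
        # selector.get_support(indices=True) lists which five features survived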

  2. Research on Face Recognition Based on Embedded System

    Directory of Open Access Journals (Sweden)

    Hong Zhao

    2013-01-01

    Because face recognition requires storing a large amount of image feature data and executing complex calculations, the face recognition process used to be realized only on high-performance PCs. In this paper, OpenCV Haar-like facial features were used to identify the face region, Principal Component Analysis (PCA) was employed for quick extraction of face features, and the Euclidean distance was adopted for face recognition; in this way, the data volume and computational complexity are reduced effectively, and face recognition can be carried out on an embedded platform. Finally, an embedded face recognition system was constructed on the Tiny6410 embedded platform. The test results showed that the system operates stably with a high recognition rate, and it can be used for portable and mobile identification and authentication.
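
    The detection-plus-matching chain described here maps onto a short OpenCV sketch; the cascade file ships with the opencv-python package, while the PCA projection of the gallery (built beforehand, e.g. with scikit-learn or cv2.PCACompute) is assumed rather than shown.

        import cv2
        import numpy as np

        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        def crop_face(gray, size=(64, 64)):
            """Detect the largest face with the Haar cascade and return it
            as a fixed-size flattened patch (or None if nothing is found)."""
            boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
            if len(boxes) == 0:
                return None
            x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
            return cv2.resize(gray[y:y + h, x:x + w], size).ravel().astype(np.float32)

        def nearest_identity(probe_code, gallery_codes, labels):
            """Euclidean nearest neighbour in the (precomputed) PCA space."""
            dists = np.linalg.norm(gallery_codes - probe_code, axis=1)
            return labels[int(np.argmin(dists))]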

  3. Complex Wavelet Transform-Based Face Recognition

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Complex approximately analytic wavelets provide a local multiscale description of images with good directional selectivity and invariance to shifts and in-plane rotations. Similar to Gabor wavelets, they are insensitive to illumination variations and facial expression changes. The complex wavelet transform is, however, less redundant and computationally efficient. In this paper, we first construct complex approximately analytic wavelets in the single-tree context, which possess Gabor-like characteristics. We then investigate the recently developed dual-tree complex wavelet transform (DT-CWT) and the single-tree complex wavelet transform (ST-CWT) for the face recognition problem. Extensive experiments are carried out on standard databases. The resulting complex wavelet-based feature vectors are as discriminating as the Gabor wavelet-derived features and at the same time are of lower dimension when compared with that of Gabor wavelets. In all experiments, on two well-known databases, namely the FERET and ORL databases, complex wavelets equaled or surpassed the performance of Gabor wavelets in recognition rate when an equal number of orientations and scales is used. These findings indicate that complex wavelets can provide a successful alternative to Gabor wavelets for face recognition.

  4. A Fuzzy Neural Model for Face Recognition

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In this paper, a fuzzy neural model is proposed for face recognition. Each rule in the proposed fuzzy neural model is used to estimate one cluster of the pattern distribution in a form that differs from the classical possibility density function. Through self-adaptive learning and fuzzy inference, a confidence value is assigned to a given pattern to denote the possibility of the pattern belonging to some certain class/subject. The architecture of the whole system takes the structure of one-class-in-one-network (OCON), which has many advantages such as easy convergence, suitability for distributed applications, quick retrieval, and incremental training. Novel methods are used to determine the number of fuzzy rules and initialize fuzzy subsets. The proposed approach has the characteristics of quick learning/recognition speed, high recognition accuracy and robustness. The proposed approach can even recognize very low-resolution face images (e.g., 7x6) that humans cannot, when the number of subjects is not very large. Experiments on ORL demonstrate the effectiveness of the proposed approach, and an average error rate of 3.95% is obtained.

  5. Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism?

    Science.gov (United States)

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J; Kilner, James

    2017-01-01

    Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups of participants. Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. For emotion recognition, PD patients scored lower than HC on the Ekman total score and on the sub-scores for happiness, fear, anger and sadness. In the emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger. There was a significant positive correlation between emotion facial recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). PD patients showed difficulties in recognizing emotional facial expressions produced by others as well as in posing facial expressions of emotions.

  6. Facial Emotion Recognition and Expression in Parkinson’s Disease: An Emotional Mirror Mechanism?

    Science.gov (United States)

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J.; Kilner, James

    2017-01-01

    Background and aim: Parkinson’s disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups. Methods: Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-alternative forced-choice response format (emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. Results: For emotion recognition, PD patients reported lower scores than HC for the Ekman total score and for the happiness, fear, anger and sadness sub-scores. In the emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger. There was a significant positive correlation between emotion recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). Conclusions: PD patients showed difficulties both in recognizing and in posing emotional facial expressions.

  7. Cross-linguistic emotion recognition : Dutch, Korean and American English

    NARCIS (Netherlands)

    Choi, J.P.; Broersma, M.; Goudbeek, M.B.

    2012-01-01

    This study investigates the occurrence of asymmetries in cross-linguistic recognition of emotion in speech. Theories on emotion recognition do not consider asymmetries in the cross-linguistic recognition of emotion. To study perceptual asymmetries, a fully crossed design was used, with speakers and

  8. Face recognition: a model specific ability

    Directory of Open Access Journals (Sweden)

    Jeremy B Wilmer

    2014-10-01

    Full Text Available In our everyday lives, we view it as a matter of course that different people are good at different things. It can be surprising, in this context, to learn that most of what is known about cognitive ability variation across individuals concerns the broadest of all cognitive abilities, often labeled g. In contrast, our knowledge of specific abilities, those that correlate little with g, is severely constrained. Here, we draw upon our experience investigating an exceptionally specific ability, face recognition, to make the case that many specific abilities could easily have been missed. In making this case, we derive key insights from earlier false starts in the measurement of face recognition’s variation across individuals, and we highlight the convergence of factors that enabled the recent discovery that this variation is specific. We propose that the case of face recognition ability illustrates a set of tools and perspectives that could accelerate fruitful work on specific cognitive abilities. By revealing relatively independent dimensions of human ability, such work would enhance our capacity to understand the uniqueness of individual minds.

  9. From Emotion Recognition to Website Customizations

    Directory of Open Access Journals (Sweden)

    O. B. Efremides

    2016-07-01

    Full Text Available A computer vision system that recognizes the emotions of a website's user and customizes the context and the presentation of the website accordingly is presented herein. A logistic regression classifier is trained over the Extended Cohn-Kanade dataset in order to recognize the emotions. The Scale-Invariant Feature Transform algorithm, applied over two different parts of an image (the face and the eyes) without any special pixel-intensity preprocessing, is used to describe each emotion. The testing phase shows a significant improvement in the classification results. A toy web site is also developed as a proof of concept.
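
    As a hedged sketch of the pipeline just described, the snippet below extracts SIFT descriptors from (here synthetic) face images and feeds a logistic regression classifier. Averaging the variable number of descriptors into one fixed-length vector is an assumption for illustration; the random images merely stand in for a labelled corpus such as the Extended Cohn-Kanade dataset, and cv2.SIFT_create requires OpenCV 4.4 or later.

    import cv2
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    sift = cv2.SIFT_create()

    def sift_feature(gray_face):
        """One fixed-length vector per face: the mean of its 128-D SIFT
        descriptors (zero vector if no keypoints are found)."""
        _, desc = sift.detectAndCompute(gray_face, None)
        if desc is None:
            return np.zeros(128, np.float32)
        return desc.mean(axis=0)

    # Synthetic stand-in for cropped face (or eye-region) images with labels.
    rng = np.random.default_rng(1)
    faces = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(40)]
    labels = rng.integers(0, 2, 40)          # e.g., 0 = neutral, 1 = happy

    X = np.stack([sift_feature(f) for f in faces])
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    print(clf.predict(X[:5]))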

  10. Locally Linear Discriminate Embedding for Face Recognition

    Directory of Open Access Journals (Sweden)

    Eimad E. Abusham

    2009-01-01

    Full Text Available A novel method based on local nonlinear mapping is presented in this research. The method is called Locally Linear Discriminate Embedding (LLDE). LLDE preserves the local linear structure of a high-dimensional space and obtains a data representation in the low-dimensional embedding space that is as compact and accurate as possible before recognition. For computational simplicity and fast processing, a Radial Basis Function (RBF) classifier is integrated with LLDE; the RBF classifier operates on the low-dimensional embedding with reference to the variance of the data. To validate the proposed method, the CMU-PIE database was used, and the experiments conducted in this research revealed the efficiency of the proposed method in face recognition compared to linear and nonlinear approaches.
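
    A rough sketch of the two-stage idea follows: embed high-dimensional face vectors with a locally linear method, then classify in the embedded space with an RBF-based classifier. Note the substitutions: scikit-learn's unsupervised LLE and an RBF-kernel SVM stand in for the paper's supervised LLDE and RBF network, so this is an analogue, not their algorithm.

    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    X_train = rng.normal(size=(100, 1024))   # 100 flattened 32x32 face images
    y_train = rng.integers(0, 10, 100)       # 10 subjects

    # Stage 1: compact low-dimensional embedding of the gallery.
    lle = LocallyLinearEmbedding(n_neighbors=15, n_components=10)
    Z_train = lle.fit_transform(X_train)

    # Stage 2: RBF-kernel classifier on the embedded points.
    clf = SVC(kernel="rbf", gamma="scale").fit(Z_train, y_train)

    X_test = rng.normal(size=(5, 1024))
    print(clf.predict(lle.transform(X_test)))  # out-of-sample via LLE weights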

  11. Efficient Recognition of Human Faces from Video in Particle Filter

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Face recognition from video requires dealing with uncertainty in both tracking and recognition. This paper proposes an effective method for face recognition from video. In order to realize simultaneous tracking and recognition, fisherface-based recognition is combined with tracking into one model. This model is then embedded into a particle filter to perform face recognition from video. In order to improve the robustness of tracking, an expectation maximization (EM) algorithm is adopted to update the appearance model. The experimental results show that the proposed method performs well in tracking and recognition even in poor conditions such as occlusion and remarkable changes in lighting.
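
    The predict/weight/resample loop that a particle filter contributes to such a model can be shown in a few lines. Below is a bare-bones sketch over a face's (x, y) position; the Gaussian likelihood around a detector measurement is a placeholder for the fisherface-based appearance model the paper actually embeds.

    import numpy as np

    rng = np.random.default_rng(3)
    particles = rng.uniform(0, 100, size=(500, 2))  # guesses over the frame

    def step(particles, z, motion_std=3.0, meas_std=5.0):
        # Predict: random-walk motion model.
        particles = particles + rng.normal(0, motion_std, particles.shape)
        # Weight: likelihood of each particle given the measurement z.
        d2 = np.sum((particles - z) ** 2, axis=1)
        w = np.exp(-d2 / (2 * meas_std ** 2))
        w /= w.sum()
        # Resample: draw particles in proportion to their weights.
        idx = rng.choice(len(particles), size=len(particles), p=w)
        return particles[idx]

    for z in [(50, 50), (52, 51), (55, 53)]:        # noisy face detections
        particles = step(particles, np.asarray(z, dtype=float))
    print(particles.mean(axis=0))                    # tracked position estimate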

  12. Facial emotion recognition in bipolar disorder: a critical review.

    Science.gov (United States)

    Rocca, Cristiana Castanho de Almeida; Heuvel, Eveline van den; Caetano, Sheila C; Lafer, Beny

    2009-06-01

    Literature review of the controlled studies of the last 18 years on emotion recognition deficits in bipolar disorder. A bibliographical search for controlled studies with samples larger than 10 participants, from 1990 to June 2008, was completed in Medline, Lilacs, PubMed and ISI. Thirty-two papers were evaluated. Euthymic bipolar disorder patients presented impairment in recognizing disgust and fear. Manic bipolar disorder patients showed difficulty in recognizing fearful and sad faces. Pediatric bipolar disorder patients and children at risk presented impairment in their capacity to recognize emotions in adult and child faces. Bipolar disorder patients were more accurate in recognizing facial emotions than schizophrenic patients. Bipolar disorder patients present impaired recognition of disgust, fear and sadness that can be partially attributed to mood state. In mania, they have difficulty recognizing fear and disgust. Bipolar disorder patients were more accurate in recognizing emotions than depressive and schizophrenic patients. Bipolar disorder children present a tendency to misjudge extreme facial expressions as being moderate or mild in intensity. Affective and cognitive deficits in bipolar disorder vary according to mood state. Follow-up studies re-testing bipolar disorder patients after recovery are needed in order to investigate whether these abnormalities reflect a state or trait marker and can be considered an endophenotype. Future studies should aim at standardizing tasks and designs.

  13. Evidence for altered amygdala activation in schizophrenia in an adaptive emotion recognition task.

    Science.gov (United States)

    Mier, Daniela; Lis, Stefanie; Zygrodnik, Karina; Sauer, Carina; Ulferts, Jens; Gallhofer, Bernd; Kirsch, Peter

    2014-03-30

    Deficits in social cognition seem to present an intermediate phenotype for schizophrenia, and are known to be associated with an altered amygdala response to faces. However, current results are heterogeneous with respect to whether this altered amygdala response in schizophrenia is hypoactive or hyperactive in nature. The present study used functional magnetic resonance imaging to investigate emotion-specific amygdala activation in schizophrenia using a novel adaptive emotion recognition paradigm. Participants comprised 11 schizophrenia outpatients and 16 healthy controls who viewed face stimuli expressing emotions of anger, fear, happiness, and disgust, as well as neutral expressions. The adaptive emotion recognition approach allows the assessment of group differences in both emotion recognition performance and associated neuronal activity while also ensuring a comparable number of correctly recognized emotions between groups. Schizophrenia participants were slower and had a negative bias in emotion recognition. In addition, they showed reduced differential activation during recognition of emotional compared with neutral expressions. Correlation analyses revealed an association of a negative bias with amygdala activation for neutral facial expressions that was specific to the patient group. We replicated previous findings of affected emotion recognition in schizophrenia. Furthermore, we demonstrated that altered amygdala activation in the patient group was associated with the occurrence of a negative bias. These results provide further evidence for impaired social cognition in schizophrenia and point to a central role of the amygdala in negative misperceptions of facial stimuli in schizophrenia.

  14. When family looks strange and strangers look normal: a case of impaired face perception and recognition after stroke.

    Science.gov (United States)

    Heutink, Joost; Brouwer, Wiebo H; Kums, Evelien; Young, Andy; Bouma, Anke

    2012-02-01

    We describe a patient (JS) with impaired recognition and distorted visual perception of faces after an ischemic stroke. Strikingly, JS reports that the faces of family members look distorted, while faces of other people look normal. After neurological and neuropsychological examination, we assessed response accuracy, response times, and skin conductance responses on a face recognition task in which photographs of close family members, celebrities and unfamiliar people were presented. JS' performance was compared to the performance of three healthy control participants. Results indicate that three aspects of face perception appear to be impaired in JS. First, she has impaired recognition of basic emotional expressions. Second, JS has poor recognition of familiar faces in general, but recognition of close family members is disproportionally impaired compared to faces of celebrities. Third, JS perceives faces of family members as distorted. In this paper we consider whether these impairments can be interpreted in terms of previously described disorders of face perception and recent models for face perception.

  15. A Massively Parallel Face Recognition System

    Directory of Open Access Journals (Sweden)

    Lahdenoja Olli

    2007-01-01

    Full Text Available We present methods for processing LBPs (local binary patterns) with massively parallel hardware, especially with the CNN-UM (cellular nonlinear network-universal machine). In particular, we present a framework for implementing a massively parallel face recognition system, including a dedicated, highly accurate algorithm suitable for various types of platforms (e.g., CNN-UM and digital FPGA). We study in detail a dedicated mixed-mode implementation of the algorithm and estimate its implementation cost in view of its performance and accuracy restrictions.
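
    For reference, a plain (non-parallel) version of the 8-neighbour LBP that such hardware accelerates is easy to state: each pixel is encoded by thresholding its neighbours against the centre value and packing the results into a byte. This is the generic textbook operator, not the paper's mixed-mode implementation.

    import numpy as np

    def lbp8(img):
        """8-neighbour LBP codes for the interior pixels of a 2-D image."""
        h, w = img.shape
        c = img[1:-1, 1:-1].astype(int)
        # Neighbours in a fixed clockwise order, starting at the top-left.
        shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros_like(c)
        for bit, (dy, dx) in enumerate(shifts):
            neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(int)
            code |= (neigh >= c).astype(int) << bit
        return code

    face = np.random.default_rng(4).integers(0, 256, (8, 8), dtype=np.uint8)
    print(lbp8(face))  # a histogram of these codes gives the face descriptor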

  16. A Massively Parallel Face Recognition System

    Directory of Open Access Journals (Sweden)

    Ari Paasio

    2006-12-01

    Full Text Available We present methods for processing LBPs (local binary patterns) with massively parallel hardware, especially with the CNN-UM (cellular nonlinear network-universal machine). In particular, we present a framework for implementing a massively parallel face recognition system, including a dedicated, highly accurate algorithm suitable for various types of platforms (e.g., CNN-UM and digital FPGA). We study in detail a dedicated mixed-mode implementation of the algorithm and estimate its implementation cost in view of its performance and accuracy restrictions.

  17. 2DPCA versus PCA for face recognition

    Institute of Scientific and Technical Information of China (English)

    HU Jian-jun; TAN Guan-zheng; LUAN Feng-gang; A. S. M. LIBDA

    2015-01-01

    Dimensionality reduction methods play an important role in face recognition. Principal component analysis (PCA) and two-dimensional principal component analysis (2DPCA) are two important methods in this field. Recent research suggests that the 2DPCA method is superior to PCA. To test whether this conclusion always holds, a comprehensive comparison study between the PCA and 2DPCA methods was carried out. A novel concept, called column-image difference (CID), was proposed to analyze the difference between PCA and 2DPCA in theory. It was found that 2DPCA outperforms PCA only under certain restrictive conditions. After the theoretical analysis, experiments were conducted on four well-known face image databases. The experimental results confirm the validity of the theoretical claim.
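
    The core computational difference is easy to make concrete. In 2DPCA, the image covariance matrix G = mean((A - Abar)^T (A - Abar)) is built from image matrices directly, so G is only width x width, whereas PCA flattens each image and works with a (h*w) x (h*w) covariance. A minimal sketch on toy data (not the paper's experiments):

    import numpy as np

    def two_dpca(images, k):
        """Top-k 2DPCA projection axes and the projected feature matrices."""
        A = np.asarray(images, dtype=float)           # shape (N, h, w)
        D = A - A.mean(axis=0)
        # Image covariance: mean over images of (A - Abar)^T (A - Abar).
        G = np.einsum('nij,nik->jk', D, D) / len(A)   # shape (w, w)
        _, vecs = np.linalg.eigh(G)                   # ascending eigenvalues
        X = vecs[:, ::-1][:, :k]                      # top-k axes, (w, k)
        return X, A @ X                               # features: (N, h, k)

    rng = np.random.default_rng(5)
    faces = rng.normal(size=(30, 112, 92))            # ORL-sized toy images
    X, feats = two_dpca(faces, k=5)
    print(feats.shape)                                # (30, 112, 5)

    Classification then typically compares two feature matrices by summing the Euclidean distances between their corresponding feature columns.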

  18. Face Recognition using Segmental Euclidean Distance

    Directory of Open Access Journals (Sweden)

    Farrukh Sayeed

    2011-09-01

    Full Text Available In this paper an attempt has been made to detect the face using a combination of the integral image and a cascade-structured classifier built with the AdaBoost learning algorithm. The detected faces are then passed through a filtering process to discard non-face regions. They are individually split into five segments consisting of forehead, eyes, nose, mouth and chin. Each segment is treated as a separate image, and eigenface, also called principal component analysis (PCA), features of each segment are computed. Faces having a slight pose are also aligned for proper segmentation. The test image is segmented similarly and its PCA features are found. A segmental Euclidean distance classifier is used for matching the test image with the stored ones. The success rate is 88 per cent on the CG (full) database created from the databases of the California Institute and the Georgia Institute; however, the performance of this approach on the ORL (full) database with the same features is only 70 per cent. For the sake of comparison, DCT and fuzzy features are tried on the CG and ORL databases, but using a well-known classifier, the support vector machine (SVM). Recognition rates with DCT features on the SVM classifier increase by 3 per cent over those due to PCA features and the Euclidean distance classifier on the CG database, and recognition improves to 96 per cent with fuzzy features on the ORL database with SVM. Defence Science Journal, 2011, 61(5), pp. 431-442, DOI: http://dx.doi.org/10.14429/dsj.61.1178
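
    A hedged sketch of the matching stage described above: the face is cut into five horizontal bands standing in for forehead, eyes, nose, mouth and chin (equal-height bands are an assumption; the paper segments anatomically), PCA features are computed per band, and a probe is matched by the summed per-segment Euclidean distance.

    import numpy as np
    from sklearn.decomposition import PCA

    def split_bands(face, n_bands=5):
        return np.array_split(face, n_bands, axis=0)   # top-to-bottom strips

    def band_features(faces, n_components=10):
        """Fit one PCA per band over the gallery; return models and features."""
        models, feats = [], []
        for b in range(5):
            band = np.stack([split_bands(f)[b].ravel() for f in faces])
            pca = PCA(n_components=n_components).fit(band)
            models.append(pca)
            feats.append(pca.transform(band))
        return models, feats

    def match(probe, models, feats):
        """Index of the gallery face with the smallest summed band distance."""
        total = 0.0
        for b, (pca, gal) in enumerate(zip(models, feats)):
            p = pca.transform(split_bands(probe)[b].ravel()[None, :])
            total = total + np.linalg.norm(gal - p, axis=1)
        return int(np.argmin(total))

    rng = np.random.default_rng(6)
    gallery = [rng.normal(size=(100, 80)) for _ in range(20)]
    models, feats = band_features(gallery)
    print(match(gallery[3] + 0.05 * rng.normal(size=(100, 80)), models, feats))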

  19. Study of Different Face Recognition Algorithms and Challenges

    Directory of Open Access Journals (Sweden)

    Uma Shankar Kurmi

    2014-03-01

    Full Text Available At present face recognition has a wide area of applications, such as security and law enforcement. Imaging conditions, orientation, pose and the presence of occlusion are major problems associated with face recognition, and the performance of face recognition systems decreases because of them. Linear Discriminant Analysis (LDA) or Principal Components Analysis (PCA) is used to obtain better recognition results. The human face contains relevant information that can be extracted from the face model developed by the PCA technique. The Principal Components Analysis method uses the eigenface approach to describe face image variation. A face recognition technique that is robust to all situations is not available: some techniques are better in the case of illumination, some for the pose problem and some for the occlusion problem. This paper presents some algorithms for face recognition.

  20. Face recognition from a moving platform via sparse representation

    Science.gov (United States)

    Hsu, Ming Kai; Hsu, Charles; Lee, Ting N.; Szu, Harold

    2012-06-01

    A video-based surveillance system for passengers includes face detection, face tracking and face recognition. In general, the final recognition result of a video-based surveillance system is determined by the cumulative recognition results, and under this strategy the correctness of face tracking plays an important role in the system recognition rate. The challenge of face tracking on a moving platform is that the spatial and temporal information used by conventional face tracking algorithms may be lost; consequently, conventional algorithms can barely handle face tracking on a moving platform. In this paper, we have verified the state-of-the-art technologies for face detection, face tracking and face recognition on a moving platform. At the same time, we also propose a new strategy for face tracking on a moving platform, or face tracking under a very low frame rate. The steps of the new strategy are: (1) classify the detected faces over a certain period instead of every frame; (2) track each passenger by reconstructing the time order of that period for each passenger. If the cumulative recognition results are the only part needed for the surveillance system, step 2 can be skipped. In addition, if additional information about the passengers is required, such as path tracking, lip reading or gesture recognition, the time-order reconstruction in step 2 can provide the information required.
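
    The sparse-representation principle named in the title can be sketched compactly: a probe face is coded as a sparse combination of gallery faces and assigned to the class whose atoms best reconstruct it. Orthogonal matching pursuit is used here as the sparse solver (an illustrative choice; l1 minimization is also common), on toy vectors rather than surveillance footage.

    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(7)
    n_classes, per_class, dim = 5, 8, 300
    # Toy gallery: each class clusters around its own prototype face vector.
    protos = rng.normal(size=(n_classes, dim))
    gallery = np.vstack([protos[c] + 0.3 * rng.normal(size=(per_class, dim))
                         for c in range(n_classes)])
    labels = np.repeat(np.arange(n_classes), per_class)
    D = (gallery / np.linalg.norm(gallery, axis=1, keepdims=True)).T  # atoms

    probe = protos[2] + 0.3 * rng.normal(size=dim)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10,
                                    fit_intercept=False).fit(D, probe)
    x = omp.coef_

    # Class-wise residuals: keep only the coefficients of one class at a time.
    residuals = [np.linalg.norm(probe - D @ np.where(labels == c, x, 0.0))
                 for c in range(n_classes)]
    print(int(np.argmin(residuals)))   # expected: class 2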

  1. Development of Perceptual Expertise in Emotion Recognition

    Science.gov (United States)

    Pollak, Seth D.; Messner, Michael; Kistler, Doris J.; Cohn, Jeffrey F.

    2009-01-01

    How do children's early social experiences influence their perception of emotion-specific information communicated by the face? To examine this question, we tested a group of abused children who had been exposed to extremely high levels of parental anger expression and physical threat. Children were presented with arrays of stimuli that depicted…

  2. Impairments in negative emotion recognition and empathy for pain in Huntington's disease families.

    Science.gov (United States)

    Baez, Sandra; Herrera, Eduar; Gershanik, Oscar; Garcia, Adolfo M; Bocanegra, Yamile; Kargieman, Lucila; Manes, Facundo; Ibanez, Agustin

    2015-02-01

    Lack of empathy and emotional disturbances are prominent clinical features of Huntington's disease (HD). While emotion recognition impairments in HD patients are well established, there are no experimental designs assessing empathy in this population. The present study seeks to cover such a gap in the literature. Eighteen manifest HD patients, 19 first-degree asymptomatic relatives, and 36 healthy control participants completed two emotion-recognition tasks with different levels of contextual dependence. They were also evaluated with an empathy-for-pain task tapping the perception of intentional and accidental harm. Moreover, we explored potential associations among empathy, emotion recognition, and other relevant factors - e.g., executive functions (EF). The results showed that both HD patients and asymptomatic relatives are impaired in the recognition of negative emotions from isolated faces. However, their performance in emotion recognition was normal in the presence of contextual cues. HD patients also showed subtle empathy impairments. There were no significant correlations between EF, empathy, and emotion recognition measures in either HD patients or relatives. In controls, EF was positively correlated with emotion recognition. Furthermore, emotion recognition was positively correlated with the performance in the empathy task. Our findings highlight the preserved cognitive abilities in HD families when using more ecological tasks displaying emotional expressions in the context in which they typically appear. Moreover, our results suggest that emotion recognition impairments may constitute a potential biomarker of HD onset and progression. These results contribute to the understanding of emotion recognition and empathy deficits observed in HD and have important theoretical and clinical implications.

  3. The right place at the right time: priming facial expressions with emotional face components in developmental visual agnosia.

    Science.gov (United States)

    Aviezer, Hillel; Hassin, Ran R; Perry, Anat; Dudarev, Veronica; Bentin, Shlomo

    2012-04-01

    The current study examined the nature of deficits in emotion recognition from facial expressions in case LG, an individual with a rare form of developmental visual agnosia (DVA). LG presents with profoundly impaired recognition of facial expressions, yet the underlying nature of his deficit remains unknown. During typical face processing, normal sighted individuals extract information about expressed emotions from face regions with activity diagnostic for specific emotion categories. Given LG's impairment, we sought to shed light on his emotion perception by examining if priming facial expressions with diagnostic emotional face components would facilitate his recognition of the emotion expressed by the face. LG and control participants matched isolated face components with components appearing in a subsequently presented full-face and then categorized the face's emotion. Critically, the matched components were from regions which were diagnostic or non-diagnostic of the emotion portrayed by the full face. In experiment 1, when the full faces were briefly presented (150 ms), LG's performance was strongly influenced by the diagnosticity of the components: his emotion recognition was boosted within normal limits when diagnostic components were used and was obliterated when non-diagnostic components were used. By contrast, in experiment 2, when the face-exposure duration was extended (2000 ms), the beneficial effect of the diagnostic matching was diminished as was the detrimental effect of the non-diagnostic matching. These data highlight the impact of diagnostic facial features in normal expression recognition and suggest that impaired emotion recognition in DVA results from deficient visual integration across diagnostic face components.

  4. Face Recognition by Metropolitan Police Super-Recognisers

    OpenAIRE

    2016-01-01

    Face recognition is used to prove identity across a wide variety of settings. Despite this, research consistently shows that people are typically rather poor at matching faces to photos. Some professional groups, such as police and passport officers, have been shown to perform just as poorly as the general public on standard tests of face recognition. However, face recognition skills are subject to wide individual variation, with some people showing exceptional ability-a group that has come t...

  5. Individual differences in holistic processing predict face recognition ability.

    Science.gov (United States)

    Wang, Ruosi; Li, Jingguang; Fang, Huizhen; Tian, Moqian; Liu, Jia

    2012-02-01

    Why do some people recognize faces easily and others frequently make mistakes in recognizing faces? Classic behavioral work has shown that faces are processed in a distinctive holistic manner that is unlike the processing of objects. In the study reported here, we investigated whether individual differences in holistic face processing have a significant influence on face recognition. We found that the magnitude of face-specific recognition accuracy correlated with the extent to which participants processed faces holistically, as indexed by the composite-face effect and the whole-part effect. This association is due to face-specific processing in particular, not to a more general aspect of cognitive processing, such as general intelligence or global attention. This finding provides constraints on computational models of face recognition and may elucidate mechanisms underlying cognitive disorders, such as prosopagnosia and autism, that are associated with deficits in face recognition.

  6. Superior recognition performance for happy masked and unmasked faces in both younger and older adults.

    Directory of Open Access Journals (Sweden)

    Joakim Svärd

    2012-11-01

    Full Text Available In the aging literature it has been shown that even though emotion recognition performance decreases with age, the decrease is less for happiness than other facial expressions. Studies in younger adults have also revealed that happy faces are more strongly attended to and better recognized than other emotional facial expressions. Thus, there might be a more age-independent happy face advantage in facial expression recognition. By using a backward masking paradigm and varying stimulus onset asynchronies (17–267 ms), the temporal development of a happy face advantage, on a continuum from low to high levels of visibility, was examined in younger and older adults. Results showed that across age groups, recognition performance for happy faces was better than for neutral and fearful faces at durations longer than 50 ms. Importantly, the results showed a happy face advantage already during early processing of emotional faces in both younger and older adults. This advantage is discussed in terms of processing of salient perceptual features and elaborative processing of the happy face. We also investigated the combined effect of age and neuroticism on emotional face processing. The rationale was previous findings of age-related differences in physiological arousal to emotional pictures and a relation between arousal and neuroticism. Across all durations, there was an interaction between age and neuroticism, showing that being high in neuroticism might be disadvantageous for younger, but not older, adults’ emotion recognition performance during arousal-enhancing tasks. These results indicate that there is a relation between aging, neuroticism, and performance, potentially related to physiological arousal.

  7. Temporal Lobe Structures and Facial Emotion Recognition in Schizophrenia Patients and Nonpsychotic Relatives

    Science.gov (United States)

    Goghari, Vina M.; MacDonald, Angus W.; Sponheim, Scott R.

    2011-01-01

    Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions. PMID:20484523

  8. Impaired face recognition is associated with social inhibition.

    Science.gov (United States)

    Avery, Suzanne N; VanDerKlok, Ross M; Heckers, Stephan; Blackford, Jennifer U

    2016-02-28

    Face recognition is fundamental to successful social interaction. Individuals with deficits in face recognition are likely to have social functioning impairments that may lead to heightened risk for social anxiety. A critical component of social interaction is how quickly a face is learned during initial exposure to a new individual. Here, we used a novel Repeated Faces task to assess how quickly memory for faces is established. Face recognition was measured over multiple exposures in 52 young adults ranging from low to high in social inhibition, a core dimension of social anxiety. High social inhibition was associated with a smaller slope of change in recognition memory over repeated face exposure, indicating participants with higher social inhibition showed smaller improvements in recognition memory after seeing faces multiple times. We propose that impaired face learning is an important mechanism underlying social inhibition and may contribute to, or maintain, social anxiety.

  9. Fearful contextual expression impairs the encoding and recognition of target faces: an ERP study

    Directory of Open Access Journals (Sweden)

    Huiyan Lin

    2015-09-01

    Full Text Available Previous event-related potential (ERP) studies have shown that the N170 to faces is modulated by the emotion of the face and its context. However, it is unclear how the encoding of emotional target faces as reflected in the N170 is modulated by the preceding contextual facial expression when temporal onset and identity of target faces are unpredictable. In addition, no study as yet has investigated whether contextual facial expression modulates later recognition of target faces. To address these issues, participants in the present study were asked to identify target faces (fearful or neutral) that were presented after a sequence of fearful or neutral contextual faces. The number of sequential contextual faces was random and contextual and target faces were of different identities so that temporal onset and identity of target faces were unpredictable. Electroencephalography (EEG) data was recorded during the encoding phase. Subsequently, participants had to perform an unexpected old/new recognition task in which target face identities were presented in either the encoded or the non-encoded expression. ERP data showed a reduced N170 to target faces in fearful as compared to neutral context regardless of target facial expression. In the later recognition phase, recognition rates were reduced for target faces in the encoded expression when they had been encountered in fearful as compared to neutral context. The present findings suggest that fearful compared to neutral contextual faces reduce the allocation of attentional resources towards target faces, which results in limited encoding and recognition of target faces.

  10. Fearful contextual expression impairs the encoding and recognition of target faces: an ERP study

    Science.gov (United States)

    Lin, Huiyan; Schulz, Claudia; Straube, Thomas

    2015-01-01

    Previous event-related potential (ERP) studies have shown that the N170 to faces is modulated by the emotion of the face and its context. However, it is unclear how the encoding of emotional target faces as reflected in the N170 is modulated by the preceding contextual facial expression when temporal onset and identity of target faces are unpredictable. In addition, no study as yet has investigated whether contextual facial expression modulates later recognition of target faces. To address these issues, participants in the present study were asked to identify target faces (fearful or neutral) that were presented after a sequence of fearful or neutral contextual faces. The number of sequential contextual faces was random and contextual and target faces were of different identities so that temporal onset and identity of target faces were unpredictable. Electroencephalography (EEG) data was recorded during the encoding phase. Subsequently, participants had to perform an unexpected old/new recognition task in which target face identities were presented in either the encoded or the non-encoded expression. ERP data showed a reduced N170 to target faces in fearful as compared to neutral context regardless of target facial expression. In the later recognition phase, recognition rates were reduced for target faces in the encoded expression when they had been encountered in fearful as compared to neutral context. The present findings suggest that fearful compared to neutral contextual faces reduce the allocation of attentional resources towards target faces, which results in limited encoding and recognition of target faces. PMID:26388751

  11. Random-Profiles-Based 3D Face Recognition System

    Directory of Open Access Journals (Sweden)

    Joongrock Kim

    2014-03-01

    Full Text Available In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50k 3D points and that the method achieves a reliable recognition rate under pose variation.
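
    To make profile-based matching concrete, the sketch below treats a profile as the depth curve sampled along one row of a range image and matches a probe by nearest Euclidean distance to gallery profiles taken at the same row. The row sampling and distance measure are assumptions for illustration; the paper derives its profiles from line-laser scans.

    import numpy as np

    rng = np.random.default_rng(9)

    def row_profile(depth, row):
        """One 'profile': the depth curve along a given row of a range image."""
        return depth[row]

    def match_profile(probe_profile, gallery, row):
        """Index of the nearest gallery identity for a profile at this row."""
        d = [np.linalg.norm(row_profile(g, row) - probe_profile)
             for g in gallery]
        return int(np.argmin(d))

    gallery = [rng.normal(size=(120, 100)) for _ in range(10)]  # toy depth maps
    row = int(rng.integers(0, 120))                             # random profile
    probe = gallery[4] + 0.05 * rng.normal(size=(120, 100))
    print(match_profile(row_profile(probe, row), gallery, row)) # expected: 4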

  12. Emotion Recognition in Children with Autism Spectrum Disorders: Relations to Eye Gaze and Autonomic State

    Science.gov (United States)

    Bal, Elgiz; Harden, Emily; Lamb, Damon; Van Hecke, Amy Vaughan; Denver, John W.; Porges, Stephen W.

    2010-01-01

    Respiratory Sinus Arrhythmia (RSA), heart rate, and accuracy and latency of emotion recognition were evaluated in children with autism spectrum disorders (ASD) and typically developing children while viewing videos of faces slowly transitioning from a neutral expression to one of six basic emotions (e.g., anger, disgust, fear, happiness, sadness,…

  13. 3D face recognition algorithm based on detecting reliable components

    Institute of Scientific and Technical Information of China (English)

    Huang Wenjun; Zhou Xuebing; Niu Xiamu

    2007-01-01

    Fisherfaces algorithm is a popular method for face recognition. However, there exist some unstable components that degrade recognition performance. In this paper, we propose a method based on detecting reliable components to overcome the problem and introduce it to 3D face recognition. The reliable components are detected within the binary feature vector, which is generated from the Fisherfaces feature vector based on statistical properties, and is used for 3D face recognition as the final feature vector. Experimental results show that the reliable components feature vector is much more effective than the Fisherfaces feature vector for face recognition.
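
    One way to make the reliable-components idea concrete is sketched below: project faces with a Fisher discriminant, binarise each component by its sign, and keep only the components whose bits rarely flip within a class. scikit-learn's LDA stands in for the Fisherfaces projection, and the stability statistic is an illustrative choice, not the paper's.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(8)
    X = rng.normal(size=(60, 50))                 # 60 scans, 50-D features
    y = np.repeat(np.arange(6), 10)               # 6 subjects, 10 scans each

    F = LinearDiscriminantAnalysis(n_components=5).fit(X, y).transform(X)
    B = F > 0                                     # binary feature vectors

    # A component is reliable if, within every class, its bit rarely flips.
    stability = np.array([
        min(max(B[y == c, j].mean(), 1 - B[y == c, j].mean())
            for c in range(6))
        for j in range(B.shape[1])
    ])
    print(stability > 0.8)   # mask of components kept for matching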

  14. Comparison of Emotion Recognition and Mind Reading Abilities in Opium Abusers and Healthy Matched Individuals

    Directory of Open Access Journals (Sweden)

    Vahid Nejati

    2012-05-01

    Full Text Available Introduction: The purpose of this study is to compare emotion recognition and mind reading in opium abusers and healthy individuals. Method: In this causal-comparative study, using a non-probability sampling method, 30 opium abusers were compared with 30 healthy individuals matched for sex and education. Neurocognitive tests of reading the mind in the eyes and emotion recognition from faces were used for evaluation. An independent t-test was used for analysis. Findings: The results showed that opium abusers had significantly lower mind-reading abilities than healthy matched individuals. Opium abusers also had significantly lower performance in recognizing the emotional expressions of happy, sad and angry faces. Conclusion: Based on the weak mind-reading and emotion-recognition performance of addicts, it is advised that social cognition evaluation be considered in the assessment of drug abusers. Future interventional studies could propose social cognition rehabilitation programs for addicts.

  15. Prevalence of face recognition deficits in middle childhood.

    Science.gov (United States)

    Bennetts, Rachel J; Murray, Ebony; Boyce, Tian; Bate, Sarah

    2017-02-01

    Approximately 2-2.5% of the adult population is believed to show severe difficulties with face recognition, in the absence of any neurological injury-a condition known as developmental prosopagnosia (DP). However, to date no research has attempted to estimate the prevalence of face recognition deficits in children, possibly because there are very few child-friendly, well-validated tests of face recognition. In the current study, we examined face and object recognition in a group of primary school children (aged 5-11 years), to establish whether our tests were suitable for children and to provide an estimate of face recognition difficulties in children. In Experiment 1 (n = 184), children completed a pre-existing test of child face memory, the Cambridge Face Memory Test-Kids (CFMT-K), and a bicycle test with the same format. In Experiment 2 (n = 413), children completed three-alternative forced-choice matching tasks with faces and bicycles. All tests showed good psychometric properties. The face and bicycle tests were well matched for difficulty and showed a similar developmental trajectory. Neither the memory nor the matching tests were suitable to detect impairments in the youngest groups of children, but both tests appear suitable to screen for face recognition problems in middle childhood. In the current sample, 1.2-5.2% of children showed difficulties with face recognition; 1.2-4% showed face-specific difficulties-that is, poor face recognition with typical object recognition abilities. This is somewhat higher than previous adult estimates: It is possible that face matching tests overestimate the prevalence of face recognition difficulties in children; alternatively, some children may "outgrow" face recognition difficulties.

  16. Aging and the perception of emotion: processing vocal expressions alone and with faces.

    Science.gov (United States)

    Ryan, Melissa; Murray, Janice; Ruffman, Ted

    2010-01-01

    This study investigated whether the difficulties older adults experience when recognizing specific emotions from facial expressions also occur with vocal expressions of emotion presented in isolation or in combination with facial expressions. When matching vocal expressions of six emotions to emotion labels, older adults showed worse performance on sadness and anger. When matching vocal expressions to facial expressions, older adults showed worse performance on sadness, anger, happiness, and fear. Older adults' poorer performance when matching faces to voices was independent of declines in fluid ability. Results are interpreted with reference to the neuropsychology of emotion recognition and the aging brain.

  17. Comparison of Emotion Recognition from Facial Expression and Music

    OpenAIRE

    Gašpar, Tina; Labor, Marina; Jurić, Iva; Dumančić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves interpretation of different visual and auditory clues. The ability to recognize emotions is not clearly determined as their presentation is usually very short (micro expressions), whereas the recognition itself does not have to be a conscious process. We assumed that the recognition from facial expressions is selected over the recognition of emotions communicated through music. In order to compare the success rate in recogni...

  18. Brain correlates of musical and facial emotion recognition: evidence from the dementias.

    Science.gov (United States)

    Hsieh, S; Hornberger, M; Piguet, O; Hodges, J R

    2012-07-01

    The recognition of facial expressions of emotion is impaired in semantic dementia (SD) and is associated with right-sided brain atrophy in areas known to be involved in emotion processing, notably the amygdala. Whether patients with SD also experience difficulty recognizing emotions conveyed by other media, such as music, is unclear. Prior studies have used excerpts of known music from classical or film repertoire but not unfamiliar melodies designed to convey distinct emotions. Patients with SD (n = 11), Alzheimer's disease (n = 12) and healthy control participants (n = 20) underwent tests of emotion recognition in two modalities: unfamiliar musical tunes and unknown faces as well as volumetric MRI. Patients with SD were most impaired with the recognition of facial and musical emotions, particularly for negative emotions. Voxel-based morphometry showed that the labelling of emotions, regardless of modality, correlated with the degree of atrophy in the right temporal pole, amygdala and insula. The recognition of musical (but not facial) emotions was also associated with atrophy of the left anterior and inferior temporal lobe, which overlapped with regions correlating with standardized measures of verbal semantic memory. These findings highlight the common neural substrates supporting the processing of emotions by facial and musical stimuli but also indicate that the recognition of emotions from music draws upon brain regions that are associated with semantics in language. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Modulation of the composite face effect by unintended emotion cues.

    Science.gov (United States)

    Gray, Katie L H; Murphy, Jennifer; Marsh, Jade E; Cook, Richard

    2017-04-01

    When upper and lower regions from different emotionless faces are aligned to form a facial composite, observers 'fuse' the two halves together, perceptually. The illusory distortion induced by task-irrelevant ('distractor') halves hinders participants' judgements about task-relevant ('target') halves. This composite-face effect reveals a tendency to integrate feature information from disparate regions of intact upright faces, consistent with theories of holistic face processing. However, observers frequently perceive emotion in ostensibly neutral faces, contrary to the intentions of experimenters. This study sought to determine whether this 'perceived emotion' influences the composite-face effect. In our first experiment, we confirmed that the composite effect grows stronger as the strength of distractor emotion increased. Critically, effects of distractor emotion were induced by weak emotion intensities, and were incidental insofar as emotion cues hindered image matching, not emotion labelling per se. In Experiment 2, we found a correlation between the presence of perceived emotion in a set of ostensibly neutral distractor regions sourced from commonly used face databases, and the strength of illusory distortion they induced. In Experiment 3, participants completed a sequential matching composite task in which half of the distractor regions were rated high and low for perceived emotion, respectively. Significantly stronger composite effects were induced by the high-emotion distractor halves. These convergent results suggest that perceived emotion increases the strength of the composite-face effect induced by supposedly emotionless faces. These findings have important implications for the study of holistic face processing in typical and atypical populations.

  1. Effects of pose and image resolution on automatic face recognition

    NARCIS (Netherlands)

    Mahmood, Zahid; Ali, Tauseef; Khan, Samee U.

    2015-01-01

    The popularity of face recognition systems has increased due to their use in widespread applications. Driven by the enormous number of potential application domains, several algorithms have been proposed for face recognition. Face pose and image resolution are two important factors that

  2. Direct Gaze Modulates Face Recognition in Young Infants

    Science.gov (United States)

    Farroni, Teresa; Massaccesi, Stefano; Menon, Enrica; Johnson, Mark H.

    2007-01-01

    From birth, infants prefer to look at faces that engage them in direct eye contact. In adults, direct gaze is known to modulate the processing of faces, including the recognition of individuals. In the present study, we investigate whether direction of gaze has any effect on face recognition in four-month-old infants. Four-month infants were shown…

  3. Expression modeling for expression-invariant face recognition

    NARCIS (Netherlands)

    Haar, F.B. Ter; Veltkamp, R.C.

    2010-01-01

    Morphable face models have proven to be an effective tool for 3D face modeling and face recognition, but the extension to 3D face scans with expressions is still a challenge. The two main difficulties are (1) how to build a new morphable face model that deals with expressions, and (2) how to fit thi

  4. Conscious and Non-conscious Representations of Emotional Faces in Asperger's Syndrome.

    Science.gov (United States)

    Chien, Vincent S C; Tsai, Arthur C; Yang, Han Hsuan; Tseng, Yi-Li; Savostyanov, Alexander N; Liou, Michelle

    2016-07-31

    Several neuroimaging studies have suggested that the low spatial frequency content in an emotional face mainly activates the amygdala, pulvinar, and superior colliculus especially with fearful faces(1-3). These regions constitute the limbic structure in non-conscious perception of emotions and modulate cortical activity either directly or indirectly(2). In contrast, the conscious representation of emotions is more pronounced in the anterior cingulate, prefrontal cortex, and somatosensory cortex for directing voluntary attention to details in faces(3,4). Asperger's syndrome (AS)(5,6) represents an atypical mental disturbance that affects sensory, affective and communicative abilities, without interfering with normal linguistic skills and intellectual ability. Several studies have found that functional deficits in the neural circuitry important for facial emotion recognition can partly explain social communication failure in patients with AS(7-9). In order to clarify the interplay between conscious and non-conscious representations of emotional faces in AS, an EEG experimental protocol is designed with two tasks involving emotionality evaluation of either photograph or line-drawing faces. A pilot study is introduced for selecting face stimuli that minimize the differences in reaction times and scores assigned to facial emotions between the pretested patients with AS and IQ/gender-matched healthy controls. Information from the pretested patients was used to develop the scoring system used for the emotionality evaluation. Research into facial emotions and visual stimuli with different spatial frequency contents has reached discrepant findings depending on the demographic characteristics of participants and task demands(2). The experimental protocol is intended to clarify deficits in patients with AS in processing emotional faces when compared with healthy controls by controlling for factors unrelated to recognition of facial emotions, such as task difficulty, IQ and

  5. [Neurobiological basis of human recognition of facial emotion].

    Science.gov (United States)

    Mikhaĭlova, E S

    2005-01-01

    In this review of modern data and ideas concerning the neurophysiological mechanisms and morphological foundations of the most essential communicative function of humans and monkeys, the recognition of faces and their emotional expressions, attention is focused on its dynamic realization and structural provision. On the basis of literature data on hemodynamic and metabolic mapping of the brain, the author analyses the role of different zones of the ventral and dorsal visual cortical pathways, the frontal neocortex and the amygdala in facial feature processing, as well as the specificity of this processing at each level. Special attention is given to the modular principle of face processing in the temporal cortex. The dynamic characteristics of face recognition are discussed on the basis of evoked-response data in healthy and diseased humans and in monkeys. Modern evidence on the role of different brain structures in the generation of successive evoked-response waves, in connection with successive stages of facial processing, is analyzed. The similarities and differences between the mechanisms of recognition of faces and of their emotional expressions are also considered.

  6. Familiar smiling faces in Alzheimer's disease: understanding the positivity-related recognition bias.

    Science.gov (United States)

    Werheid, Katja; McDonald, Rebecca S; Simmons-Stern, Nicholas; Ally, Brandon A; Budson, Andrew E

    2011-08-01

    Recent research has revealed a recognition bias favoring positive faces and other stimuli in older compared to younger adults. However, it is yet unclear whether this bias reflects an age-related preference for positive emotional stimuli, or an affirmatory bias used to compensate for episodic memory deficits. To follow up this point, the present study examined recognition of emotional faces and current mood state in patients with mild Alzheimer disease (AD) and healthy controls. Expecting lower overall memory performance, more negative and less positive mood in AD patients, the critical question was whether the positivity-related recognition bias would be increased compared to cognitively unimpaired controls. Eighteen AD patients and 18 healthy controls studied happy, neutral, and angry faces, which in a subsequent recognition task were intermixed with 50% distracter faces. As expected, the patient group showed reduced memory performance, along with a less positive and more negative mood. The recognition bias for positive faces persisted. This pattern supports the view that the positivity-induced recognition bias represents a compensatory, gist-based memory process that is applied when item-based recognition fails.

  7. Testing the effects of expression, intensity and age on emotional face processing in ASD.

    Science.gov (United States)

    Luyster, Rhiannon J; Bick, Johanna; Westerlund, Alissa; Nelson, Charles A

    2017-06-21

    Individuals with autism spectrum disorder (ASD) commonly show global deficits in the processing of facial emotion, including impairments in emotion recognition and slowed processing of emotional faces. Growing evidence has suggested that these challenges may increase with age, perhaps due to minimal improvement with age in individuals with ASD. In the present study, we explored the role of age, emotion type and emotion intensity in face processing for individuals with and without ASD. Twelve-year-old and 18-22-year-old participants with and without ASD took part. No significant diagnostic group differences were observed on behavioral measures of emotion processing for younger versus older individuals with and without ASD. However, there were significant group differences in neural responses to emotional faces. Relative to TD participants, individuals with ASD showed a slower N170 to emotional faces both at 12 years of age and in adulthood. While the TD group's P1 latency was significantly shorter in adults than in 12-year-olds, there was no significant age-related difference in P1 latency among individuals with ASD. Findings point to potential differences in the maturation of cortical networks that support visual processing (whether of faces or of stimuli more broadly) among individuals with and without ASD between late childhood and adulthood. Finally, associations between ERP amplitudes and behavioral responses on emotion processing tasks suggest possible neural markers for emotional and behavioral deficits among individuals with ASD. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Recognition of emotion in facial expression by people with Prader-Willi syndrome.

    Science.gov (United States)

    Whittington, J; Holland, T

    2011-01-01

    People with Prader-Willi syndrome (PWS) may have mild intellectual impairments but less is known about their social cognition. Most parents/carers report that people with PWS do not have normal peer relationships, although some have older or younger friends. Two specific aspects of social cognition are being able to recognise other people's emotion and to then respond appropriately. In a previous study, mothers/carers thought that 26% of children and 23% of adults with PWS would not respond to others' feelings. They also thought that 64% could recognise happiness, sadness, anger and fear and a further 30% could recognise happiness and sadness. However, reports of emotion recognition and response to emotion were partially dissociated. It was therefore decided to test facial emotion recognition directly. The participants were 58 people of all ages with PWS. They were shown a total of 20 faces, each depicting one of the six basic emotions and asked to say what they thought that person was feeling. The faces were shown one at a time in random order and each was accompanied by a reminder of the six basic emotions. This cohort of people with PWS correctly identified 55% of the different facial emotions. These included 90% of happy faces, 55% each of sad and surprised faces, 43% of disgusted faces, 40% of angry faces and 37% of fearful faces. Genetic subtype differences were found only in the predictors of recognition scores, not in the scores themselves. Selective impairment was found in fear recognition for those with PWS who had had a depressive illness and in anger recognition for those with PWS who had had a psychotic illness. The inability to read facial expressions of emotion is a deficit in social cognition apparent in people with PWS. This may be a contributing factor in their difficulties with peer relationships. © 2010 The Authors. Journal of Intellectual Disability Research © 2010 Blackwell Publishing Ltd.

  10. QUEST Hierarchy for Hyperspectral Face Recognition

    Directory of Open Access Journals (Sweden)

    David M. Ryer

    2012-01-01

    Full Text Available A qualia exploitation of sensor technology (QUEST) motivated architecture using algorithm fusion and adaptive feedback loops for face recognition in hyperspectral imagery (HSI) is presented. QUEST seeks to develop a general-purpose computational intelligence system that captures the beneficial engineering aspects of qualia-based solutions. Qualia-based approaches are constructed from subjective representations and have the ability to detect, distinguish, and characterize entities in the environment. Adaptive feedback loops are implemented that enhance performance by reducing candidate subjects in the gallery and by injecting additional probe images during the matching process. The architecture presented provides a framework for exploring more advanced integration strategies beyond those presented. Algorithmic results and performance improvements are presented as spatial, spectral, and temporal effects are utilized; additionally, a Matlab-based graphical user interface (GUI) is developed to aid processing, track performance, and display results.

  11. Theory of mind and its relationship with executive functions and emotion recognition in borderline personality disorder.

    Science.gov (United States)

    Baez, Sandra; Marengo, Juan; Perez, Ana; Huepe, David; Font, Fernanda Giralt; Rial, Veronica; Gonzalez-Gadea, María Luz; Manes, Facundo; Ibanez, Agustin

    2015-09-01

    Impaired social cognition has been claimed to be a mechanism underlying the development and maintenance of borderline personality disorder (BPD). One important aspect of social cognition is theory of mind (ToM), a complex skill that seems to be influenced by more basic processes, such as executive functions (EF) and emotion recognition. Previous ToM studies in BPD have yielded inconsistent results. This study assessed the performance of BPD adults on ToM, emotion recognition, and EF tasks. We also examined whether EF and emotion recognition could predict performance on ToM tasks. We evaluated 15 adults with BPD and 15 matched healthy controls using different tasks of EF, emotion recognition, and ToM. The results showed that BPD adults exhibited deficits in the three domains, which seem to be task-dependent. Furthermore, we found that EF and emotion recognition predicted performance on ToM. Our results suggest that tasks that involve real-life social scenarios and contextual cues are more sensitive in detecting ToM and emotion recognition deficits in BPD individuals. Our findings also indicate that (a) ToM variability in BPD is partially explained by individual differences in EF and emotion recognition; and (b) the ToM deficits of BPD patients are partially explained by the capacity to integrate cues from the face, prosody, gesture, and social context to identify others' emotions and beliefs.

  12. Facial expression influences face identity recognition during the attentional blink.

    Science.gov (United States)

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second targets (T1, T2). They demonstrate reduced neutral T2 identity recognition after an angry or happy T1 expression, compared to a neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after a neutral T1, T2 identity recognition was enhanced rather than suppressed when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppressed memory access for competing objects, but only angry expressions enjoyed privileged memory access. This could imply that these two processes are relatively independent of one another.

  13. Facial emotion recognition impairments in individuals with HIV.

    Science.gov (United States)

    Clark, Uraina S; Cohen, Ronald A; Westbrook, Michelle L; Devlin, Kathryn N; Tashima, Karen T

    2010-11-01

    Human immunodeficiency virus (HIV) infection, characterized by frontostriatal dysfunction, is associated with cognitive and psychiatric abnormalities. Several studies have noted impaired facial emotion recognition abilities in patient populations that demonstrate frontostriatal dysfunction; however, facial emotion recognition abilities have not been systematically examined in HIV patients. The current study investigated facial emotion recognition in 50 nondemented HIV-seropositive adults and 50 control participants relative to their performance on a nonemotional landscape-categorization control task. We examined the relation of HIV-disease factors (nadir and current CD4 levels) to emotion recognition abilities and assessed the psychosocial impact of emotion recognition abnormalities. Compared to control participants, HIV patients performed normally on the control task but demonstrated significant impairments in facial emotion recognition, specifically for fear. HIV patients reported greater psychosocial impairments, which correlated with increased emotion recognition difficulties. Lower current CD4 counts were associated with poorer anger recognition. In summary, our results indicate that chronic HIV infection may contribute to emotion processing problems among HIV patients. We suggest that disruptions of frontostriatal structures and their connections with cortico-limbic networks may contribute to emotion recognition abnormalities in HIV. Our findings also highlight the significant psychosocial impact that emotion recognition abnormalities have on individuals with HIV.

  14. AN ADVANCED SCALE INVARIANT FEATURE TRANSFORM ALGORITHM FOR FACE RECOGNITION

    OpenAIRE

    Mohammad Mohsen Ahmadinejad; Elizabeth Sherly

    2016-01-01

    In computer vision, the Scale-Invariant Feature Transform (SIFT) algorithm is widely used to detect and describe local features in images due to its excellent performance. For face recognition, however, applying SIFT is complicated by false key-points detected in irrelevant portions of the image, such as hair style and other background details. This paper proposes an algorithm for face recognition that improves recognition accuracy by selecting relevant SIFT key-points only th...
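
    The key-point selection idea is easy to sketch. The snippet below is a rough illustration, not the paper's implementation: the Haar detector, the largest-face heuristic and the parameter values are assumptions. It computes SIFT descriptors and keeps only those whose key-points fall inside a detected face region, discarding points from hair and background:

    ```python
    import cv2

    sift = cv2.SIFT_create()
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_sift_descriptors(gray):
        """Return SIFT descriptors restricted to the largest detected face."""
        kps, descs = sift.detectAndCompute(gray, None)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if descs is None or len(faces) == 0:
            return []
        x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face box
        # Keep descriptors whose key-point lies inside the face box
        return [d for kp, d in zip(kps, descs)
                if x <= kp.pt[0] <= x + w and y <= kp.pt[1] <= y + h]
    ```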

  15. Impaired emotion recognition in music in Parkinson's disease

    NARCIS (Netherlands)

    van Tricht, M.J.; Smeding, H.M.M.; Speelman, J.D.; Schmand, B.A.

    2010-01-01

    Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson’s disease (PD) and 20 matched healthy volunteers…

  16. Impaired Emotion Recognition in Music in Parkinson's Disease

    Science.gov (United States)

    van Tricht, Mirjam J.; Smeding, Harriet M. M.; Speelman, Johannes D.; Schmand, Ben A.

    2010-01-01

    Music has the potential to evoke strong emotions and plays a significant role in the lives of many people. Music might therefore be an ideal medium to assess emotion recognition. We investigated emotion recognition in music in 20 patients with idiopathic Parkinson's disease (PD) and 20 matched healthy volunteers. The role of cognitive dysfunction…

  17. Social Approach and Emotion Recognition in Fragile X Syndrome

    Science.gov (United States)

    Williams, Tracey A.; Porter, Melanie A.; Langdon, Robyn

    2014-01-01

    Evidence is emerging that individuals with Fragile X syndrome (FXS) display emotion recognition deficits, which may contribute to their significant social difficulties. The current study investigated the emotion recognition abilities, and social approachability judgments, of FXS individuals when processing emotional stimuli. Relative to…

  20. Impact of severity of drug use on discrete emotions recognition in polysubstance abusers.

    Science.gov (United States)

    Fernández-Serrano, María José; Lozano, Oscar; Pérez-García, Miguel; Verdejo-García, Antonio

    2010-06-01

    Neuropsychological studies support the association between severity of drug intake and alterations in specific cognitive domains and neural systems, but there is disproportionately less research on the neuropsychology of emotional alterations associated with addiction. One of the key aspects of adaptive emotional functioning potentially relevant to addiction progression and treatment is the ability to recognize basic emotions in the faces of others. Therefore, the aims of this study were: (i) to examine facial emotion recognition in abstinent polysubstance abusers, and (ii) to explore the association between patterns of quantity and duration of use of several drugs co-abused (including alcohol, cannabis, cocaine, heroin and MDMA) and the ability to identify discrete facial emotional expressions portraying basic emotions. We compared accuracy of emotion recognition of facial expressions portraying six basic emotions (measured with the Ekman Faces Test) between polysubstance abusers (PSA, n=65) and non-drug using comparison individuals (NDCI, n=30), and used regression models to explore the association between quantity and duration of use of the different drugs co-abused and indices of recognition of each of the six emotions, while controlling for relevant socio-demographic and affect-related confounders. Results showed: (i) that PSA had significantly poorer recognition than NDCI for facial expressions of anger, disgust, fear and sadness; (ii) that measures of quantity and duration of drugs used significantly predicted poorer discrete emotions recognition: quantity of cocaine use predicted poorer anger recognition, and duration of cocaine use predicted both poorer anger and fear recognition. Severity of cocaine use also significantly predicted overall recognition accuracy. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  1. Emotion recognition abilities across stimulus modalities in schizophrenia and the role of visual attention.

    Science.gov (United States)

    Simpson, Claire; Pinkham, Amy E; Kelsven, Skylar; Sasson, Noah J

    2013-12-01

    Emotion can be expressed by both the voice and the face, and previous work suggests that presentation modality may impact emotion recognition performance in individuals with schizophrenia. We investigated the effect of stimulus modality on emotion recognition accuracy and the potential role of visual attention to faces in emotion recognition abilities. Thirty-one patients who met DSM-IV criteria for schizophrenia (n=8) or schizoaffective disorder (n=23) and 30 non-clinical control individuals participated. Both groups identified emotional expressions in three different conditions: audio only, visual only, and combined audiovisual. In the visual only and combined conditions, time spent visually fixating salient features of the face was recorded. Patients were significantly less accurate than controls in emotion recognition during both the audio only and visual only conditions but did not differ from controls in the combined condition. Analysis of visual scanning behaviors demonstrated that patients attended less than healthy individuals to the mouth in the visual condition but did not differ in visual attention to salient facial features in the combined condition, which may in part explain the absence of a deficit for patients in this condition. Collectively, these findings demonstrate that patients benefit from multimodal stimulus presentations of emotion and support hypotheses that visual attention to salient facial features may serve as a mechanism for accurate emotion identification.

  2. Positive and negative facial emotional expressions: the effect on infants' and children's facial identity recognition

    OpenAIRE

    Brenna, Viola

    2013-01-01

    The aim of the present study was to investigate the origin and development of the interdependence between identity recognition and facial emotional expression processing, suggested by recent models of face processing (Calder & Young, 2005) and supported by findings in adults (e.g., Baudouin, Gilibert, Sansone, & Tiberghien, 2000; Schweinberger & Soukup, 1998). In particular, the effect of facial emotional expressions on infants' and children's ability to recognize the identity of a face was explored...

  3. Impairment in the recognition of emotion across different media following traumatic brain injury.

    Science.gov (United States)

    Williams, Claire; Wood, Rodger Ll

    2010-02-01

    The current study examined emotion recognition following traumatic brain injury (TBI) and examined whether performance differed according to the affective valence and type of media presentation of the stimuli. A total of 64 patients with TBI and matched controls completed the Emotion Evaluation Test (EET) and Ekman 60 Faces Test (E-60-FT). Patients with TBI also completed measures of information processing and verbal ability. Results revealed that the TBI group were significantly impaired compared to controls when recognizing emotion on the EET and E-60-FT. A significant main effect of valence was found in both groups, with poor recognition of negative emotions. However, the difference between the recognition of positive and negative emotions was larger in the TBI group. The TBI group were also more accurate recognizing emotion displayed in audiovisual media (EET) than that displayed in still media (E-60-FT). No significant relationship was obtained between emotion recognition tasks and information-processing speed. A significant positive relationship was found between the E-60-FT and one measure of verbal ability. These findings support models of emotion that specify separate neurological pathways for certain emotions and different media and confirm that patients with TBI are vulnerable to experiencing emotion recognition difficulties.

  4. Collaborative Representation based Classification for Face Recognition

    CERN Document Server

    Zhang, Lei; Feng, Xiangchu; Ma, Yi; Zhang, David

    2012-01-01

    By coding a query sample as a sparse linear combination of all training samples and then classifying it by evaluating which class leads to the minimal coding residual, sparse representation based classification (SRC) achieves interesting results for robust face recognition. It is widely believed that the l1-norm sparsity constraint on the coding coefficients plays a key role in the success of SRC, while its use of all training samples to collaboratively represent the query sample is largely overlooked. In this paper we discuss how SRC works and show that the collaborative representation mechanism used in SRC is much more crucial to its success in face classification. SRC is a special case of collaborative representation based classification (CRC), which has various instantiations obtained by applying different norms to the coding residual and coding coefficients. More specifically, the l1 or l2 norm characterization of the coding residual is related to the robustness of CRC to outlier facial pixels, while the l1 or l2 norm c...
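
    Because the l2-regularized coding used by CRC has a closed-form solution, the mechanism is easy to sketch. The numpy snippet below follows the regularized least-squares variant of CRC described above; the regularization weight and the residual normalization are illustrative assumptions rather than the paper's exact settings:

    ```python
    import numpy as np

    def crc_classify(X, labels, y, lam=1e-3):
        """X: (d, n) matrix whose columns are training face vectors;
        labels: (n,) class id per column; y: (d,) query vector."""
        n = X.shape[1]
        # Code y over ALL training samples: argmin ||y - X a||^2 + lam ||a||^2
        alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
        best_class, best_score = None, np.inf
        for c in np.unique(labels):
            mask = labels == c
            # Class-wise coding residual, normalized by coefficient energy
            score = (np.linalg.norm(y - X[:, mask] @ alpha[mask])
                     / np.linalg.norm(alpha[mask]))
            if score < best_score:
                best_class, best_score = c, score
        return best_class
    ```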

  5. Theory of mind and recognition of facial emotion in dementia: challenge to current concepts.

    Science.gov (United States)

    Freedman, Morris; Binns, Malcolm A; Black, Sandra E; Murphy, Cara; Stuss, Donald T

    2013-01-01

    Current literature suggests that theory of mind (ToM) and recognition of facial emotion are impaired in behavioral variant frontotemporal dementia (bvFTD). In contrast, studies suggest that ToM is spared in Alzheimer disease (AD). However, there is controversy whether recognition of emotion in faces is impaired in AD. This study challenges the concepts that ToM is preserved in AD and that recognition of facial emotion is impaired in bvFTD. ToM, recognition of facial emotion, and identification of emotions associated with video vignettes were studied in bvFTD, AD, and normal controls. ToM was assessed using false-belief and visual perspective-taking tasks. Identification of facial emotion was tested using Ekman and Friesen's pictures of facial affect. After adjusting for relevant covariates, there were significant ToM deficits in bvFTD and AD compared with controls, whereas neither group was impaired in the identification of emotions associated with video vignettes. There was borderline impairment in recognizing angry faces in bvFTD. Patients with AD showed significant deficits on false belief and visual perspective taking, and bvFTD patients were impaired on second-order false belief. We report novel findings challenging the concepts that ToM is spared in AD and that recognition of facial emotion is impaired in bvFTD.

  6. A Multidimensional Approach to the Study of Emotion Recognition in Autism Spectrum Disorders

    Directory of Open Access Journals (Sweden)

    Jean eXavier

    2015-12-01

    Full Text Available Although deficits in emotion recognition have been widely reported in Autism Spectrum Disorder (ASD), experiments have been restricted to either facial or vocal expressions. Here, we explored multimodal emotion processing in children with ASD (N=19) and with typical development (TD, N=19), considering unimodal (faces or voices) and multimodal (faces and voices presented simultaneously) stimuli and developmental comorbidities (neuro-visual, language and motor impairments). Compared to TD controls, children with ASD had rather high and heterogeneous emotion recognition scores but also showed several significant differences: lower emotion recognition scores for visual stimuli and for neutral emotion, and a greater number of saccades during the visual task. Multivariate analyses showed that: (1) the difficulties they experienced with visual stimuli were partially alleviated with multimodal stimuli; (2) developmental age was significantly associated with emotion recognition in TD children, whereas for children with ASD this was the case only in the multimodal task; (3) language impairments tended to be associated with the emotion recognition scores of children with ASD in the auditory modality. Conversely, in the visual or bimodal (visuo-auditory) tasks, no impact of developmental coordination disorder or neuro-visual impairments was found. We conclude that impaired emotion processing constitutes a dimension to explore in the field of ASD, as research has the potential to define more homogeneous subgroups and tailored interventions. However, it is clear that developmental age, the nature of the stimuli, and other developmental comorbidities must also be taken into account when studying this dimension.

  7. LSD Acutely Impairs Fear Recognition and Enhances Emotional Empathy and Sociality

    OpenAIRE

    Dolder, Patrick C.; Schmid, Yasmin; Müller, Felix; Borgwardt, Stefan; Liechti, Matthias E

    2016-01-01

    Lysergic acid diethylamide (LSD) is used recreationally and has been evaluated as an adjunct to psychotherapy for treating anxiety in patients with life-threatening illness. LSD is well known to induce perceptual alterations, but it is unknown whether LSD alters emotional processing in ways that can support psychotherapy. We investigated the acute effects of LSD on emotional processing using the Face Emotion Recognition Task (FERT) and the Multifaceted Empathy Test (MET). The effects of LSD on social be...

  8. Image preprocessing study on KPCA-based face recognition

    Science.gov (United States)

    Li, Xuan; Li, Dehua

    2015-12-01

    Face recognition, as an important biometric identification method with friendly, natural and convenient advantages, has attracted more and more attention. This paper studies a face recognition system comprising face detection, feature extraction and recognition, concentrating on the theory and key technology of various preprocessing methods in the face detection process and, using the KPCA method, on how different preprocessing methods lead to different recognition results. We choose the YCbCr colour space for skin segmentation and integral projection for face location. We preprocess the face images with morphological opening and closing operations (erosion and dilation) and an illumination compensation method, and then analyse recognition using kernel principal component analysis; the experiments were carried out on a typical face database, with the algorithms implemented on the MATLAB platform. Experimental results show that the kernel extension of the PCA algorithm, as a nonlinear feature extraction method, can under certain conditions make the extracted features represent the original image information better and thus obtain a higher recognition rate. In the image preprocessing stage, we found that different operations on the images can produce different results and hence different recognition rates in the recognition stage. At the same time, in kernel principal component analysis, the degree (power) of the polynomial kernel function can affect the recognition result.
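
    As a minimal sketch of the recognition stage described above (assuming faces have already been detected, preprocessed and flattened to vectors), kernel PCA with a polynomial kernel can be paired with nearest-neighbour matching via scikit-learn; the component count and polynomial degree are illustrative, and, as the abstract notes, changing the degree can change the recognition result:

    ```python
    from sklearn.decomposition import KernelPCA
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.pipeline import make_pipeline

    def train_kpca_recognizer(X_train, y_train, degree=2, n_components=50):
        """X_train: (n_samples, n_pixels) preprocessed face images; y_train: ids."""
        model = make_pipeline(
            KernelPCA(n_components=n_components, kernel="poly", degree=degree),
            KNeighborsClassifier(n_neighbors=1),  # nearest neighbour in KPCA space
        )
        return model.fit(X_train, y_train)

    # Usage sketch:
    # recognizer = train_kpca_recognizer(train_faces, train_ids)
    # predicted_id = recognizer.predict(test_face.reshape(1, -1))
    ```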

  9. Familiar Person Recognition: Is Autonoetic Consciousness More Likely to Accompany Face Recognition Than Voice Recognition?

    Science.gov (United States)

    Barsics, Catherine; Brédart, Serge

    2010-11-01

    Autonoetic consciousness is a fundamental property of human memory, enabling us to experience mental time travel, to recollect past events with a feeling of self-involvement, and to project ourselves into the future. Autonoetic consciousness is a characteristic of episodic memory. By contrast, awareness of the past associated with a mere feeling of familiarity or knowing relies on noetic consciousness, depending on semantic memory integrity. The present research was aimed at evaluating whether conscious recollection of episodic memories is more likely to occur following the recognition of a familiar face than following the recognition of a familiar voice. Recall of semantic (biographical) information was also assessed. Previous studies that investigated the recall of biographical information following person recognition used faces and voices of famous people as stimuli. In this study, the participants were presented with personally familiar people's voices and faces, thus avoiding the presence of identity cues in the spoken extracts and allowing stricter control of frequency of exposure with both types of stimuli (voices and faces). In the present study, the rate of retrieved episodic memories, associated with autonoetic awareness, was significantly higher for familiar faces than for familiar voices, even though the level of overall recognition was similar for both stimulus domains. The same pattern was observed for semantic information retrieval. These results and their implications for current Interactive Activation and Competition person recognition models are discussed.

  10. More Pronounced Deficits in Facial Emotion Recognition for Schizophrenia than Bipolar Disorder

    Science.gov (United States)

    Goghari, Vina M; Sponheim, Scott R

    2012-01-01

    Schizophrenia and bipolar disorder are typically separated in diagnostic systems. Behavioural, cognitive, and brain abnormalities associated with each disorder nonetheless overlap. We evaluated the diagnostic specificity of facial emotion recognition deficits in schizophrenia and bipolar disorder to determine whether select aspects of emotion recognition differed for the two disorders. The investigation used an experimental task that included the same facial images in an emotion recognition condition and an age recognition condition (to control for processes associated with general face recognition) in 27 schizophrenia patients, 16 bipolar I patients, and 30 controls. Schizophrenia and bipolar patients exhibited both shared and distinct aspects of facial emotion recognition deficits. Schizophrenia patients had deficits in recognizing angry facial expressions compared to healthy controls and bipolar patients. Compared to control participants, both schizophrenia and bipolar patients were more likely to mislabel facial expressions of anger as fear. Given that schizophrenia patients exhibited a deficit in emotion recognition for angry faces, which did not appear due to generalized perceptual and cognitive dysfunction, improving recognition of threat-related expression may be an important intervention target to improve social functioning in schizophrenia. PMID:23218816

  11. Social anhedonia is associated with neural abnormalities during face emotion processing.

    Science.gov (United States)

    Germine, Laura T; Garrido, Lucia; Bruce, Lori; Hooker, Christine

    2011-10-01

    Human beings are social organisms with an intrinsic desire to seek and participate in social interactions. Social anhedonia is a personality trait characterized by a reduced desire for social affiliation and reduced pleasure derived from interpersonal interactions. Abnormally high levels of social anhedonia prospectively predict the development of schizophrenia and contribute to poorer outcomes for schizophrenia patients. Despite the strong association between social anhedonia and schizophrenia, the neural mechanisms that underlie individual differences in social anhedonia have not been studied and are thus poorly understood. Deficits in face emotion recognition are related to poorer social outcomes in schizophrenia, and it has been suggested that face emotion recognition deficits may be a behavioral marker for schizophrenia liability. In the current study, we used functional magnetic resonance imaging (fMRI) to see whether there are differences in the brain networks underlying basic face emotion processing in a community sample of individuals low vs. high in social anhedonia. We isolated the neural mechanisms related to face emotion processing by comparing face emotion discrimination with four other baseline conditions (identity discrimination of emotional faces, identity discrimination of neutral faces, object discrimination, and pattern discrimination). Results showed a group (high/low social anhedonia) × condition (emotion discrimination/control condition) interaction in the anterior portion of the rostral medial prefrontal cortex, right superior temporal gyrus, and left somatosensory cortex. As predicted, high (relative to low) social anhedonia participants showed less neural activity in face emotion processing regions during emotion discrimination as compared to each control condition. The findings suggest that social anhedonia is associated with abnormalities in networks responsible for basic processes associated with social cognition, and provide a…

  12. What’s in a Face? How Face Gender and Current Affect Influence Perceived Emotion

    Science.gov (United States)

    Harris, Daniel A.; Hayes-Skelton, Sarah A.; Ciaramitaro, Vivian M.

    2016-01-01

    Faces drive our social interactions. A vast literature suggests an interaction between gender and emotional face perception, with studies using different methodologies demonstrating that the gender of a face can affect how emotions are processed. However, how different is our perception of affective male and female faces? Furthermore, how does our current affective state when viewing faces influence our perceptual biases? We presented participants with a series of faces morphed along an emotional continuum from happy to angry. Participants judged each face morph as either happy or angry. We determined each participant’s unique emotional ‘neutral’ point, defined as the face morph judged to be perceived equally happy and angry, separately for male and female faces. We also assessed how current state affect influenced these perceptual neutral points. Our results indicate that, for both male and female participants, the emotional neutral point for male faces is perceptually biased to be happier than for female faces. This bias suggests that more happiness is required to perceive a male face as emotionally neutral, i.e., we are biased to perceive a male face as more negative. Interestingly, we also find that perceptual biases in perceiving female faces are correlated with current mood, such that positive state affect correlates with perceiving female faces as happier, while we find no significant correlation between negative state affect and the perception of facial emotion. Furthermore, we find reaction time biases, with slower responses for angry male faces compared to angry female faces. PMID:27733839

  13. Emotion processing in Parkinson's disease: a three-level study on recognition, representation, and regulation.

    Directory of Open Access Journals (Sweden)

    Ivan Enrici

    Full Text Available Parkinson's disease (PD) is characterised by well-known motor symptoms, whereas the presence of cognitive non-motor symptoms, such as emotional disturbances, is still underestimated. One of the major problems in studying emotion deficits in PD is an atomising approach that does not take into account different levels of emotion elaboration. Our study addressed the question of whether people with PD exhibit difficulties in one or more specific dimensions of emotion processing, investigating three different levels of analysis, that is, recognition, representation, and regulation. Thirty-two consecutive medicated patients with PD and 25 healthy controls were enrolled in the study. Participants performed a three-level assessment of emotional processing using quantitative standardised emotional tasks: the Ekman 60-Faces for emotion recognition, the full 36-item version of the Reading the Mind in the Eyes (RME) for emotion representation, and the 20-item Toronto Alexithymia Scale (TAS-20) for emotion regulation. Regarding emotion recognition, patients obtained significantly worse scores than controls in the total score of the Ekman 60-Faces but not in any other basic emotions. For emotion representation, patients obtained significantly worse scores than controls in the RME experimental score but not in the RME gender control task. Finally, on emotion regulation, PD patients and controls did not perform differently on the TAS-20, and no specific differences were found on the TAS-20 subscales. The PD impairments in emotion recognition and representation do not correlate with dopamine therapy, disease severity, or duration of illness. These results are independent of other cognitive processes, such as global cognitive status and executive function, and of psychiatric status, such as depression, anxiety or apathy. These results may contribute to a better understanding of the emotional problems that are often seen in patients with PD and the measures used to test

  14. Emotion processing in Parkinson's disease: a three-level study on recognition, representation, and regulation.

    Science.gov (United States)

    Enrici, Ivan; Adenzato, Mauro; Ardito, Rita B; Mitkova, Antonia; Cavallo, Marco; Zibetti, Maurizio; Lopiano, Leonardo; Castelli, Lorys

    2015-01-01

    Parkinson's disease (PD) is characterised by well-known motor symptoms, whereas the presence of cognitive non-motor symptoms, such as emotional disturbances, is still underestimated. One of the major problems in studying emotion deficits in PD is an atomising approach that does not take into account different levels of emotion elaboration. Our study addressed the question of whether people with PD exhibit difficulties in one or more specific dimensions of emotion processing, investigating three different levels of analysis, that is, recognition, representation, and regulation. Thirty-two consecutive medicated patients with PD and 25 healthy controls were enrolled in the study. Participants performed a three-level assessment of emotional processing using quantitative standardised emotional tasks: the Ekman 60-Faces for emotion recognition, the full 36-item version of the Reading the Mind in the Eyes (RME) for emotion representation, and the 20-item Toronto Alexithymia Scale (TAS-20) for emotion regulation. Regarding emotion recognition, patients obtained significantly worse scores than controls in the total score of the Ekman 60-Faces but not in any other basic emotions. For emotion representation, patients obtained significantly worse scores than controls in the RME experimental score but not in the RME gender control task. Finally, on emotion regulation, PD patients and controls did not perform differently on the TAS-20, and no specific differences were found on the TAS-20 subscales. The PD impairments in emotion recognition and representation do not correlate with dopamine therapy, disease severity, or duration of illness. These results are independent of other cognitive processes, such as global cognitive status and executive function, and of psychiatric status, such as depression, anxiety or apathy. These results may contribute to a better understanding of the emotional problems that are often seen in patients with PD and the measures used to test these problems

  15. Multi—pose Color Face Recognition in a Complex Background

    Institute of Scientific and Technical Information of China (English)

    ZHU Changren; WANG Runsheng

    2003-01-01

    Face recognition has wide application fields. In the literature, most algorithms that deal with face recognition in static images assume a simple background and are used only for ID-photo recognition, so it is necessary to study the whole process of multi-pose face recognition against a cluttered background. In this paper an automatic multi-pose face recognition system with multiple features is proposed. It consists of several steps: face detection, detection and location of the facial organs, feature extraction for recognition, and the recognition decision. In face detection, skin colour is combined with multiple verification steps, consisting of analysis of the shape, local organ features and a head model, to improve performance. In detecting and locating the facial organ feature points, multiple features and their projections are analysed, and an iterative search with a confidence function is combined with template matching at the candidate points to improve accuracy and speed. In feature extraction for recognition, geometry normalization based on a three-point affine transform is adopted to preserve as much information as possible before the principal component analysis (PCA) feature extraction. In the recognition decision, a hierarchical face model with a division of the face poses is introduced to reduce the retrieval space and thus cut time consumption; in addition, a fusion decision is applied to improve face recognition performance, and the pose recognition result is obtained simultaneously. The new approach was applied to 420 colour images of multi-pose faces with two visible eyes against a complex background, and the results are satisfactory.
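
    The skin-colour stage of such a pipeline is straightforward to sketch. The OpenCV snippet below segments skin-like pixels in YCbCr space and returns candidate regions for the later verification steps; the Cb/Cr thresholds are common textbook values assumed here, not the paper's:

    ```python
    import cv2
    import numpy as np

    def skin_candidates(bgr_image, min_area=2000):
        """Return (x, y, w, h) candidate face regions from skin-colour cues."""
        ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
        # OpenCV orders the channels Y, Cr, Cb; typical skin range is Cr 133-173, Cb 77-127
        mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        # Morphological opening removes speckle from the binary skin mask
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
        n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
        return [tuple(stats[i, :4]) for i in range(1, n)
                if stats[i, cv2.CC_STAT_AREA] >= min_area]
    ```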

  16. Face age and sex modulate the other-race effect in face recognition.

    Science.gov (United States)

    Wallis, Jennifer; Lipp, Ottmar V; Vanman, Eric J

    2012-11-01

    Faces convey a variety of socially relevant cues that have been shown to affect recognition, such as age, sex, and race, but few studies have examined the interactive effect of these cues. White participants of two distinct age groups were presented with faces that differed in race, age, and sex in a face recognition paradigm. Replicating the other-race effect, young participants recognized young own-race faces better than young other-race faces. However, recognition performance did not differ across old faces of different races (Experiments 1, 2A). In addition, participants showed an other-age effect, recognizing White young faces better than White old faces. Sex affected recognition performance only when age was not varied (Experiment 2B). Overall, older participants showed a similar recognition pattern (Experiment 3) as young participants, displaying an other-race effect for young, but not old, faces. However, they recognized young and old White faces on a similar level. These findings indicate that face cues interact to affect recognition performance such that age and sex information reliably modulate the effect of race cues. These results extend accounts of face recognition that explain recognition biases (such as the other-race effect) as a function of dichotomous ingroup/outgroup categorization, in that outgroup characteristics are not simply additive but interactively determine recognition performance.

  17. A Survey of 2D Face Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Mejda Chihaoui

    2016-09-01

    Full Text Available Despite the existence of various biometric techniques, such as fingerprints, iris scans and hand geometry, the most efficient and most widely used one is face recognition, because it is inexpensive, non-intrusive and natural. Researchers have therefore developed dozens of face recognition techniques over the last few years. These techniques can generally be divided into three categories, based on the face data processing methodology: methods that use the entire face as input data for the proposed recognition system, methods that do not consider the whole face but only some features or areas of the face, and methods that use global and local face characteristics simultaneously. In this paper, we present an overview of some well-known methods in each of these categories. First, we expose the benefits of, as well as the challenges to, the use of face recognition as a biometric tool. Then, we present a detailed survey of the well-known methods, describing each method's principle. After that, a comparison between the three categories of face recognition techniques is provided. Furthermore, the databases used in face recognition are mentioned, and some results of applying these methods on face recognition databases are presented. Finally, we highlight some new promising research directions that have recently appeared.

  18. Interpersonal self-support and attentional disengagement from emotional faces.

    Science.gov (United States)

    Xia, Ling-Xiang; Shi, Xu-Liang; Zhang, Ran-Ran; Hollon, Steven D

    2015-01-08

    Prior studies have shown that interpersonal self-support is related to emotional symptoms. The present study explored the relationship between interpersonal self-support and attentional disengagement from emotional faces. A spatial cueing task was administered to 21 high and 24 low interpersonal self-support Chinese undergraduate students to assess difficulty in shifting attention away from emotional faces. Sidak-corrected multiple pairwise tests revealed that the low interpersonal self-support group had greater response latencies for negative faces than for neutral or positive faces in the invalid-cue condition, F(2, 41) = 5.68, p < .05, and that the low interpersonal self-support group responded more slowly than the high interpersonal self-support group to negative faces, F(1, 42) = 7.63, p < .05. These results indicate that low interpersonal self-support is related to difficulty disengaging from negative emotional information and suggest that interpersonal self-support may reflect emotional dispositions, especially negative emotional dispositions.

  19. Recognition of emotional expressions in blended faces and gender discrimination by children with autism

    Institute of Scientific and Technical Information of China (English)

    闫瑾; 姜志梅; 郭岚敏; 吕洋; 孙奇峰; 李兴洲; 王立苹

    2012-01-01

    [Objective] To test the ability of children with autism to recognize emotional expressions in blended faces and to discriminate gender from the eyes and mouth. [Methods] Thirty-two male children with autism and thirty-two typically developing children, matched on developmental age and gender, were selected. They were tested with the Emotional Expressions Recognition Software System developed in this research, which took recognition accuracy rate and response time under different presentation manners as analysis indexes. [Results] 1) The accuracy rate for emotional expressions was significantly lower in children with autism than in typically developing children [(58.0 ± 15.6)% vs (78.4 ± 13.5)%, t = -5.4, P = 0.000], and their response time was longer [(9 948.3 ± 3 116.2) ms vs (5 617.0 ± 1 362.9) ms, t = 4.7, P = 0.000]. 2) The accuracy rate of gender discrimination was also significantly lower in children with autism than in typically developing children [eyes: (76.7 ± 11.5)% vs (86.6 ± 10.9)%; mouth: (66.2 ± 12.8)% vs (73.1 ± 10.7)%], and their response time was longer [eyes: (4 138.7 ± 542.0) ms vs (2 721.9 ± 636.6) ms; mouth: (3 807.8 ± 710.1) ms vs (2 836.5 ± 619.9) ms]. [Conclusions] Children with autism are inclined to attend to the lower face when making judgments about emotional expressions; they can use information from the eyes for gender discrimination, and do not appear to be superior to typically developing children at using mouth information to process gender.

  20. PCA Based Rapid and Real Time Face Recognition Technique

    Directory of Open Access Journals (Sweden)

    T R Chandrashekar

    2013-12-01

    Full Text Available Face biometrics, being economical, efficient and usable in various applications, has been a popular form of biometric system. Face recognition has been a topic of research for the last few decades, and several techniques have been proposed to improve the performance of face recognition systems. Accuracy is tested against intensity, distance from camera, and pose variance; multiple-face recognition is another subtopic under current research. The speed at which a technique works is a further parameter for evaluating it: a support vector machine, for example, performs well for face recognition, but its computational efficiency degrades significantly as the number of classes increases, while the eigenface technique produces good features for face recognition but with comparatively lower accuracy than many other techniques. With the increasing use of multi-core processors in personal computers, and with applications demanding fast processing and multiple-face detection and recognition (for example, an entry-detection system in a shopping mall or an industrial site), demand for such automated systems is growing worldwide. In this paper we propose a novel face recognition system, developed with C#.Net, that can detect multiple faces and recognize them in parallel by utilizing the system resources and processor cores. The system is built around Haar-cascade-based face detection and PCA-based face recognition, with a parallel library designed for .Net used to enable high-speed detection and recognition of faces in real time. Analysis of the performance of the proposed technique against some conventional techniques reveals that it is not only accurate but also fast.
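
    Although the system described was built with C#.Net, the same Haar-cascade detection plus PCA (eigenface) recognition pipeline can be sketched in Python with OpenCV; the parameter values are illustrative defaults, and the per-face loop is the part such a system would run in parallel across cores:

    ```python
    import cv2
    import numpy as np

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    recognizer = cv2.face.EigenFaceRecognizer_create()  # PCA-based; needs opencv-contrib-python

    def train(face_crops, person_ids, size=(100, 100)):
        """face_crops: grayscale face images; person_ids: integer labels."""
        recognizer.train([cv2.resize(f, size) for f in face_crops],
                         np.array(person_ids))

    def recognize_all(gray_frame, size=(100, 100)):
        """Detect every face in the frame and predict an identity for each."""
        results = []
        for (x, y, w, h) in detector.detectMultiScale(gray_frame, 1.1, 5):
            face = cv2.resize(gray_frame[y:y + h, x:x + w], size)
            label, distance = recognizer.predict(face)  # smaller distance = closer match
            results.append(((x, y, w, h), label, distance))
        return results
    ```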

  1. Newborns' Face Recognition: Role of Inner and Outer Facial Features

    Science.gov (United States)

    Turati, Chiara; Macchi Cassia, Viola; Simion, Francesca; Leo, Irene

    2006-01-01

    Existing data indicate that newborns are able to recognize individual faces, but little is known about what perceptual cues drive this ability. The current study showed that either the inner or outer features of the face can act as sufficient cues for newborns' face recognition (Experiment 1), but the outer part of the face enjoys an advantage…

  2. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    Science.gov (United States)

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…

  4. Emotional recognition in depressed epilepsy patients.

    Science.gov (United States)

    Brand, Jesse G; Burton, Leslie A; Schaffer, Sarah G; Alper, Kenneth R; Devinsky, Orrin; Barr, William B

    2009-07-01

    The current study examined the relationship between emotional recognition and depression using the Minnesota Multiphasic Personality Inventory, Second Edition (MMPI-2), in a population with epilepsy. Participants were a mixture of surgical candidates in addition to those receiving neuropsychological testing as part of a comprehensive evaluation. Results suggested that patients with epilepsy reporting increased levels of depression (Scale D) performed better than those patients reporting low levels of depression on an index of simple facial recognition, and depression was associated with poor prosody discrimination. Further, it is notable that more than half of the present sample had significantly elevated Scale D scores. The potential effects of a mood-congruent bias and implications for social functioning in depressed patients with epilepsy are discussed.

  5. How Aging Affects the Recognition of Emotional Speech

    Science.gov (United States)

    Paulmann, Silke; Pell, Marc D.; Kotz, Sonja A.

    2008-01-01

    To successfully infer a speaker's emotional state, diverse sources of emotional information need to be decoded. The present study explored to what extent emotional speech recognition of "basic" emotions (anger, disgust, fear, happiness, pleasant surprise, sadness) differs between different sex (male/female) and age (young/middle-aged) groups in a…

  6. The structural neuroanatomy of music emotion recognition: evidence from frontotemporal lobar degeneration.

    Science.gov (United States)

    Omar, Rohani; Henley, Susie M D; Bartlett, Jonathan W; Hailstone, Julia C; Gordon, Elizabeth; Sauter, Disa A; Frost, Chris; Scott, Sophie K; Warren, Jason D

    2011-06-01

    Despite growing clinical and neurobiological interest in the brain mechanisms that process emotion in music, these mechanisms remain incompletely understood. Patients with frontotemporal lobar degeneration (FTLD) frequently exhibit clinical syndromes that illustrate the effects of breakdown in emotional and social functioning. Here we investigated the neuroanatomical substrate for recognition of musical emotion in a cohort of 26 patients with FTLD (16 with behavioural variant frontotemporal dementia, bvFTD, 10 with semantic dementia, SemD) using voxel-based morphometry. On neuropsychological evaluation, patients with FTLD showed deficient recognition of canonical emotions (happiness, sadness, anger and fear) from music as well as faces and voices compared with healthy control subjects. Impaired recognition of emotions from music was specifically associated with grey matter loss in a distributed cerebral network including insula, orbitofrontal cortex, anterior cingulate and medial prefrontal cortex, anterior temporal and more posterior temporal and parietal cortices, amygdala and the subcortical mesolimbic system. This network constitutes an essential brain substrate for recognition of musical emotion that overlaps with brain regions previously implicated in coding emotional value, behavioural context, conceptual knowledge and theory of mind. Musical emotion recognition may probe the interface of these processes, delineating a profile of brain damage that is essential for the abstraction of complex social emotions.

  7. The Effects of Anxiety on the Recognition of Multisensory Emotional Cues with Different Cultural Familiarity

    Directory of Open Access Journals (Sweden)

    Ai Koizumi

    2011-10-01

    Full Text Available Anxious individuals have been shown to interpret others' facial expressions negatively. However, whether this negative interpretation bias depends on the modality and familiarity of emotional cues remains largely unknown. We examined whether trait-anxiety affects recognition of multisensory emotional cues (ie, face and voice), which were expressed by actors from either the same or a different cultural background as the participants (ie, familiar in-group and unfamiliar out-group). The dynamic face and voice cues of the same actors were synchronized, and conveyed either congruent (eg, happy face and voice) or incongruent emotions (eg, happy face and angry voice). Participants were to indicate the perceived emotion in one of the cues, while ignoring the other. The results showed that when recognizing emotions of in-group actors, highly anxious individuals, compared with low anxious ones, were more likely to interpret others' emotions in a negative manner, putting more weight on the to-be-ignored angry cues. This interpretation bias was found regardless of the cue modality. However, when recognizing emotions of out-group actors, low and high anxious individuals showed no difference in the interpretation of emotions irrespective of modality. These results suggest that trait-anxiety affects recognition of emotional expressions in a modality-independent yet cultural-familiarity-dependent manner.

  8. Examplers based image fusion features for face recognition

    CERN Document Server

    James, Alex Pappachen

    2012-01-01

    Examplers of a face are formed from multiple gallery images of a person and are used in the process of classifying a test image. We incorporate such examplers into a biologically inspired face recognition method based on local binary decisions on similarity. As opposed to single-model approaches such as face averages, the exampler-based approach results in higher recognition accuracies and greater stability. Using multiple training samples per person, the method shows the following recognition accuracies: 99.0% on AR, 99.5% on FERET, 99.5% on ORL, 99.3% on EYALE, 100.0% on YALE and 100.0% on CALTECH face databases. In addition to face recognition, the method also detects the natural variability in face images, which can find application in automatic tagging of face images.

  9. Isolating the Special Component of Face Recognition: Peripheral Identification and a Mooney Face

    Science.gov (United States)

    McKone, Elinor

    2004-01-01

    A previous finding argues that, for faces, configural (holistic) processing can operate even in the complete absence of part-based contributions to recognition. Here, this result is confirmed using 2 methods. In both, recognition of inverted faces (parts only) was removed altogether (chance identification of faces in the periphery; no perception…

  10. Familiar Face Recognition in Children with Autism: The Differential Use of Inner and Outer Face Parts

    Science.gov (United States)

    Wilson, Rebecca; Pascalis, Olivier; Blades, Mark

    2007-01-01

    We investigated whether children with autistic spectrum disorders (ASD) have a deficit in recognising familiar faces. Children with ASD were given a forced choice familiar face recognition task with three conditions: full faces, inner face parts and outer face parts. Control groups were children with developmental delay (DD) and typically…

  11. Dopamine and light: effects on facial emotion recognition.

    Science.gov (United States)

    Cawley, Elizabeth; Tippler, Maria; Coupland, Nicholas J; Benkelfat, Chawki; Boivin, Diane B; Aan Het Rot, Marije; Leyton, Marco

    2017-06-01

    Bright light can affect mood states and social behaviours. Here, we tested potential interacting effects of light and dopamine on facial emotion recognition. Participants were 32 women with subsyndromal seasonal affective disorder tested in either a bright (3000 lux) or dim light (10 lux) environment. Each participant completed two test days, one following the ingestion of a phenylalanine/tyrosine-deficient mixture and one with a nutritionally balanced control mixture, both administered double blind in a randomised order. Approximately four hours post-ingestion participants completed a self-report measure of mood followed by a facial emotion recognition task. All testing took place between November and March when seasonal symptoms would be present. Following acute phenylalanine/tyrosine depletion (APTD), compared to the nutritionally balanced control mixture, participants in the dim light condition were more accurate at recognising sad faces, less likely to misclassify them, and faster at responding to them, effects that were independent of changes in mood. Effects of APTD on responses to sad faces in the bright light group were less consistent. There were no APTD effects on responses to other emotions, with one exception: a significant light × mixture interaction was seen for the reaction time to fear, but the pattern of effect was not predicted a priori or seen on other measures. Together, the results suggest that the processing of sad emotional stimuli might be greater when dopamine transmission is low. Bright light exposure, used for the treatment of both seasonal and non-seasonal mood disorders, might produce some of its benefits by preventing this effect.

  12. Emotion recognition and social skills in child and adolescent offspring of parents with schizophrenia.

    Science.gov (United States)

    Horton, Leslie E; Bridgwater, Miranda A; Haas, Gretchen L

    2017-05-01

    Emotion recognition, a social cognition domain, is impaired in people with schizophrenia and contributes to social dysfunction. Whether impaired emotion recognition emerges as a manifestation of illness or predates symptoms is unclear. Findings from studies of emotion recognition impairments in first-degree relatives of people with schizophrenia are mixed and, to our knowledge, no studies have investigated the link between emotion recognition and social functioning in that population. This study examined facial affect recognition and social skills in 16 offspring of parents with schizophrenia (familial high-risk/FHR) compared to 34 age- and sex-matched healthy controls (HC), ages 7-19. As hypothesised, FHR children exhibited impaired overall accuracy, accuracy in identifying fearful faces, and overall recognition speed relative to controls. Age-adjusted facial affect recognition accuracy scores predicted parent's overall rating of their child's social skills for both groups. This study supports the presence of facial affect recognition deficits in FHR children. Importantly, as the first known study to suggest the presence of these deficits in young, asymptomatic FHR children, it extends findings to a developmental stage predating symptoms. Further, findings point to a relationship between early emotion recognition and social skills. Improved characterisation of deficits in FHR children could inform early intervention.

  13. Recognition of Moving and Static Faces by Young Infants

    Science.gov (United States)

    Otsuka, Yumiko; Konishi, Yukuo; Kanazawa, So; Yamaguchi, Masami K.; Abdi, Herve; O'Toole, Alice J.

    2009-01-01

    This study compared 3- to 4-month-olds' recognition of previously unfamiliar faces learned in a moving or a static condition. Infants in the moving condition showed successful recognition with only 30 s familiarization, even when different images of a face were used in the familiarization and test phase (Experiment 1). In contrast, infants in the…

  14. Transfer between Pose and Illumination Training in Face Recognition

    Science.gov (United States)

    Liu, Chang Hong; Bhuiyan, Md. Al-Amin; Ward, James; Sui, Jie

    2009-01-01

    The relationship between pose and illumination learning in face recognition was examined in a yes-no recognition paradigm. The authors assessed whether pose training can transfer to a new illumination or vice versa. Results show that an extensive level of pose training through a face-name association task was able to generalize to a new…

  15. Recognition of human face based on improved multi-sample

    Institute of Scientific and Technical Information of China (English)

    LIU Xia; LI Lei-lei; LI Ting-jun; LIU Lu; ZHANG Ying

    2009-01-01

    In order to solve the problem caused by illumination variation in human face recognition, we propose a face recognition algorithm based on an improved multi-sample approach. In this algorithm, the face image is processed with Retinex theory, while a Gabor filter is adopted for feature extraction. The experimental results show that the application of Retinex theory improves recognition accuracy and makes the algorithm more robust to illumination variation. The Gabor filter is more effective and accurate at extracting usable local facial features. The proposed algorithm is shown to have good recognition accuracy and to remain stable under varying illumination.
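
    To make the pipeline concrete, here is a minimal Python sketch (using OpenCV and NumPy) of single-scale Retinex normalisation followed by Gabor-bank feature extraction; the paper's exact Retinex variant and filter parameters are not given, so the values below are illustrative placeholders:

        import cv2
        import numpy as np

        def single_scale_retinex(gray, sigma=30):
            # Log image minus log of a smoothed illumination estimate.
            img = gray.astype(np.float32) + 1.0
            illumination = cv2.GaussianBlur(img, (0, 0), sigma)
            return np.log(img) - np.log(illumination)

        def gabor_features(gray, ksize=31, sigmas=(4.0, 8.0), orientations=8):
            # Filter with a small Gabor bank and summarise each response map.
            feats = []
            for sigma in sigmas:
                for i in range(orientations):
                    theta = i * np.pi / orientations
                    kern = cv2.getGaborKernel((ksize, ksize), sigma, theta, 10.0, 0.5)
                    resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
                    feats.extend([resp.mean(), (resp ** 2).mean()])
            return np.array(feats)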

  16. An infrared human face recognition method based on 2DPCA

    Institute of Scientific and Technical Information of China (English)

    LIU Xia; Li Ting-jun

    2009-01-01

    Aimed at the problems of infrared image recognition under varying illumination, face disguise, etc., we propose an infrared human face recognition algorithm based on 2DPCA. The proposed algorithm computes the covariance matrix of the training samples easily and directly, and requires less time to compute the eigenvectors. Relevant experiments were carried out, and the results indicate that, compared with traditional recognition algorithms, the proposed method is fast and adapts well to changes in face pose.
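
    The distinctive step in 2DPCA is that the image covariance matrix is built from image matrices directly, without flattening them into vectors. A minimal NumPy sketch of that computation (dimensions and names are ours, not the paper's):

        import numpy as np

        def train_2dpca(images, d=10):
            # images: (M, h, w) stack of training faces.
            A = np.asarray(images, dtype=np.float64)
            mean = A.mean(axis=0)
            G = np.zeros((A.shape[2], A.shape[2]))      # image covariance matrix
            for img in A:
                diff = img - mean
                G += diff.T @ diff
            G /= len(A)
            vals, vecs = np.linalg.eigh(G)              # eigenvalues ascending
            X = vecs[:, ::-1][:, :d]                    # top-d projection axes
            return mean, X

        def project_2dpca(img, mean, X):
            # Feature matrix of shape (h, d); compare e.g. by Frobenius distance.
            return (img - mean) @ X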

  17. When the face fits: recognition of celebrities from matching and mismatching faces and voices.

    Science.gov (United States)

    Stevenage, Sarah V; Neil, Greg J; Hamlin, Iain

    2014-01-01

    The results of two experiments are presented in which participants engaged in a face-recognition or a voice-recognition task. The stimuli were face-voice pairs in which the face and voice were co-presented and were either "matched" (same person), "related" (two highly associated people), or "mismatched" (two unrelated people). Analysis in both experiments confirmed that accuracy and confidence in face recognition were consistently high regardless of the identity of the accompanying voice. However, accuracy of voice recognition was increasingly affected as the relationship between the voice and the accompanying face declined. Moreover, when considering self-reported confidence in voice recognition, confidence remained high for correct responses despite the proportion of these responses declining across conditions. These results converge with existing evidence indicating the vulnerability of voice recognition as a relatively weak signaller of identity, and the results are discussed in the context of a person-recognition framework.

  18. Facial emotion recognition in psychiatrists and influences of their therapeutic identification on that ability.

    Science.gov (United States)

    Dalkıran, Mihriban; Gultekin, Gozde; Yuksek, Erhan; Varsak, Nalan; Gul, Hesna; Kıncır, Zeliha; Tasdemir, Akif; Emul, Murat

    2016-08-01

    Although emotional cues like facial emotion expressions seem to be important in social interaction, there is no specific training in emotional cues for psychiatrists. Here, we aimed to investigate psychiatrists' facial emotion recognition ability and its relation to their therapeutic identification as psychotherapy- or psychopharmacology-oriented, and to their being adult or child-adolescent psychiatrists. A facial emotion recognition test, constructed from a set of Ekman and Friesen photographs (happy, sad, fearful, angry, surprised, disgusted and neutral faces), was administered to 130 psychiatrists. Psychotherapy-oriented adult psychiatrists were significantly better at recognizing the sad facial emotion (p=.003) than psychopharmacologists, while no significant differences according to therapeutic orientation were detected among child-adolescent psychiatrists (for each, p>.05). Adult psychiatrists were significantly better at recognizing fearful (p=.012) and disgusted (p=.003) facial emotions than child-adolescent psychiatrists, while the latter were better at recognizing the angry facial emotion (p=.008). For the first time, we have shown differences in psychiatrists' facial emotion recognition ability according to therapeutic identification and to being an adult or child-adolescent psychiatrist. It would be valuable to investigate how these differences, or training in facial emotion recognition, would affect the quality of patient-clinician interaction and treatment-related outcomes. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. A Review on Feature Extraction Techniques in Face Recognition

    Directory of Open Access Journals (Sweden)

    Rahimeh Rouhi

    2013-01-01

    Full Text Available Face recognition systems, due to their significant applications in the security domain, have been of great importance in recent years. An exact balance between computing cost, robustness and recognition ability is an important characteristic of such systems. Besides, designing systems that perform under different conditions (e.g., illumination, pose variation, different expressions, etc.) is a challenging problem in feature extraction for face recognition. As feature extraction is an important step in the face recognition operation, the present study reviews four feature extraction techniques for face recognition, presents comparative results, and then discusses the advantages and disadvantages of these methods.

  20. Local Feature Learning for Face Recognition under Varying Poses

    DEFF Research Database (Denmark)

    Duan, Xiaodong; Tan, Zheng-Hua

    2015-01-01

    In this paper, we present a local feature learning method for face recognition to deal with varying poses. As opposed to the commonly used approaches of recovering frontal face images from profile views, the proposed method extracts the subject-related part from a local feature by removing the pose-related part in it on the basis of a pose feature. The method has a closed-form solution, hence being time efficient. For performance evaluation, cross-pose face recognition experiments are conducted on two public face recognition databases, FERET and FEI. The proposed method shows a significant recognition improvement under varying poses over general local feature approaches and outperforms or is comparable with related state-of-the-art pose-invariant face recognition approaches. Copyright ©2015 by IEEE.

  1. A Real-Time Face Recognition System Using Eigenfaces

    Directory of Open Access Journals (Sweden)

    Daniel Georgescu

    2011-12-01

    Full Text Available A real-time system for recognizing faces in a video stream provided by a surveillance camera was implemented, including real-time face detection. Both face detection and face recognition techniques are briefly presented, without skipping the important technical aspects. The approach was essentially to implement and verify the Eigenfaces for Recognition algorithm, which solves the recognition problem for two-dimensional representations of faces using principal component analysis. The snapshots, which are the input images for the proposed system, are projected into a face space (feature space) that best encodes the variation in the training set of face images. The face space is defined by the 'eigenfaces', the eigenvectors of the set of faces. Each eigenface contributes, with a meaningful weight, to the reconstruction of a new face image projected onto the face space. The projection of the new image into this feature space is then compared to the stored projections of the training set, and the person is identified using the Euclidean distance. The implemented system performs real-time face detection and recognition and can give feedback by displaying a window with the subject's information from the database and sending an e-mail notification to interested institutions.
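
    A compact sketch of the eigenface pipeline the abstract outlines, projection onto a PCA face space followed by nearest-neighbour matching with the Euclidean distance, using scikit-learn (array names are hypothetical):

        import numpy as np
        from sklearn.decomposition import PCA

        def build_face_space(train_faces, n_components=50):
            # train_faces: (n_samples, h*w) flattened, aligned grayscale faces.
            pca = PCA(n_components=n_components)
            weights = pca.fit_transform(train_faces)     # coordinates in face space
            return pca, weights

        def identify(pca, weights, train_labels, probe_face):
            w = pca.transform(probe_face.reshape(1, -1))
            dists = np.linalg.norm(weights - w, axis=1)  # Euclidean distance in face space
            best = int(np.argmin(dists))
            return train_labels[best], float(dists[best])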

  2. Unaware person recognition from the body when face identification fails.

    Science.gov (United States)

    Rice, Allyson; Phillips, P Jonathon; Natu, Vaidehi; An, Xiaobo; O'Toole, Alice J

    2013-11-01

    How does one recognize a person when face identification fails? Here, we show that people rely on the body but are unaware of doing so. State-of-the-art face-recognition algorithms were used to select images of people with almost no useful identity information in the face. Recognition of the face alone in these cases was near chance level, but recognition of the person was accurate. Accuracy in identifying the person without the face was identical to that in identifying the whole person. Paradoxically, people reported relying heavily on facial features over noninternal face and body features in making their identity decisions. Eye movements indicated otherwise, with gaze duration and fixations shifting adaptively toward the body and away from the face when the body was a better indicator of identity than the face. This shift occurred with no cost to accuracy or response time. Human identity processing may be partially inaccessible to conscious awareness.

  3. Dogs can discriminate emotional expressions of human faces.

    Science.gov (United States)

    Müller, Corsin A; Schmitt, Kira; Barber, Anjuli L A; Huber, Ludwig

    2015-03-01

    The question of whether animals have emotions and respond to the emotional expressions of others has become a focus of research in the last decade [1-9]. However, to date, no study has convincingly shown that animals discriminate between emotional expressions of heterospecifics, excluding the possibility that they respond to simple cues. Here, we show that dogs use the emotion of a heterospecific as a discriminative cue. After learning to discriminate between happy and angry human faces in 15 picture pairs, whereby for one group only the upper halves of the faces were shown and for the other group only the lower halves of the faces were shown, dogs were tested with four types of probe trials: (1) the same half of the faces as in the training but of novel faces, (2) the other half of the faces used in training, (3) the other half of novel faces, and (4) the left half of the faces used in training. We found that dogs for which the happy faces were rewarded learned the discrimination more quickly than dogs for which the angry faces were rewarded. This would be predicted if the dogs recognized an angry face as an aversive stimulus. Furthermore, the dogs performed significantly above chance level in all four probe conditions and thus transferred the training contingency to novel stimuli that shared with the training set only the emotional expression as a distinguishing feature. We conclude that the dogs used their memories of real emotional human faces to accomplish the discrimination task.

  4. Parents’ Beliefs about Emotions and Children’s Recognition of Parents’ Emotions

    OpenAIRE

    Dunsmore, Julie C.; Her, Pa; Halberstadt, Amy G.; Perez-Rivera, Marie B.

    2009-01-01

    This study investigated parents’ emotion-related beliefs, experience, and expression, and children’s recognition of their parents’ emotions with 40 parent-child dyads. Parents reported beliefs about danger and guidance of children’s emotions. While viewing emotion-eliciting film clips, parents self-reported their emotional experience and masking of emotion. Children and observers rated videos of parents watching emotion-eliciting film clips. Fathers reported more masking than mothers and thei...

  5. A new method for face detection in colour images for emotional bio-robots

    Institute of Scientific and Technical Information of China (English)

    HAPESHI; Kevin

    2010-01-01

    Emotional bio-robots have become a hot research topic in the last two decades. Though there has been some progress in the research, design and development of various emotional bio-robots, few of them can be used in practical applications. The study of emotional bio-robots demands multi-disciplinary cooperation, involving computer science, artificial intelligence, 3D computation, engineering system modelling, analysis and simulation, bionics engineering, automatic control, and image processing and pattern recognition. Among these, face detection belongs to image processing and pattern recognition. An emotional robot must be able to recognize various objects; in particular, it is very important for a bio-robot to be able to recognize human faces in an image. In this paper, a face detection method is proposed for identifying any human faces in colour images using a human skin model and an eye detection method. First, the method detects skin regions in the input colour image after normalizing its luminance. Then, all face candidates are identified using an eye detection method. Compared with existing algorithms, this method relies only on the colour and geometrical data of the human face rather than on training datasets. Experimental results show that the method is effective and fast, and that it can be applied to the development of an emotional bio-robot with further improvements to its speed and accuracy.
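
    The paper's own skin model is not reproduced here, but a commonly used stand-in, luminance normalisation followed by a fixed Cr/Cb range test in YCrCb space, looks roughly like this in Python with OpenCV (the threshold values are conventional approximations, not the authors'):

        import cv2
        import numpy as np

        def skin_mask(bgr):
            # Normalise luminance, then keep pixels inside an approximate skin range.
            ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
            ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
            lower = np.array([0, 133, 77], dtype=np.uint8)     # (Y, Cr, Cb) lower bounds
            upper = np.array([255, 173, 127], dtype=np.uint8)
            mask = cv2.inRange(ycrcb, lower, upper)
            kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
            return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckles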

  6. Face averages enhance user recognition for smartphone security.

    Science.gov (United States)

    Robertson, David J; Kramer, Robin S S; Burton, A Mike

    2015-01-01

    Our recognition of familiar faces is excellent, and generalises across viewing conditions. However, unfamiliar face recognition is much poorer. For this reason, automatic face recognition systems might benefit from incorporating the advantages of familiarity. Here we put this to the test using the face verification system available on a popular smartphone (the Samsung Galaxy). In two experiments we tested the recognition performance of the smartphone when it was encoded with an individual's 'face-average'--a representation derived from theories of human face perception. This technique significantly improved performance for both unconstrained celebrity images (Experiment 1) and for real faces (Experiment 2): users could unlock their phones more reliably when the device stored an average of the user's face than when they stored a single image. This advantage was consistent across a wide variety of everyday viewing conditions. Furthermore, the benefit did not reduce the rejection of imposter faces. This benefit is brought about solely by consideration of suitable representations for automatic face recognition, and we argue that this is just as important as development of matching algorithms themselves. We propose that this representation could significantly improve recognition rates in everyday settings.
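
    The 'face-average' representation itself is simple to compute once the images are aligned; a sketch (the alignment step, which the technique depends on, is assumed already done):

        import numpy as np

        def face_average(aligned_faces):
            # aligned_faces: (n, h, w[, 3]) images of one person, already aligned.
            stack = np.asarray(aligned_faces, dtype=np.float64)
            return stack.mean(axis=0)    # enrol this template instead of a single photo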

  7. Pose-Invariant Face Recognition via RGB-D Images

    Directory of Open Access Journals (Sweden)

    Gaoli Sang

    2016-01-01

    Full Text Available Three-dimensional (3D) face models can intrinsically handle the large-pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measurement via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on the Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information improves the performance of face recognition with large pose variations and under even more challenging conditions.

  8. PARTIAL MATCHING FACE RECOGNITION METHOD FOR REHABILITATION NURSING ROBOTS BEDS

    Directory of Open Access Journals (Sweden)

    Dongmei LIANG

    2015-06-01

    Full Text Available In order to establish a face recognition system for rehabilitation nursing robot beds and monitor the patient on the bed in real time, we propose a face recognition method based on partial matching of Hu moments. First, we use a Haar classifier to detect human faces automatically in dynamic video frames. Second, we use Otsu's threshold method to extract the facial features (eyebrows, eyes, mouth) in the face image and compute their Hu moments. Finally, we use the Hu moment feature set to achieve automatic face recognition. Experimental results show that this method can efficiently identify faces in dynamic video and has high practical value (the accuracy rate is 91% and the average recognition time is 4.3 s).
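
    A sketch of the per-region descriptor step, Otsu binarisation followed by Hu moment invariants, with OpenCV (the log-scaling is a common practice added for numerical comparability, not stated in the abstract):

        import cv2
        import numpy as np

        def region_hu_moments(gray_region):
            # Otsu-binarise a feature region (eyebrow, eye, mouth) and take Hu moments.
            _, binary = cv2.threshold(gray_region, 0, 255,
                                      cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            hu = cv2.HuMoments(cv2.moments(binary)).flatten()
            # Log-scale the invariants, which span many orders of magnitude.
            return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)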

  9. Automated facial coding: validation of basic emotions and FACS AUs in FaceReader

    NARCIS (Netherlands)

    P. Lewinski; T.M. den Uyl; C. Butler

    2014-01-01

    In this study, we validated automated facial coding (AFC) software—FaceReader (Noldus, 2014)—on 2 publicly available and objective datasets of human expressions of basic emotions. We present the matching scores (accuracy) for recognition of facial expressions and the Facial Action Coding System (FAC

  10. Graph Laplace for occluded face completion and recognition.

    Science.gov (United States)

    Deng, Yue; Dai, Qionghai; Zhang, Zengke

    2011-08-01

    This paper proposes a spectral-graph-based algorithm for face image repairing, which can improve recognition performance on occluded faces. The face completion algorithm proposed in this paper includes three main procedures: 1) sparse representation for partially occluded face classification; 2) image-based data mining; and 3) graph Laplace (GL) for face image completion. The novel part of the proposed framework is GL, named after graphical models and the Laplace equation, which can achieve high-quality repair of damaged or occluded faces. The relationship between GL and the traditional Poisson equation is proven. We apply our face repairing algorithm to produce completed faces, and use face recognition to evaluate the performance of the algorithm. Experimental results verify the effectiveness of the GL method for occluded face completion.
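
    The full GL algorithm is more involved, but its Laplace-equation core can be illustrated by harmonically filling the occluded pixels with Jacobi relaxation; a NumPy toy version (not the authors' implementation):

        import numpy as np

        def laplace_fill(img, mask, iters=500):
            # img: 2-D float image; mask: True where pixels are occluded/missing.
            out = img.astype(np.float64).copy()
            out[mask] = out[~mask].mean()            # neutral initial guess
            for _ in range(iters):
                avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                       np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
                out[mask] = avg[mask]                # relax only the missing pixels
            return out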

  11. Older Adults' Trait Impressions of Faces Are Sensitive to Subtle Resemblance to Emotions.

    Science.gov (United States)

    Franklin, Robert G; Zebrowitz, Leslie A

    2013-09-01

    Younger adults (YA) attribute emotion-related traits to people whose neutral facial structure resembles an emotion (emotion overgeneralization). The fact that older adults (OA) show deficits in accurately labeling basic emotions suggests that they may be relatively insensitive to variations in the emotion resemblance of neutral expression faces that underlie emotion overgeneralization effects. On the other hand, the fact that OA, like YA, show a 'pop-out' effect for anger, more quickly locating an angry than a happy face in a neutral array, suggests that both age groups may be equally sensitive to emotion resemblance. We used computer modeling to assess the degree to which neutral faces objectively resembled emotions and assessed whether that resemblance predicted trait impressions. We found that both OA and YA showed anger and surprise overgeneralization in ratings of danger and naiveté, respectively, with no significant differences in the strength of the effects for the two age groups. These findings suggest that well-documented OA deficits on emotion recognition tasks may be more due to processing demands than to an insensitivity to the social affordances of emotion expressions.

  12. Impaired recognition of prosody and subtle emotional facial expressions in Parkinson's disease.

    Science.gov (United States)

    Buxton, Sharon L; MacDonald, Lorraine; Tippett, Lynette J

    2013-04-01

    Accurately recognizing the emotional states of others is crucial for successful social interactions and social relationships. Individuals with Parkinson's disease (PD) have shown deficits in emotional recognition abilities although findings have been inconsistent. This study examined recognition of emotions from prosody and from facial emotional expressions with three levels of subtlety, in 30 individuals with PD (without dementia) and 30 control participants. The PD group were impaired on the prosody task, with no differential impairments in specific emotions. PD participants were also impaired at recognizing facial expressions of emotion, with a significant association between how well they could recognize emotions in the two modalities, even after controlling for disease severity. When recognizing facial expressions, the PD group had no difficulty identifying prototypical Ekman and Friesen (1976) emotional faces, but were poorer than controls at recognizing the moderate and difficult levels of subtle expressions. They were differentially impaired at recognizing moderately subtle expressions of disgust and sad expressions at the difficult level. Notably, however, they were impaired at recognizing happy expressions at both levels of subtlety. Furthermore how well PD participants identified happy expressions conveyed by either face or voice was strongly related to accuracy in the other modality. This suggests dysfunction of overlapping components of the circuitry processing happy expressions in PD. This study demonstrates the usefulness of including subtle expressions of emotion, likely to be encountered in everyday life, when assessing recognition of facial expressions.

  13. Eye spy: the predictive value of fixation patterns in detecting subtle and extreme emotions from faces.

    Science.gov (United States)

    Vaidya, Avinash R; Jin, Chenshuo; Fellows, Lesley K

    2014-11-01

    Successful social interaction requires recognizing subtle changes in the mental states of others. Deficits in emotion recognition are found in several neurological and psychiatric illnesses, and are often marked by disturbances in gaze patterns to faces, typically interpreted as a failure to fixate on emotionally informative facial features. However, there has been very little research on how fixations inform emotion recognition in healthy people. Here, we asked whether fixations predicted detection of subtle and extreme emotions in faces. We used a simple model to predict emotion detection scores from participants' fixation patterns. The best fit of this model heavily weighted fixations to the eyes in detecting subtle fear, disgust and surprise, with less weight, or zero weight, given to mouth and nose fixations. However, this model could not successfully predict detection of subtle happiness, or extreme emotional expressions, with the exception of fear. These findings argue that detection of most subtle emotions is best served by fixations to the eyes, with some contribution from nose and mouth fixations. In contrast, detection of extreme emotions and subtle happiness appeared to be less dependent on fixation patterns. The results offer a new perspective on some puzzling dissociations in the neuropsychological literature, and a novel analytic approach for the study of eye gaze in social or emotional settings.

  14. The activation of visual face memory and explicit face recognition are delayed in developmental prosopagnosia.

    Science.gov (United States)

    Parketny, Joanna; Towler, John; Eimer, Martin

    2015-08-01

    Individuals with developmental prosopagnosia (DP) are strongly impaired in recognizing faces, but the causes of this deficit are not well understood. We employed event-related brain potentials (ERPs) to study the time-course of neural processes involved in the recognition of previously unfamiliar faces in DPs and in age-matched control participants with normal face recognition abilities. Faces of different individuals were presented sequentially in one of three possible views, and participants had to detect a specific Target Face ("Joe"). EEG was recorded during task performance to Target Faces, Nontarget Faces, or the participants' Own Face (which had to be ignored). The N250 component was measured as a marker of the match between a seen face and a stored representation in visual face memory. The subsequent P600f was measured as an index of attentional processes associated with the conscious awareness and recognition of a particular face. Target Faces elicited reliable N250 and P600f in the DP group, but both of these components emerged later in DPs than in control participants. This shows that the activation of visual face memory for previously unknown learned faces and the subsequent attentional processing and conscious recognition of these faces are delayed in DP. N250 and P600f components to Own Faces did not differ between the two groups, indicating that the processing of long-term familiar faces is less affected in DP. However, P600f components to Own Faces were absent in two participants with DP who failed to recognize their Own Face during the experiment. These results provide new evidence that face recognition deficits in DP may be linked to a delayed activation of visual face memory and explicit identity recognition mechanisms.

  15. Face recognition using improved-LDA with facial combined feature

    Institute of Scientific and Technical Information of China (English)

    Dake Zhou; Xin Yang; Ningsong Peng

    2005-01-01

    Face recognition subjected to various conditions is a challenging task. This paper presents a combined-feature improved Fisher classifier method for face recognition. Both holistic and local facial information are used for face representation. In addition, improved linear discriminant analysis (I-LDA) is employed for good generalization capability. Experiments show that the method is not only robust to moderate changes of illumination, pose and facial expression, but also superior to traditional methods such as eigenfaces and Fisherfaces.

  16. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    Science.gov (United States)

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain.

  17. Time course of implicit processing and explicit processing of emotional faces and emotional words.

    Science.gov (United States)

    Frühholz, Sascha; Jellinghaus, Anne; Herrmann, Manfred

    2011-05-01

    Facial expressions are important emotional stimuli during social interactions. Symbolic emotional cues, such as affective words, also convey information regarding emotions that is relevant for social communication. Various studies have demonstrated fast decoding of emotions from words, as was shown for faces, whereas others report a rather delayed decoding of information about emotions from words. Here, we introduced an implicit (color naming) and explicit task (emotion judgment) with facial expressions and words, both containing information about emotions, to directly compare the time course of emotion processing using event-related potentials (ERP). The data show that only negative faces affected task performance, resulting in increased error rates compared to neutral faces. Presentation of emotional faces resulted in a modulation of the N170, the EPN and the LPP components and these modulations were found during both the explicit and implicit tasks. Emotional words only affected the EPN during the explicit task, but a task-independent effect on the LPP was revealed. Finally, emotional faces modulated source activity in the extrastriate cortex underlying the generation of the N170, EPN and LPP components. Emotional words led to a modulation of source activity corresponding to the EPN and LPP, but they also affected the N170 source on the right hemisphere. These data show that facial expressions affect earlier stages of emotion processing compared to emotional words, but the emotional value of words may have been detected at early stages of emotional processing in the visual cortex, as was indicated by the extrastriate source activity.

  18. Automatic recognition of emotions from facial expressions

    Science.gov (United States)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificial intelligence (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in the entertainment industry, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing accuracy. In addition, we have extended SVM to multiple dimensions while retaining its high accuracy rate. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
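
    As an example of the classification stage such a survey compares, a scikit-learn pipeline that trains an RBF-kernel SVM on precomputed filter-bank features (the paper's actual filters and its multi-dimensional SVM extension are not reproduced):

        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        def train_expression_classifier(X, y):
            # X: filter-bank feature vectors from resized face images; y: emotion labels.
            clf = make_pipeline(StandardScaler(),
                                SVC(kernel="rbf", C=10.0, gamma="scale"))
            clf.fit(X, y)
            return clf        # clf.predict(features) -> predicted emotion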

  19. A Novel Face Segmentation Algorithm from a Video Sequence for Real-Time Face Recognition

    Directory of Open Access Journals (Sweden)

    Sudhaker Samuel RD

    2007-01-01

    Full Text Available The first step in an automatic face recognition system is to localize the face region in a cluttered background and carefully segment the face from each frame of a video sequence. In this paper, we propose a fast and efficient algorithm for segmenting a face suitable for recognition from a video sequence. The cluttered background is first subtracted from each frame; then, in the foreground regions, a coarse face region is found using skin colour. Then, using a dynamic template matching approach, the face is efficiently segmented. The proposed algorithm is fast and suitable for real-time video sequences, and is invariant to large scale and pose variation. The segmented face is then handed over to a recognition algorithm based on principal component analysis and linear discriminant analysis. The online face detection, segmentation, and recognition algorithms take an average of 0.06 seconds on a 3.2 GHz P4 machine.

  20. 5-HTTLPR differentially predicts brain network responses to emotional faces

    DEFF Research Database (Denmark)

    Fisher, Patrick M; Grady, Cheryl L; Madsen, Martin K

    2015-01-01

    The effects of the 5-HTTLPR polymorphism on neural responses to emotionally salient faces have been studied extensively, focusing on amygdala reactivity and amygdala-prefrontal interactions. Despite compelling evidence that emotional face paradigms engage a distributed network of brain regions in...

  1. Facial expressions of emotions: recognition accuracy and affective reactions during late childhood.

    Science.gov (United States)

    Mancini, Giacomo; Agnoli, Sergio; Baldaro, Bruno; Bitti, Pio E Ricci; Surcinelli, Paola

    2013-01-01

    The present study examined the development of recognition ability and affective reactions to emotional facial expressions in a large sample of school-aged children (n = 504, ages 8-11 years). Specifically, the study aimed to investigate whether changes in emotion recognition ability and in the affective reactions associated with viewing facial expressions occur during late childhood. Moreover, because small but robust gender differences during late childhood have been proposed, the effects of gender on the development of emotion recognition and affective responses were examined. The results showed an overall increase in emotional face recognition ability from 8 to 11 years of age, particularly for neutral and sad expressions. However, the increase in sadness recognition was primarily due to the development of this recognition in boys. Moreover, our results indicate different developmental trends in males and females regarding the recognition of disgust. Last, developmental changes in affective reactions to emotional facial expressions were found. Whereas recognition ability increased over the developmental period studied, affective reactions elicited by facial expressions were characterized by a decrease in arousal over the course of late childhood.

  2. An Approach to Face Recognition of 2-D Images Using Eigen Faces and PCA

    Directory of Open Access Journals (Sweden)

    Annapurna Mishra

    2012-04-01

    Full Text Available Face detection is the task of finding any face in a given image; face recognition then identifies the detected face. Information contained in a face, such as identity, gender, expression, age, race and pose, can be analysed automatically by such a system. Face detection is normally performed on single images but can also be extended to video streams. As face images are normally upright, they can be described by a small set of 2-D characteristic views. Here, the face images are projected into a feature space, or 'face space', that encodes the variation among the known face images. The face space is defined by the 'eigenfaces', the eigenvectors of the face image set. This process can be used to recognize a new face in an unsupervised manner. This paper introduces an algorithm for effective face recognition that takes into consideration not only face extraction but also the mathematical calculations that bring the image into a simple, technical form. It can also be implemented in real time using data acquisition hardware and a software interface with face recognition systems. Face recognition can be applied to various domains, including security systems, personal identification, image and film processing, and human-computer interaction.

  3. Impaired processing of self-face recognition in anorexia nervosa.

    Science.gov (United States)

    Hirot, France; Lesage, Marine; Pedron, Lya; Meyer, Isabelle; Thomas, Pierre; Cottencin, Olivier; Guardia, Dewi

    2016-03-01

    Body image disturbances and massive weight loss are major clinical symptoms of anorexia nervosa (AN). The aim of the present study was to examine the influence of body changes and eating attitudes on self-face recognition ability in AN. Twenty-seven subjects suffering from AN and 27 control participants performed a self-face recognition task (SFRT). During the task, digital morphs between their own face and a gender-matched unfamiliar face were presented in a random sequence. Participants' self-face recognition failures, cognitive flexibility, body concern and eating habits were assessed with the Self-Face Recognition Questionnaire (SFRQ), Trail Making Test (TMT), Body Shape Questionnaire (BSQ) and Eating Disorder Inventory-2 (EDI-2), respectively. Subjects suffering from AN exhibited significantly greater difficulties than control participants in identifying their own face (p = 0.028). No significant difference was observed between the two groups for TMT (all p > 0.1, non-significant). Regarding predictors of self-face recognition skills, there was a negative correlation between SFRT and body mass index (p = 0.01) and a positive correlation between SFRQ and EDI-2 (p < 0.001) or BSQ (p < 0.001). Among factors involved, nutritional status and intensity of eating disorders could play a part in impaired self-face recognition.

  4. Are there differential deficits in facial emotion recognition between paranoid and non-paranoid schizophrenia? A signal detection analysis.

    Science.gov (United States)

    Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long

    2013-10-30

    This study assessed facial emotion recognition abilities in subjects with paranoid (PS) and non-paranoid (NPS) schizophrenia using signal detection theory. We explored the differential deficits in facial emotion recognition in 44 paranoid patients with schizophrenia and 30 non-paranoid patients, compared to 80 healthy controls. We used morphed faces with different intensities of emotion and computed the sensitivity index (d') for each emotion. The results showed that performance differed between the schizophrenia and healthy control groups in the recognition of both negative and positive affects. The PS group performed worse than the healthy control group but better than the NPS group in overall performance. Performance differed between the NPS and healthy control groups in the recognition of all basic emotions and neutral faces; between the PS and healthy control groups in the recognition of angry faces; and between the PS and NPS groups in the recognition of happiness, anger, sadness, disgust, and neutral affects. The facial emotion recognition impairment in schizophrenia may reflect a generalized deficit rather than a negative-emotion-specific deficit. The PS group performed worse than the control group, but better than the NPS group, in facial expression recognition, with differential deficits between PS and NPS patients.
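
    The sensitivity index is computed from hit and false-alarm rates as d' = z(HR) − z(FAR). A small worked example in Python with SciPy (the correction used by the authors is not stated; a standard log-linear correction is assumed so that rates of exactly 0 or 1 stay inside the domain of the inverse normal):

        from scipy.stats import norm

        def d_prime(hits, misses, false_alarms, correct_rejections):
            # d' = z(hit rate) - z(false-alarm rate), with log-linear correction.
            hr = (hits + 0.5) / (hits + misses + 1)
            far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
            return norm.ppf(hr) - norm.ppf(far)

        # e.g. 40 'fear' trials (30 hits) and 40 non-fear trials (8 false alarms):
        print(round(d_prime(30, 10, 8, 32), 2))   # -> 1.47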

  5. Memory for faces and voices varies as a function of sex and expressed emotion.

    Science.gov (United States)

    S Cortes, Diana; Laukka, Petri; Lindahl, Christina; Fischer, Håkan

    2017-01-01

    We investigated how memory for faces and voices (presented separately and in combination) varies as a function of sex and emotional expression (anger, disgust, fear, happiness, sadness, and neutral). At encoding, participants judged the expressed emotion of items in forced-choice tasks, followed by incidental Remember/Know recognition tasks. Results from 600 participants showed that accuracy (hits minus false alarms) was consistently higher for neutral compared to emotional items, whereas accuracy for specific emotions varied across the presentation modalities (i.e., faces, voices, and face-voice combinations). For the subjective sense of recollection ("remember" hits), neutral items received the highest hit rates only for faces, whereas for voices and face-voice combinations anger and fear expressions instead received the highest recollection rates. We also observed better accuracy for items by female expressers, and an own-sex bias whereby female participants displayed a memory advantage for female faces and face-voice combinations. Results further suggest that the own-sex bias can be explained by recollection, rather than familiarity, rates. Overall, the results show that memory for faces and voices may be influenced by the expressions that they carry, as well as by the sex of both items and participants. Emotion expressions may also enhance the subjective sense of recollection without enhancing memory accuracy.

  6. Impaired recognition of musical emotions and facial expressions following anteromedial temporal lobe excision.

    Science.gov (United States)

    Gosselin, Nathalie; Peretz, Isabelle; Hasboun, Dominique; Baulac, Michel; Samson, Séverine

    2011-10-01

    We have shown that an anteromedial temporal lobe resection can impair the recognition of scary music in a prior study (Gosselin et al., 2005). In other studies (Adolphs et al., 2001; Anderson et al., 2000), similar results have been obtained with fearful facial expressions. These findings suggest that scary music and fearful faces may be processed by common cerebral structures. To assess this possibility, we tested patients with unilateral anteromedial temporal excision and normal controls in two emotional tasks. In the task of identifying musical emotion, stimuli evoked either fear, peacefulness, happiness or sadness. Participants were asked to rate to what extent each stimulus expressed these four emotions on 10-point scales. The task of facial emotion included morphed stimuli whose expression varied from faint to more pronounced and evoked fear, happiness, sadness, surprise, anger or disgust. Participants were requested to select the appropriate label. Most patients were found to be impaired in the recognition of both scary music and fearful faces. Furthermore, the results in both tasks were correlated, suggesting a multimodal representation of fear within the amygdala. However, inspection of individual results showed that recognition of fearful faces can be preserved whereas recognition of scary music can be impaired. Such a dissociation found in two cases suggests that fear recognition in faces and in music does not necessarily involve exactly the same cerebral networks and this hypothesis is discussed in light of the current literature.

  7. Interpretation of emotionally ambiguous faces in older adults.

    Science.gov (United States)

    Bucks, Romola S; Garner, Matthew; Tarrant, Louise; Bradley, Brendan P; Mogg, Karin

    2008-11-01

    Research suggests that there is an age-related decline in the processing of negative emotional information, which may contribute to the reported decline in emotional problems in older people. We used a signal detection approach to investigate the effect of normal aging on the interpretation of ambiguous emotional facial expressions. High-functioning older and younger adults indicated which emotion they perceived when presented with morphed faces containing a 60% to 40% blend of two emotions (mixtures of happy, sad, or angry faces). They also completed measures of mood, perceptual ability, and cognitive functioning. Older and younger adults did not differ significantly in their ability to discriminate between positive and negative emotions. Response-bias measures indicated that older adults were significantly less likely than younger adults to report the presence of anger in angry-happy face blends. Results are discussed in relation to other research into age-related effects on emotion processing.

  8. A ROBUST EYE LOCALIZATION ALGORITHM FOR FACE RECOGNITION

    Institute of Scientific and Technical Information of China (English)

    Zhang Wencong; Li Xin; Yao Peng; Li Bin; Zhuang Zhenquan

    2008-01-01

    The accuracy of face alignment greatly affects the performance of a face recognition system. Since face alignment is usually conducted using eye positions, an algorithm for accurate eye localization is essential for accurate face recognition. In this paper, an algorithm is proposed for eye localization. First, an AdaBoost detector is adaptively trained to segment the eye region based on its characteristic gray-level distribution. After that, a fast radial symmetry operator is used to precisely locate the centers of the eyes. Experimental results show that the method can accurately locate the eyes, and that it is robust to variations in face pose, illumination, expression, and accessories.
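
    For a rough equivalent of the AdaBoost stage, OpenCV's bundled Haar cascades (which are AdaBoost-trained) can locate coarse eye regions; the radial-symmetry refinement described in the paper is not included in this sketch:

        import cv2

        # OpenCV's bundled Haar cascades are AdaBoost-trained detectors.
        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        eye_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_eye.xml")

        def locate_eyes(gray):
            eyes = []
            for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
                roi = gray[y:y + h // 2, x:x + w]     # eyes lie in the upper half
                for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi, 1.1, 5):
                    eyes.append((x + ex + ew // 2, y + ey + eh // 2))
            return eyes                                # coarse centres, to be refined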

  9. Method for Face-Emotion Retrieval Using A Cartoon Emotional Expression Approach

    Science.gov (United States)

    Kostov, Vlaho; Yanagisawa, Hideyoshi; Johansson, Martin; Fukuda, Shuichi

    A simple method for extracting emotion from a human face, as a form of non-verbal communication, was developed to cope with and optimize mobile communication in a globalized and diversified society. A cartoon face based model was developed and used to evaluate emotional content of real faces. After a pilot survey, basic rules were defined and student subjects were asked to express emotion using the cartoon face. Their face samples were then analyzed using principal component analysis and the Mahalanobis distance method. Feature parameters considered as having relations with emotions were extracted and new cartoon faces (based on these parameters) were generated. The subjects evaluated emotion of these cartoon faces again and we confirmed these parameters were suitable. To confirm how these parameters could be applied to real faces, we asked subjects to express the same emotions which were then captured electronically. Simple image processing techniques were also developed to extract these features from real faces and we then compared them with the cartoon face parameters. It is demonstrated via the cartoon face that we are able to express the emotions from very small amounts of information. As a result, real and cartoon faces correspond to each other. It is also shown that emotion could be extracted from still and dynamic real face images using these cartoon-based features.
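
    The Mahalanobis distance step is straightforward to reproduce; a NumPy sketch that scores a new feature vector against a class's training samples (the feature extraction itself is assumed done, and the pseudo-inverse is used in case the sample covariance is singular):

        import numpy as np

        def mahalanobis(x, samples):
            # Distance of feature vector x from the distribution of 'samples'.
            mu = samples.mean(axis=0)
            cov = np.cov(samples, rowvar=False)
            diff = x - mu
            return float(np.sqrt(diff @ np.linalg.pinv(cov) @ diff))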

  10. A Neural Model of Face Recognition: a Comprehensive Approach

    Science.gov (United States)

    Stara, Vera; Montesanto, Anna; Puliti, Paolo; Tascini, Guido; Sechi, Cristina

    Visual recognition of faces is an essential human behavior: we achieve optimal performance in everyday life, and it is this performance that allows us to establish the continuity of actors in our social life and to quickly identify and categorize people. This remarkable ability justifies the general interest in face recognition among researchers from different fields, and especially among designers of biometric identification systems able to recognize the features of a person's face against a background. Given the interdisciplinary nature of this topic, in this contribution we deal with face recognition through a comprehensive approach, with the purpose of reproducing some features of human performance relevant to face recognition, as evidenced by studies in psychophysics and neuroscience. This approach views face recognition as an emergent phenomenon resulting from the nonlinear interaction of a number of different features. For this reason, our model of face recognition is based on a computational system implemented as an artificial neural network. This synergy between neuroscience and engineering allowed us to implement a model that has biological plausibility, performs the same tasks as human subjects, and gives a possible account of human face perception and recognition. In this regard, the paper reports on an experimental study of the performance of a SOM-based neural network in a face recognition task, with reference both to the ability to learn to discriminate different faces and to the ability to recognize a face already encountered in the training phase when it is presented in a pose or with an expression differing from the one present in the training context.
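
    A minimal NumPy sketch of the SOM training loop at the heart of such a model (grid size, decay schedules and the face-feature input are all illustrative assumptions, not the authors' configuration):

        import numpy as np

        def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
            # data: (n_samples, n_features) face feature vectors.
            rng = np.random.default_rng(seed)
            h, w = grid
            weights = rng.random((h, w, data.shape[1]))
            coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                          indexing="ij"), axis=-1)
            for t in range(iters):
                x = data[rng.integers(len(data))]
                bmu = np.unravel_index(                     # best-matching unit
                    np.argmin(np.linalg.norm(weights - x, axis=2)), grid)
                frac = t / iters
                lr = lr0 * (1.0 - frac)                     # decaying learning rate
                sigma = sigma0 * (1.0 - frac) + 0.5         # shrinking neighbourhood
                d2 = ((coords - np.array(bmu)) ** 2).sum(axis=2)
                g = np.exp(-d2 / (2 * sigma ** 2))[..., None]
                weights += lr * g * (x - weights)
            return weights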

  11. Emotion Recognition Abilities and Empathy of Victims of Bullying

    Science.gov (United States)

    Woods, Sarah; Wolke, Dieter; Nowicki, Stephen; Hall, Lynne

    2009-01-01

    Objectives: Bullying is a form of systematic abuse by peers with often serious consequences for victims. Few studies have considered the role of emotion recognition abilities and empathic behaviour for different bullying roles. This study investigated physical and relational bullying involvement in relation to basic emotion recognition abilities,…

  12. Influences on Facial Emotion Recognition in Deaf Children

    Science.gov (United States)

    Sidera, Francesc; Amadó, Anna; Martínez, Laura

    2017-01-01

    This exploratory research is aimed at studying facial emotion recognition abilities in deaf children and how they relate to linguistic skills and the characteristics of deafness. A total of 166 participants (75 deaf) aged 3-8 years were administered the following tasks: facial emotion recognition, naming vocabulary and cognitive ability. The…

  13. Face Recognition Using Local Quantized Patterns and Gabor Filters

    Science.gov (United States)

    Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.

    2015-05-01

    The problem of face recognition in natural or artificial environments has received a great deal of attention from researchers over the last few years, and many methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to recognize the person accurately in difficult scenarios, e.g. low resolution, low contrast, pose variations, etc. We therefore propose an approach for accurate and robust face recognition using local quantized patterns and Gabor filters. Estimation of the eye centers is used as a preprocessing stage. Evaluation of our algorithm on different samples from the standardized FERET database shows that our method is invariant to general variations of lighting, expression, occlusion and aging. The proposed approach yields about a 20% increase in correct recognition accuracy compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters can significantly improve robustness to changes in lighting conditions.

  14. Tolerance of geometric distortions in infant's face recognition.

    Science.gov (United States)

    Yamashita, Wakayo; Kanazawa, So; Yamaguchi, Masami K

    2014-02-01

    The aim of the current study is to reveal the effect of global linear transformations (shearing, horizontal stretching, and vertical stretching) on the recognition of familiar faces (e.g., a mother's face) in 6- to 7-month-old infants. In this experiment, we applied the global linear transformations to both the infants' own mother's face and to a stranger's face, and we tested infants' preference between these faces. We found that only 7-month-old infants maintained preference for their own mother's face during the presentation of vertical stretching, while the preference for the mother's face disappeared during the presentation of shearing or horizontal stretching. These findings suggest that 7-month-old infants might not recognize faces based on calculating the absolute distance between facial features, and that the vertical dimension of facial features might be more related to infants' face recognition rather than the horizontal dimension.
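
    The three global linear transformations used as stimuli are ordinary affine maps; for illustration, shearing and stretching an image with OpenCV might look like this (the shear and stretch magnitudes below are placeholders, not the values used in the study):

        import cv2
        import numpy as np

        def shear(img, k=0.2):
            # Horizontal shear by factor k.
            h, w = img.shape[:2]
            M = np.float32([[1, k, 0], [0, 1, 0]])
            return cv2.warpAffine(img, M, (int(w + k * h), h))

        def stretch(img, sx=1.0, sy=1.3):
            # sx > 1 stretches horizontally, sy > 1 vertically.
            h, w = img.shape[:2]
            return cv2.resize(img, (int(w * sx), int(h * sy)))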

  15. 2D Methods for pose invariant face recognition

    CSIR Research Space (South Africa)

    Mokoena, Ntabiseng

    2016-12-01

    Full Text Available The ability to recognise face images under random pose is a task that is done effortlessly by human beings. However, for a computer system, recognising face images under varying poses still remains an open research area. Face recognition across pose...

  16. Impact of eye detection error on face recognition performance

    NARCIS (Netherlands)

    Dutta, A.; Günther, Manuel; El Shafey, Laurent; Veldhuis, Raymond N.J.; Spreeuwers, Lieuwe Jan

    2015-01-01

    The locations of the eyes are the most commonly used features to perform face normalisation (i.e. alignment of facial features), which is an essential preprocessing stage of many face recognition systems. In this study, the authors study the sensitivity of open source implementations of five face

  18. VIRTUAL AVATAR FOR EMOTION RECOGNITION IN PATIENTS WITH SCHIZOPHRENIA: A PILOT STUDY

    Directory of Open Access Journals (Sweden)

    Samuel Marcos Pablos

    2016-08-01

    Full Text Available Persons who suffer from schizophrenia have difficulties recognizing emotions in others' facial expressions, which affects their capacity for social interaction and hinders their social integration. Photographic images have traditionally been used to explore emotion recognition impairments in schizophrenia patients, but they lack the dynamism that is inherent to face-to-face social interaction. To overcome these limitations, the present work uses an animated virtual face. The avatar has the appearance of a highly realistic human face and is able to express different emotions dynamically, offering advantages over photograph-based approaches. We present the results of a pilot study intended to assess the validity of the interface as a tool for clinical psychiatrists. Twenty subjects with long-standing schizophrenia and 20 control subjects were invited to recognize a set of facial emotions shown by the virtual avatar and by images. The objective of the study is to explore the possibilities of using a realistic-looking avatar for the assessment of emotion recognition deficits in patients with schizophrenia. Our results suggest that the proposed avatar may be a suitable tool for the diagnosis and treatment of deficits in the facial recognition of emotions.

  19. Beyond emotion recognition deficits: A theory guided analysis of emotion processing in Huntington's disease.

    Science.gov (United States)

    Kordsachia, Catarina C; Labuschagne, Izelle; Stout, Julie C

    2017-02-01

    Deficits in facial emotion recognition in Huntington's disease (HD) have been extensively researched, however, a theory-based integration of these deficits into the broader picture of emotion processing is lacking. To describe the full extent of emotion processing deficits we reviewed the clinical research literature in HD, including a consideration of research in Parkinson's disease, guided by a theoretical model on emotion processing, the Component Process Model. Further, to contribute to understanding the mechanisms underlying deficient emotion recognition, we discussed the literature in light of specific emotion recognition theories. Current evidence from HD studies indicates deficits in the production of emotional facial expressions and alterations in subjective emotional experiences, in addition to emotion recognition deficits. Conceptual understanding of emotions remains relatively intact. Impaired recognition and expression of emotion in HD might be linked, whereas altered emotional experiences appear to be unrelated to emotion recognition. A key implication of this review is the need to take all the components of emotion processing into account to understand specific deficits in neurodegenerative diseases.

  20. Putting the face in context: Body expressions impact facial emotion processing in human infants

    Directory of Open Access Journals (Sweden)

    Purva Rajhans

    2016-06-01

    Full Text Available Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception.

  1. Multi-feature fusion for thermal face recognition

    Science.gov (United States)

    Bi, Yin; Lv, Mingsong; Wei, Yangjie; Guan, Nan; Yi, Wang

    2016-07-01

    Human face recognition has been researched for the last three decades. Face recognition with thermal images now attracts significant attention because it can be used in low-light or unilluminated environments. However, thermal face recognition performance is still insufficient for practical applications. One main reason is that most existing work leverages only a single feature to characterize a face in a thermal image. To solve this problem, we propose multi-feature fusion, a technique that combines multiple features in thermal face characterization and recognition. In this work, we designed a systematic way to combine four features: Local Binary Patterns, the Gabor jet descriptor, the Weber local descriptor, and a down-sampling feature. Experimental results show that our approach outperforms methods that leverage only a single feature and is robust to noise, occlusion, expression, low resolution, and different l1-minimization methods.
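
    As a rough illustration of the feature-level fusion idea, the sketch below concatenates stand-ins for the paper's descriptors into a single vector per image. It assumes a grayscale thermal face as a NumPy array and uses scikit-image; the parameter choices are illustrative, not the paper's.

    ```python
    import numpy as np
    from skimage.feature import local_binary_pattern
    from skimage.filters import gabor
    from skimage.transform import resize

    def fused_features(img):
        """Concatenate several descriptors of one grayscale thermal face."""
        # Histogram of uniform LBP codes (8 neighbours, radius 1)
        lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
        lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
        # Mean Gabor magnitude per column (a stand-in for a full Gabor jet)
        real, imag = gabor(img, frequency=0.2)
        gabor_feat = np.hypot(real, imag).mean(axis=0)
        # Down-sampling feature: the image shrunk to 8x8 and flattened
        down = resize(img, (8, 8), anti_aliasing=True).ravel()
        return np.concatenate([lbp_hist, gabor_feat, down])

    face = (np.random.rand(64, 64) * 255).astype(np.uint8)  # synthetic face
    features = fused_features(face)
    ```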

  2. Classification Accuracy of Neural Networks with PCA in Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Novakovic Jasmina

    2011-04-01

    Full Text Available This paper presents the classification accuracy of a neural network with principal component analysis (PCA) for feature selection in emotion recognition using facial expressions. Dimensionality reduction of a feature set is a common preprocessing step used in pattern recognition and classification applications. PCA is one of the popular methods used, and can be shown to be optimal under different optimality criteria. Experimental results, in which we achieved a recognition rate of approximately 85% when testing six emotions on a benchmark image data set, show that neural networks with PCA are effective in emotion recognition using facial expressions.
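
    A minimal sketch of this PCA-plus-neural-network pipeline, assuming scikit-learn and using synthetic data in place of the benchmark image set; the component count and layer size are illustrative, not the paper's.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.random((120, 32 * 32))    # stand-in for flattened face images
    y = rng.integers(0, 6, size=120)  # six emotion classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = make_pipeline(
        PCA(n_components=50),  # dimensionality reduction before the network
        MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
    )
    clf.fit(X_tr, y_tr)
    print("recognition rate:", clf.score(X_te, y_te))
    ```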

  3. SPEECH EMOTION RECOGNITION USING MODIFIED QUADRATIC DISCRIMINATION FUNCTION

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The Quadratic Discrimination Function (QDF) is commonly used in speech emotion recognition, and it proceeds on the premise that the input data are normally distributed. In this paper, we propose a transformation to normalize the emotional features and then derive a Modified QDF (MQDF) for speech emotion recognition. Features based on prosody and voice quality are extracted, and a Principal Component Analysis Neural Network (PCANN) is used to reduce the dimension of the feature vectors. The results show that voice quality features are an effective supplement for recognition, and that the proposed method improves the recognition rate effectively.
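
    For reference, the standard QDF that the paper modifies assigns a sample to the class minimizing g_i(x) = (x − μ_i)^T Σ_i^{-1} (x − μ_i) + ln|Σ_i|, which assumes Gaussian class-conditional densities. A minimal NumPy sketch (names are illustrative, and the paper's normalization step is omitted):

    ```python
    import numpy as np

    def qdf_score(x, mean, cov):
        """g(x) = (x - mu)^T Sigma^{-1} (x - mu) + ln|Sigma| (smaller is better)."""
        diff = x - mean
        return diff @ np.linalg.inv(cov) @ diff + np.log(np.linalg.det(cov))

    def qdf_classify(x, class_stats):
        """class_stats maps each emotion label to a (mean, covariance) pair."""
        return min(class_stats, key=lambda c: qdf_score(x, *class_stats[c]))

    rng = np.random.default_rng(1)
    stats = {e: (rng.random(4), np.eye(4)) for e in ["anger", "joy", "sadness"]}
    print(qdf_classify(rng.random(4), stats))
    ```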

  4. Automatic Emotion Recognition in Speech: Possibilities and Significance

    Directory of Open Access Journals (Sweden)

    Milana Bojanić

    2009-12-01

    Full Text Available Automatic speech recognition and spoken language understanding are crucial steps towards natural human-machine interaction. The main task of the speech communication process is the recognition of the word sequence, but the recognition of prosody, emotion and stress tags may be of particular importance as well. This paper discusses the possibilities of recognizing emotion from the speech signal in order to improve ASR, and also provides an analysis of acoustic features that can be used for the detection of a speaker's emotion and stress. The paper also provides a short overview of emotion and stress classification techniques. The importance and place of emotional speech recognition is shown in the domain of human-computer interactive systems and the transaction communication model. Directions for future work are given at the end of the paper.

  5. Effect of positive emotion on consolidation of memory for faces: the modulation of facial valence and facial gender.

    Science.gov (United States)

    Wang, Bo

    2013-01-01

    Studies have shown that emotion elicited after learning enhances memory consolidation. However, no prior studies have used facial photos as stimuli. This study examined the effect of post-learning positive emotion on the consolidation of memory for faces. During learning, participants viewed neutral, positive, or negative faces. They were then assigned to a condition in which they watched either a 9-minute positive video clip or a 9-minute neutral video. Thirty minutes after learning, participants took a surprise memory test, in which they made "remember", "know", and "new" judgments. The findings are: (1) positive emotion enhanced consolidation of recognition for negative male faces, but impaired consolidation of recognition for negative female faces; (2) for males, recognition of negative faces was equivalent to that of positive faces; for females, recognition of negative faces was better than that of positive faces. Our study provides important evidence that the effect of post-learning emotion on memory consolidation can extend to facial stimuli and that this effect can be modulated by facial valence and facial gender. The findings may shed light on establishing models concerning the influence of emotion on memory consolidation.

  6. 3D face database for human pattern recognition

    Science.gov (United States)

    Song, LiMei; Lu, Lu

    2008-10-01

    Face recognition is essential work for ensuring human safety, and it is also an important task in biomedical engineering. A 2D image is not sufficient for precise face recognition; 3D face data include more exact information, such as the precise size of the eyes, mouth, etc. A 3D face database is therefore an important part of human pattern recognition. There are many methods for acquiring 3D data, such as 3D laser scanning, 3D phase measurement, shape from shading, and shape from motion. This paper introduces a non-orbit, non-contact, non-laser 3D measurement system. The main idea derives from the shape-from-stereo technique: two cameras are used at different angles, and a sequence of light patterns is projected onto the face. The human face, head, teeth, and body can all be measured by the system. The visualization data of each person can form a large 3D face database, which can be used in human recognition. Because the 3D data provide a vivid copy of a face, recognition accuracy can reach 100%. Although 3D data are larger than 2D images, they can be used in settings that involve only a few people, such as recognition within a family or a small company.

  7. Facial Emotion Recognition Using Context Based Multimodal Approach

    Directory of Open Access Journals (Sweden)

    Priya Metri

    2011-12-01

    Full Text Available Emotions play a crucial role in person-to-person interaction. In recent years, there has been growing interest in improving all aspects of interaction between humans and computers. The ability to understand human emotions is desirable for the computer in several applications, especially by observing facial expressions. This paper explores ways of human-computer interaction that enable the computer to be more aware of the user's emotional expressions. We present an approach for emotion recognition from facial expression, hand and body posture. Our model uses a multimodal emotion recognition system with two different models, one for facial expression recognition and one for hand and body posture recognition; the results of both classifiers are then combined by a third classifier, which outputs the resulting emotion. The multimodal system gives more accurate results than a single-modality or bimodal system.
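
    A minimal sketch of this two-stage fusion, with synthetic stand-ins for the facial and postural feature vectors and generic scikit-learn classifiers in place of the paper's models:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(2)
    X_face = rng.random((200, 40))    # stand-in facial-expression features
    X_body = rng.random((200, 20))    # stand-in hand/body-posture features
    y = rng.integers(0, 5, size=200)  # five emotion classes

    face_clf = LogisticRegression(max_iter=1000).fit(X_face, y)
    body_clf = RandomForestClassifier(random_state=0).fit(X_body, y)

    # The third classifier fuses the two probability vectors. In practice it
    # should be trained on held-out predictions to avoid leakage.
    meta_X = np.hstack([face_clf.predict_proba(X_face),
                        body_clf.predict_proba(X_body)])
    meta_clf = LogisticRegression(max_iter=1000).fit(meta_X, y)
    print(meta_clf.predict(meta_X[:3]))
    ```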

  8. Young and older emotional faces: are there age group differences in expression identification and memory?

    Science.gov (United States)

    Ebner, Natalie C; Johnson, Marcia K

    2009-06-01

    Studies have found that older compared with young adults are less able to identify facial expressions and have worse memory for negative than for positive faces, but those studies have used only young faces. Studies finding that both age groups are more accurate at recognizing faces of their own than other ages have used mostly neutral faces. Thus, age differences in processing faces may not extend to older faces, and preferential memory for own age faces may not extend to emotional faces. To investigate these possibilities, young and older participants viewed young and older faces presented either with happy, angry, or neutral expressions; participants identified the expressions displayed and then completed a surprise face recognition task. Older compared with young participants were less able to identify expressions of angry young and older faces and (based on participants' categorizations) remembered angry faces less well than happy faces. There was no evidence of an own age bias in memory, but self-reported frequency of contact with young and older adults and awareness of own emotions played a role in expression identification of and memory for young and older faces.

  9. Primary vision and facial emotion recognition in early Parkinson's disease.

    Science.gov (United States)

    Hipp, Géraldine; Diederich, Nico J; Pieria, Vannina; Vaillant, Michel

    2014-03-15

    In early stages of idiopathic Parkinson's disease (IPD), lower order vision (LOV) deficits, including reduced colour and contrast discrimination, have been consistently reported. Data are less conclusive concerning higher order vision (HOV) deficits, especially for facial emotion recognition (FER). However, a link between the two visual levels has been hypothesized. We aimed to screen for both levels of visual impairment in early IPD. We prospectively recruited 28 IPD patients with a disease duration of 1.4+/-0.8 years and 25 healthy controls. LOV was evaluated by the Farnsworth-Munsell 100 Hue Test, Vis-Tech and the Pelli-Robson test. HOV was examined by the Ekman 60 Faces Test and part A of the Visual Object and Space recognition test. IPD patients performed worse than controls on almost all LOV tests. The most prominent difference was seen for contrast perception at the lowest spatial frequency (p=0.0002). Concerning FER, IPD patients showed reduced recognition of "sadness" (p=0.01). "Fear" perception was correlated with perception of low contrast sensitivity in IPD patients within the lowest performance quartile. Controls showed a much stronger link between "fear" perception and low contrast detection. At the early IPD stage there are marked deficits in LOV performance, while HOV performance is still intact, with the exception of reduced recognition of "sadness". At this stage, IPD patients still seem to compensate for the deficient input of low contrast sensitivity, which is known to be pivotal for the appreciation of negative facial emotions and was confirmed as such for healthy controls in this study. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  10. Efficient Face Recognition in Video by Bit Planes Slicing

    Directory of Open Access Journals (Sweden)

    Srinivasa R. Inbathini

    2012-01-01

    Full Text Available Problem statement: Video-based face recognition must be able to overcome imaging interference such as pose and illumination variation. Approach: A model was designed for face recognition based on video sequences as well as test images. In the training stage, a single frontal image is taken as input to the recognition system. A new virtual image is generated using bit-plane feature fusion to effectively reduce sensitivity to illumination variance. A self-PCA is performed to obtain each set of eigenfaces and the projected images. In the recognition stage, an automatic face detection scheme is first applied to the video sequences. Frames are extracted from the video and a virtual frame is created. Each bit plane of the test face is extracted and the feature-fusion face is constructed, followed by projection and reconstruction using each set of the corresponding eigenfaces. Results: The algorithm was compared with the conventional PCA algorithm. The minimum reconstruction error is calculated; if the error is less than a threshold value, the system recognizes the face from the database. Conclusion: Bit-plane slicing is applied to video-based face recognition. Experimental results show that it is far superior to the conventional method under various pose and illumination conditions.
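
    The core bit-plane operation is simple to state in NumPy. The sketch below decomposes an 8-bit image into its planes and recombines the higher-order ones into a virtual image; the choice of planes to fuse is an assumption for illustration, not the paper's exact fusion rule.

    ```python
    import numpy as np

    def bit_planes(img):
        """Return the 8 binary planes of a uint8 image, index 0 = least significant."""
        return [(img >> k) & 1 for k in range(8)]

    img = ((np.arange(64).reshape(8, 8) * 37) % 256).astype(np.uint8)
    planes = bit_planes(img)
    # Higher-order planes carry most of the structure; recombining only them
    # is one simple way to build an illumination-insensitive virtual image.
    virtual = sum(planes[k] << k for k in range(4, 8)).astype(np.uint8)
    ```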

  11. Video-based face recognition via convolutional neural networks

    Science.gov (United States)

    Bao, Tianlong; Ding, Chunhui; Karmoshi, Saleem; Zhu, Ming

    2017-06-01

    Face recognition has been widely studied in recent years, while video-based face recognition remains a challenging task because of the low quality and large intra-class variation of face images captured from video. In this paper, we focus on two scenarios of video-based face recognition: 1) Still-to-Video (S2V) face recognition, i.e., querying a still face image against a gallery of video sequences; and 2) Video-to-Still (V2S) face recognition, the converse of the S2V scenario. A novel method is proposed to transfer still and video face images to a Euclidean space by a carefully designed convolutional neural network, after which Euclidean metrics are used to measure the distance between still and video images. Identities of still and video images that are grouped as pairs are used as supervision. In the training stage, a joint loss function that measures the Euclidean distance between the predicted features of training pairs and expanding vectors of still images is optimized to minimize the intra-class variation, while the inter-class variation is guaranteed by the large margin of still images. The transferred features are finally learned via the designed convolutional neural network. Experiments are performed on the COX face dataset. Experimental results show that our method achieves reliable performance compared with other state-of-the-art methods.
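
    Once such a mapping is learned, matching reduces to nearest-neighbour search in the shared embedding space. A toy sketch, with a random linear projection standing in for the trained network:

    ```python
    import numpy as np

    def embed(x, W):
        return x @ W  # placeholder for the trained CNN mapping

    rng = np.random.default_rng(3)
    W = rng.random((128, 32))
    gallery = {name: embed(rng.random(128), W) for name in ["alice", "bob"]}
    probe = embed(rng.random(128), W)  # e.g., a feature from video frames
    match = min(gallery, key=lambda n: np.linalg.norm(gallery[n] - probe))
    print("matched identity:", match)
    ```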

  12. The Impact of Early Bilingualism on Face Recognition Processes

    Science.gov (United States)

    Kandel, Sonia; Burfin, Sabine; Méary, David; Ruiz-Tada, Elisa; Costa, Albert; Pascalis, Olivier

    2016-01-01

    Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker’s face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other-Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race), and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals’ face processing abilities differ from monolinguals’. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation. PMID:27486422

  13. The impact of early bilingualism on face recognition processes

    Directory of Open Access Journals (Sweden)

    Sonia Kandel

    2016-07-01

    Full Text Available Early linguistic experience has an impact on the way we decode audiovisual speech in face-to-face communication. The present study examined whether differences in visual speech decoding could be linked to a broader difference in face processing. To identify a phoneme we have to do an analysis of the speaker’s face to focus on the relevant cues for speech decoding (e.g., locating the mouth with respect to the eyes). Face recognition processes were investigated through two classic effects in face recognition studies: the Other Race Effect (ORE) and the Inversion Effect. Bilingual and monolingual participants did a face recognition task with Caucasian faces (own race), Chinese faces (other race) and cars that were presented in an Upright or Inverted position. The results revealed that monolinguals exhibited the classic ORE. Bilinguals did not. Overall, bilinguals were slower than monolinguals. These results suggest that bilinguals’ face processing abilities differ from monolinguals’. Early exposure to more than one language may lead to a perceptual organization that goes beyond language processing and could extend to face analysis. We hypothesize that these differences could be due to the fact that bilinguals focus on different parts of the face than monolinguals, making them more efficient in other race face processing but slower. However, more studies using eye-tracking techniques are necessary to confirm this explanation.

  14. Does aging impair first impression accuracy? Differentiating emotion recognition from complex social inferences.

    Science.gov (United States)

    Krendl, Anne C; Rule, Nicholas O; Ambady, Nalini

    2014-09-01

    Young adults can be surprisingly accurate at making inferences about people from their faces. Although these first impressions have important consequences for both the perceiver and the target, it remains an open question whether first impression accuracy is preserved with age. Specifically, could age differences in impressions toward others stem from age-related deficits in accurately detecting complex social cues? Research on aging and impression formation suggests that young and older adults show relative consensus in their first impressions, but it is unknown whether they differ in accuracy. It has been widely shown that aging disrupts emotion recognition accuracy, and that these impairments may predict deficits in other social judgments, such as detecting deceit. However, it is unclear whether general impression formation accuracy (e.g., emotion recognition accuracy, detecting complex social cues) relies on similar or distinct mechanisms. It is important to examine this question to evaluate how, if at all, aging might affect overall accuracy. Here, we examined whether aging impaired first impression accuracy in predicting real-world outcomes and categorizing social group membership. Specifically, we studied whether emotion recognition accuracy and age-related cognitive decline (which has been implicated in exacerbating deficits in emotion recognition) predict first impression accuracy. Our results revealed that emotion recognition accuracy did not predict first impression accuracy, nor did age-related cognitive decline impair it. These findings suggest that domains of social perception outside of emotion recognition may rely on mechanisms that are relatively unimpaired by aging.

  15. Automatic landmark detection and face recognition for side-view face images

    NARCIS (Netherlands)

    Santemiz, Pinar; Spreeuwers, Luuk J.; Veldhuis, Raymond N.J.; Broemme, Arslan; Busch, Christoph

    2013-01-01

    In real-life scenarios where pose variation is up to side-view positions, face recognition becomes a challenging task. In this paper we propose an automatic side-view face recognition system designed for home-safety applications. Our goal is to recognize people as they pass through doors in order to

  16. Face Feature Extraction for Recognition Using Radon Transform

    Directory of Open Access Journals (Sweden)

    Justice Kwame Appati

    2016-07-01

    Full Text Available Face recognition has for some time been a challenging exercise, especially when it comes to recognizing faces in different poses. This is perhaps due to the use of inappropriate descriptors during the feature extraction stage. In this paper, a thorough examination of the Radon transform as a face signature descriptor was carried out on one of the standard databases. Global features were considered by constructing Gray Level Co-occurrence Matrices (GLCMs). Correlation, energy, homogeneity and contrast are computed from each image to form the feature vector for recognition. We show that the transformed face signatures are robust and invariant to different poses. With the statistical features extracted, face training classes are optimally separated through the use of a Support Vector Machine (SVM), while the recognition rate for test face images is computed based on the L1 norm.
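
    A sketch of this feature pipeline using scikit-image, with quantization and GLCM parameters chosen for illustration; an SVM (e.g., sklearn.svm.SVC) would then be trained on the resulting four-dimensional vectors.

    ```python
    import numpy as np
    from skimage.transform import radon
    from skimage.feature import graycomatrix, graycoprops

    def radon_glcm_features(face):
        """Radon-transform a face, then extract GLCM statistics of the sinogram."""
        sinogram = radon(face, theta=np.arange(0.0, 180.0))
        # Quantise the sinogram to 8 bits for the co-occurrence matrix
        q = np.uint8(255 * (sinogram - sinogram.min()) / (np.ptp(sinogram) + 1e-9))
        glcm = graycomatrix(q, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        props = ["correlation", "energy", "homogeneity", "contrast"]
        return np.array([graycoprops(glcm, p)[0, 0] for p in props])

    vec = radon_glcm_features(np.random.rand(64, 64))  # synthetic face
    ```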

  17. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition

    Directory of Open Access Journals (Sweden)

    Rong Wang

    2015-01-01

    Full Text Available In real-world applications, the image of faces varies with illumination, facial expression, and poses. It seems that more training samples are able to reveal possible images of the faces. Though minimum squared error classification (MSEC) is a widely used method, its applications on face recognition usually suffer from the problem of a limited number of training samples. In this paper, we improve MSEC by using the mirror faces as virtual training samples. We obtained the mirror faces generated from original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method does obtain high accuracy performance in classification.
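
    The mirror-face augmentation itself is a one-line flip. A minimal sketch, with a least-squares projection standing in for the minimum squared error classifier:

    ```python
    import numpy as np

    def augment_with_mirrors(images, labels):
        """images: (n, h, w). Adds a left-right flipped copy of every face."""
        mirrored = images[:, :, ::-1]
        return np.concatenate([images, mirrored]), np.concatenate([labels, labels])

    def fit_mse_classifier(X, Y):
        """Least-squares map from flattened faces X to one-hot labels Y."""
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return W  # predict a new face x with np.argmax(x @ W)
    ```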

  18. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    Science.gov (United States)

    Wang, Rong

    2015-01-01

    In real-world applications, the image of faces varies with illumination, facial expression, and poses. It seems that more training samples are able to reveal possible images of the faces. Though minimum squared error classification (MSEC) is a widely used method, its applications on face recognition usually suffer from the problem of a limited number of training samples. In this paper, we improve MSEC by using the mirror faces as virtual training samples. We obtained the mirror faces generated from original training samples and put these two kinds of samples into a new set. The face recognition experiments show that our method does obtain high accuracy performance in classification.

  19. Emotional contexts modulate intentional memory suppression of neutral faces: Insights from ERPs.

    Science.gov (United States)

    Pierguidi, Lapo; Righi, Stefania; Gronchi, Giorgio; Marzi, Tessa; Caharel, Stephanie; Giovannelli, Fabio; Viggiano, Maria Pia

    2016-08-01

    The main goal of the present work is to gain new insight into the temporal dynamics underlying voluntary memory control for neutral faces associated with neutral, positive and negative contexts. A directed forgetting (DF) procedure was used during EEG recording to answer the question of whether it is possible to forget a face that has been encoded within a particular emotional context. A face-scene phase, in which a neutral face was shown in a neutral or emotional scene (positive, negative), was followed by a voluntary memory cue (cue phase) indicating whether the face was to be remembered or to be forgotten (TBR and TBF). Memory for faces was then assessed with an old/new recognition task. Behaviorally, we found that it is harder to suppress faces-in-positive-scenes than faces-in-negative- and neutral-scenes. The temporal information obtained from the ERPs showed: 1) during the face-scene phase, the Late Positive Potential (LPP), which indexes motivated emotional attention, was larger for faces-in-negative-scenes compared to faces-in-neutral-scenes; 2) remarkably, during the cue phase, ERPs were significantly modulated by the emotional contexts. Faces-in-neutral-scenes showed an ERP pattern that has typically been associated with the DF effect, whereas faces-in-positive-scenes elicited the reverse ERP pattern. Faces-in-negative-scenes did not show differences in DF-related neural activity, but a larger N1 amplitude for TBF vs. TBR faces may index early attentional deployment. These results support the hypothesis that the pleasantness or unpleasantness of the contexts (through attentional broadening and narrowing mechanisms, respectively) may modulate the effectiveness of intentional memory suppression of neutral information. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Face Recognition Combining Eigen Features with a Parzen Classifier

    Institute of Scientific and Technical Information of China (English)

    SUN Xin; LIU Bing; LIU Ben-yong

    2005-01-01

    A face recognition scheme is proposed wherein a face image is preprocessed by pixel averaging and energy normalization to reduce data dimension and the effect of brightness variation, followed by a Fourier transform to estimate the spectrum of the preprocessed image. Principal component analysis is conducted on the spectra of a face image to obtain eigen features. Combining the eigen features with a Parzen classifier, experiments are conducted on the ORL face database.
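
    A Parzen (kernel density) classifier estimates each class's density by placing a kernel on every training sample and picks the most probable class. A minimal Gaussian-kernel sketch, with the bandwidth h as an assumed tuning parameter (shared normalization constants cancel across classes and are omitted):

    ```python
    import numpy as np

    def parzen_log_density(x, samples, h=1.0):
        """Gaussian-kernel density estimate of x given one class's samples."""
        sq = np.sum((samples - x) ** 2, axis=1)
        return np.log(np.mean(np.exp(-sq / (2 * h ** 2))) + 1e-300)

    def parzen_classify(x, class_features, h=1.0):
        """class_features maps each identity to its (n_i, d) eigen features."""
        return max(class_features,
                   key=lambda c: parzen_log_density(x, class_features[c], h))
    ```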

  1. Facial emotion recognition in adolescents with personality pathology.

    Science.gov (United States)

    Berenschot, Fleur; van Aken, Marcel A G; Hessels, Christel; de Castro, Bram Orobio; Pijl, Ysbrand; Montagne, Barbara; van Voorst, Guus

    2014-07-01

    It has been argued that a heightened emotional sensitivity interferes with the cognitive processing of facial emotion recognition and may explain the intensified emotional reactions to external emotional stimuli of adults with personality pathology, such as borderline personality disorder (BPD). This study examines if and how deviations in facial emotion recognition also occur in adolescents with personality pathology. Forty-two adolescents with personality pathology, 111 healthy adolescents and 28 psychiatric adolescents without personality pathology completed the Emotion Recognition Task, measuring their accuracy and sensitivity in recognizing positive and negative emotion expressions presented at several morphed expression intensities. Adolescents with personality pathology showed enhanced recognition accuracy of facial emotion expressions compared to healthy adolescents and clients with various Axis-I psychiatric diagnoses. They were also more sensitive to less intense expressions of emotions than clients with various Axis-I psychiatric diagnoses, but not more than healthy adolescents. As has been shown in research on adults with BPD, adolescents with personality pathology show enhanced facial emotion recognition.

  2. Improving Negative Emotion Recognition in Young Offenders Reduces Subsequent Crime.

    Directory of Open Access Journals (Sweden)

    Kelly Hubble

    Full Text Available Children with antisocial behaviour show deficits in the perception of emotional expressions in others that may contribute to the development and persistence of antisocial and aggressive behaviour. Current treatments for antisocial youngsters are limited in effectiveness. It has been argued that more attention should be devoted to interventions that target neuropsychological correlates of antisocial behaviour. This study examined the effect of emotion recognition training on criminal behaviour. Emotion recognition and crime levels were studied in 50 juvenile offenders. Whilst all young offenders received their statutory interventions as the study was conducted, a subgroup of twenty-four offenders also took part in a facial affect training aimed at improving emotion recognition. Offenders in the training and control groups were matched for age, SES, IQ and lifetime crime level. All offenders were tested twice for emotion recognition performance, and recent crime data were collected after the testing had been completed. Before the training there were no differences between the groups in emotion recognition, with both groups displaying poor fear, sadness and anger recognition. After the training, fear, sadness and anger recognition improved significantly in juvenile offenders in the training group. Although crime rates dropped in all offenders in the 6 months following emotion testing, only the group of offenders who had received the emotion training showed a significant reduction in the severity of the crimes they committed. The study indicates that emotion recognition can be relatively easily improved in youths who engage in serious antisocial and criminal behaviour. The results suggest that improved emotion recognition has the potential to reduce the severity of reoffending.

  3. Hybrid SVM/HMM Method for Face Recognition

    Institute of Scientific and Technical Information of China (English)

    刘江华; 陈佳品; 程君实

    2004-01-01

    A face recognition system based on the Support Vector Machine (SVM) and Hidden Markov Model (HMM) is proposed. The powerful discriminative ability of the SVM is combined with the temporal modeling ability of the HMM. The output of the SVM is converted to a probability output, which replaces the Mixture of Gaussians (MoG) in the HMM. Wavelet transformation is used to extract the observation vectors, which reduces the data dimension and improves robustness. The hybrid system is compared with a pure HMM face recognition method on the ORL and Yale face databases. Experimental results show that the hybrid method has better performance.

  4. Iterative closest normal point for 3D face recognition.

    Science.gov (United States)

    Mohammadzade, Hoda; Hatzinakos, Dimitrios

    2013-02-01

    The common approach to 3D face recognition is to register a probe face to each of the gallery faces and then calculate the sum of the distances between their points. This approach is computationally expensive and sensitive to facial expression variation. In this paper, we introduce the iterative closest normal point method for finding the corresponding points between a generic reference face and every input face. The proposed correspondence-finding method samples a set of points for each face, denoted as the closest normal points. These points are effectively aligned across all faces, enabling effective application of discriminant analysis methods for 3D face recognition. As a result, the expression variation problem is addressed by minimizing the within-class variability of the face samples while maximizing the between-class variability. As an important conclusion, we show that the surface normal vectors of the face at the sampled points contain more discriminatory information than the coordinates of the points. We have performed comprehensive experiments on the Face Recognition Grand Challenge database, which is presently the largest available 3D face database. We have achieved verification rates of 99.6 and 99.2 percent at a false acceptance rate of 0.1 percent for the all-versus-all and ROC III experiments, respectively, which, to the best of our knowledge, are seven and four times lower error rates, respectively, than the best existing methods on this database.

  5. Feature based sliding window technique for face recognition

    Science.gov (United States)

    Javed, Muhammad Younus; Mohsin, Syed Maajid; Anjum, Muhammad Almas

    2010-02-01

    Human beings are commonly identified by biometric schemes, which identify individuals by their unique physical characteristics. Passwords and personal identification numbers have been used for years to verify identity; their disadvantages are that someone else may use them or that they can easily be forgotten. In view of these problems, biometric approaches such as face recognition, fingerprint, iris/retina and voice recognition have been developed, which provide a far better solution for identifying individuals. A number of methods have been developed for face recognition. This paper illustrates the employment of Gabor filters for extracting facial features by constructing a sliding window frame. Classification is done by assigning to the unknown image the label of the class whose stored database image shares the most similar features. The proposed system gives a recognition rate of 96%, which is better than many similar techniques used for face recognition.
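
    A rough sketch of Gabor feature extraction over sliding windows using OpenCV; the filter-bank parameters and window sizes are illustrative, not the paper's.

    ```python
    import cv2
    import numpy as np

    def gabor_bank(ksize=21):
        """Four-orientation Gabor kernels (parameters are illustrative)."""
        thetas = np.arange(0, np.pi, np.pi / 4)
        return [cv2.getGaborKernel((ksize, ksize), sigma=4.0, theta=t,
                                   lambd=10.0, gamma=0.5, psi=0) for t in thetas]

    def sliding_window_features(gray, win=32, step=16):
        """Mean Gabor response of each window, for all orientations."""
        responses = [cv2.filter2D(gray, cv2.CV_32F, k) for k in gabor_bank()]
        feats = []
        for y in range(0, gray.shape[0] - win + 1, step):
            for x in range(0, gray.shape[1] - win + 1, step):
                feats.append([r[y:y + win, x:x + win].mean() for r in responses])
        return np.array(feats)

    feats = sliding_window_features(np.random.rand(64, 64).astype(np.float32))
    ```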

  6. Understanding eye movements in face recognition using hidden Markov models.

    Science.gov (United States)

    Chuk, Tim; Chan, Antoni B; Hsiao, Janet H

    2014-09-16

    We use a hidden Markov model (HMM) based approach to analyze eye movement data in face recognition. HMMs are statistical models specialized in handling time-series data. We conducted a face recognition task with Asian participants and modeled each participant's eye movement pattern with an HMM, which summarized the participant's scan paths in face recognition with both regions of interest and the transition probabilities among them. By clustering these HMMs, we showed that participants' eye movements could be categorized into holistic or analytic patterns, demonstrating significant individual differences even within the same culture. Participants with the analytic pattern had longer response times, but did not differ significantly in recognition accuracy from those with the holistic pattern. We also found that correct and incorrect recognitions were associated with distinctive eye movement patterns; the difference between the two patterns lies in the transitions rather than in the locations of the fixations alone.
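
    A toy version of this analysis can be built with the third-party hmmlearn package: fixation sequences from several trials are concatenated and a Gaussian HMM is fitted, its hidden states playing the role of regions of interest. The data here is synthetic.

    ```python
    import numpy as np
    from hmmlearn import hmm  # third-party package

    rng = np.random.default_rng(4)
    # Two trials of (x, y) fixation coordinates (synthetic stand-in data)
    trials = [rng.random((12, 2)), rng.random((9, 2))]
    X = np.concatenate(trials)

    # Three hidden states play the role of regions of interest on the face
    model = hmm.GaussianHMM(n_components=3, covariance_type="full", n_iter=50)
    model.fit(X, lengths=[len(t) for t in trials])
    roi_sequence = model.predict(trials[0])  # state sequence of one scan path
    ```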

  7. Toward End-to-End Face Recognition Through Alignment Learning

    Science.gov (United States)

    Zhong, Yuanyi; Chen, Jiansheng; Huang, Bo

    2017-08-01

    Plenty of effective methods have been proposed for face recognition during the past decade. Although these methods differ essentially in many aspects, a common practice among them is to specifically align the facial area, based on prior knowledge of human face structure, before feature extraction. In most systems, the face alignment module is implemented independently. This has caused difficulties in the design and training of end-to-end face recognition models. In this paper we study the possibility of alignment learning in end-to-end face recognition, in which neither prior knowledge of facial landmarks nor artificially defined geometric transformations are required. Specifically, spatial transformer layers are inserted in front of the feature extraction layers in a Convolutional Neural Network (CNN) for face recognition. Only human identity clues are used to drive the neural network to automatically learn the most suitable geometric transformation and the most appropriate facial area for the recognition task. To ensure reproducibility, our model is trained purely on the publicly available CASIA-WebFace dataset and is tested on the Labeled Faces in the Wild (LFW) dataset. We have achieved a verification accuracy of 99.08%, which is comparable to state-of-the-art single-model based methods.
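
    A minimal PyTorch sketch of the idea: a small localisation network predicts an affine transform that warps the input before it reaches the recognition layers. The architecture and sizes are illustrative, not the paper's.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class STNFront(nn.Module):
        """Learns an affine alignment; its output feeds the recognition CNN."""
        def __init__(self):
            super().__init__()
            self.loc = nn.Sequential(                  # tiny localisation net
                nn.Conv2d(1, 8, 7), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
                nn.Flatten(), nn.Linear(8 * 16, 6),
            )
            self.loc[-1].weight.data.zero_()           # start at the identity
            self.loc[-1].bias.data.copy_(
                torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

        def forward(self, x):
            theta = self.loc(x).view(-1, 2, 3)         # predicted affine matrix
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            return F.grid_sample(x, grid, align_corners=False)

    aligned = STNFront()(torch.rand(2, 1, 64, 64))     # then: recognition layers
    ```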

  8. 3D Face Recognition with Sparse Spherical Representations

    CERN Document Server

    Llonch, R Sala; Tosic, I; Frossard, P

    2008-01-01

    This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. This permits representing each 3D face by only a few spherical functions that are able to capture the salient facial characteristics and hence preserve the discriminant facial information. We finally perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can further be applied for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images.

  9. Robust face recognition algorithm for identification of disaster victims

    Science.gov (United States)

    Gevaert, Wouter J. R.; de With, Peter H. N.

    2013-02-01

    We present a robust face recognition algorithm for the identification of occluded, injured and mutilated faces with a limited training set per person. In such cases, conventional face recognition methods fall short due to specific aspects of the classification. The proposed algorithm involves recursive Principal Component Analysis for the reconstruction of affected facial parts, followed by a feature extractor based on Gabor wavelets and uniform multi-scale Local Binary Patterns. As a classifier, a Radial Basis Neural Network is employed. In terms of robustness to facial abnormalities, tests show that the proposed algorithm outperforms conventional face recognition algorithms like the Eigenfaces approach, Local Binary Patterns and the Gabor magnitude method. To mimic the real-life conditions in which the algorithm would have to operate, specific databases have been constructed, merged with partial existing databases, and jointly compiled. Experiments on these particular databases show that the proposed algorithm achieves recognition rates beyond 95%.

  10. Feature Extraction based Face Recognition, Gender and Age Classification

    Directory of Open Access Journals (Sweden)

    Venugopal K R

    2010-01-01

    Full Text Available A face recognition system with large training sets for personal identification normally attains good accuracy. In this paper, we propose the Feature Extraction based Face Recognition, Gender and Age Classification (FEBFRGAC) algorithm, which requires only small training sets and yields good results even with one image per person. The process involves three stages: pre-processing, feature extraction and classification. The geometric features of facial images, such as the eyes, nose and mouth, are located using the Canny edge operator, and face recognition is performed. Based on texture and shape information, gender and age classification is done using posteriori class probability and an artificial neural network, respectively. It is observed that face recognition accuracy is 100%, while gender and age classification accuracy is around 98% and 94%, respectively.

  11. An Approach to Face Recognition of 2-D Images Using Eigen Faces and PCA

    Directory of Open Access Journals (Sweden)

    Annapurna Mishra

    2012-05-01

    Full Text Available Face detection is the task of finding any face in a given image, while face recognition is treated as a two-dimensional problem applied to the detected faces. The information contained in a face, such as identity, gender, expression, age, race and pose, can be analysed automatically by this system. Normally face detection is done on a single image, but it can also be extended to video streams. As face images are normally upright, they can be described by a small set of 2-D characteristic views. Here the face images are projected into a feature space, or face space, to encode the variation between the known face images. The projected feature space, or face space, is defined by the 'eigenfaces', the eigenvectors of the face image set. This process can be used to recognize a new face in an unsupervised manner. This paper introduces an algorithm for effective face recognition. It takes into consideration not only face extraction but also the mathematical calculations that bring the image into a simple and technical form. It can also be implemented in real time using data acquisition hardware and a software interface with the face recognition system. Face recognition can be applied to various domains including security systems, personal identification, image and film processing, and human-computer interaction.
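
    A bare-bones eigenfaces sketch in NumPy matching the description above (synthetic data; the component count is arbitrary): flattened faces are mean-centred, projected onto the leading eigenvectors, and a probe is matched by nearest neighbour in face space.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    faces = rng.random((20, 32 * 32))            # 20 flattened training faces
    mean = faces.mean(axis=0)
    A = faces - mean
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    eigenfaces = Vt[:10]                         # keep the top 10 components
    weights = A @ eigenfaces.T                   # training faces in face space

    probe = rng.random(32 * 32)                  # a new face to recognise
    w = (probe - mean) @ eigenfaces.T
    best = int(np.argmin(np.linalg.norm(weights - w, axis=1)))
    print("closest training face:", best)
    ```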

  12. Multimodal emotion recognition as assessment for learning in a game-based communication skills training

    NARCIS (Netherlands)

    Nadolski, Rob; Bahreini, Kiavash; Westera, Wim

    2014-01-01

    This paper presentation describes how our FILTWAM software artifacts for face and voice emotion recognition will be used for assessing learners' progress and providing adequate feedback in an online game-based communication skills training. This constitutes an example of in-game assessment for mainl

  13. Multimodal Emotion Recognition for Assessment of Learning in a Game-Based Communication Skills Training

    NARCIS (Netherlands)

    Bahreini, Kiavash; Nadolski, Rob; Westera, Wim

    2015-01-01

    This paper describes how our FILTWAM software artifacts for face and voice emotion recognition will be used for assessing learners' progress and providing adequate feedback in an online game-based communication skills training. This constitutes an example of in-game assessment for mainly formative p

  15. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    Science.gov (United States)

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding.

  16. [Recognition of facial emotions and theory of mind in schizophrenia: could the theory of mind deficit be due to the non-recognition of facial emotions?].

    Science.gov (United States)

    Besche-Richard, C; Bourrin-Tisseron, A; Olivier, M; Cuervo-Lombard, C-V; Limosin, F

    2012-06-01

    Deficits in the recognition of facial emotions and the attribution of mental states are now well documented in schizophrenic patients. However, the link between these two complex cognitive functions is not clearly understood, especially in schizophrenia. In this study, we tested the link between the recognition of facial emotions and mentalization capacities, notably the attribution of beliefs, in healthy and schizophrenic participants. We hypothesized that the level of performance in recognition of facial emotions, compared to working memory and executive functioning, was the best predictor of the capacity to attribute a belief. Twenty clinically stabilized schizophrenic participants according to DSM-IV-TR (mean age: 35.9 years, S.D. 9.07; mean education level: 11.15 years, S.D. 2.58), receiving neuroleptic or antipsychotic medication, participated in the study. They were matched on age (mean age: 36.3 years, S.D. 10.9) and educational level (mean educational level: 12.10, S.D. 2.25) with 30 matched healthy participants. All participants were evaluated with a pool of tasks testing the recognition of facial emotions (the Baron-Cohen faces), the attribution of beliefs (two first-order and two second-order stories), working memory (the digit span of the WAIS-III and the Corsi test) and executive functioning (Trail Making Test A and B, Wisconsin Card Sorting Test brief version). Comparing schizophrenic and healthy participants, our results confirmed a difference between performances in the recognition of facial emotions and those in the attribution of beliefs. The result of the simple linear regression showed that the recognition of facial emotions, compared to the performances of working memory and executive functioning, was the best predictor of performance on the theory of mind stories. Our results confirmed, in a sample of schizophrenic patients, the deficits in the recognition of facial emotions and in the

  17. Face-body integration of intense emotional expressions of victory and defeat

    Science.gov (United States)

    Wang, Lili; Xia, Lisheng; Zhang, Dandan

    2017-01-01

    Human facial expressions can be recognized rapidly and effortlessly. However, for intense emotions from real life, positive and negative facial expressions are difficult to discriminate, and the judgment of facial expressions is biased towards simultaneously perceived body expressions. This study employed event-related potentials (ERPs) to investigate the neural dynamics involved in the integration of emotional signals from facial and body expressions of victory and defeat. Emotional expressions of professional players were used to create pictures of face-body compounds, with either matched or mismatched emotional expressions in faces and bodies. Behavioral results showed that congruent emotional information of face and body facilitated the recognition of facial expressions. ERP data revealed larger P1 amplitudes for incongruent compared to congruent stimuli. Also, a main effect of body valence on the P1 was observed, with enhanced amplitudes for stimuli with losing compared to winning bodies. The main effect of body expression was also observed in the N170 and N2, with winning bodies producing larger N170/N2 amplitudes. In the later stage, a significant interaction of congruence by body valence was found on the P3 component. Winning bodies elicited larger P3 amplitudes than losing bodies when face and body conveyed congruent emotional signals. Beyond the knowledge based on prototypical facial and body expressions, the results of this study help us understand the complexity of emotion evaluation and categorization outside the laboratory. PMID:28245245

  18. A Parallel Framework for Multilayer Perceptron for Human Face Recognition

    CERN Document Server

    Bhowmik, M K; Nasipuri, M; Basu, D K; Kundu, M

    2010-01-01

    Artificial neural networks have already shown their success in face recognition and similar complex pattern recognition tasks. However, a major disadvantage of the technique is that it is extremely slow during training for larger classes and hence not suitable for real-time complex problems such as pattern recognition. This is an attempt to develop a parallel framework for the training algorithm of a perceptron. In this paper, two general architectures for a Multilayer Perceptron (MLP) have been demonstrated. The first architecture is All-Class-in-One-Network (ACON) where all the classes are placed in a single network and the second one is One-Class-in-One-Network (OCON) where an individual single network is responsible for each and every class. Capabilities of these two architectures were compared and verified in solving human face recognition, which is a complex pattern recognition task where several factors affect the recognition performance like pose variations, facial expression changes, occlusions, and ...

  19. An exploratory study on emotion recognition in patients with a clinically isolated syndrome and multiple sclerosis.

    Science.gov (United States)

    Jehna, Margit; Neuper, Christa; Petrovic, Katja; Wallner-Blazek, Mirja; Schmidt, Reinhold; Fuchs, Siegrid; Fazekas, Franz; Enzinger, Christian

    2010-07-01

    Multiple sclerosis (MS) is a chronic multifocal CNS disorder which can affect higher-order cognitive processes. Whereas cognitive disturbances in MS are increasingly well characterised, emotional facial expression (EFE) has rarely been tested, despite its importance for adequate social behaviour. We tested 20 patients with a clinically isolated syndrome suggestive of MS (CIS) or MS and 23 healthy controls (HC) for the ability to differentiate between emotional facial stimuli, controlling for the influence of depressive mood (ADS-L). We screened for cognitive dysfunction using the Faces Symbol Test (FST). The patients demonstrated significantly decreased reaction times on the emotion recognition tests compared to HC. However, the results also suggested worse cognitive abilities in the patients. Emotional and cognitive test results were correlated. This exploratory pilot study suggests that emotion recognition deficits might be prevalent in MS. However, future studies will be needed to overcome the limitations of this study. Copyright 2010 Elsevier B.V. All rights reserved.

  20. Do people have insight into their face recognition abilities?

    Science.gov (United States)

    Palermo, Romina; Rossion, Bruno; Rhodes, Gillian; Laguesse, Renaud; Tez, Tolga; Hall, Bronwyn; Albonico, Andrea; Malaspina, Manuela; Daini, Roberta; Irons, Jessica; Al-Janabi, Shahd; Taylor, Libby C; Rivolta, Davide; McKone, Elinor

    2017-02-01

    Diagnosis of developmental or congenital prosopagnosia (CP) involves self-report of everyday face recognition difficulties, which are corroborated with poor performance on behavioural tests. This approach requires accurate self-evaluation. We examine the extent to which typical adults have insight into their face recognition abilities across four experiments involving nearly 300 participants. The experiments used five tests of face recognition ability: two that tap into the ability to learn and recognize previously unfamiliar faces [the Cambridge Face Memory Test, CFMT; Duchaine, B., & Nakayama, K. (2006). The Cambridge Face Memory Test: Results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia, 44(4), 576-585. doi:10.1016/j.neuropsychologia.2005.07.001; and a newly devised test based on the CFMT but where the study phases involve watching short movies rather than viewing static faces-the CFMT-Films] and three that tap face matching [Benton Facial Recognition Test, BFRT; Benton, A., Sivan, A., Hamsher, K., Varney, N., & Spreen, O. (1983). Contribution to neuropsychological assessment. New York: Oxford University Press; and two recently devised sequential face matching tests]. Self-reported ability was measured with the 15-item Kennerknecht et al. questionnaire [Kennerknecht, I., Ho, N. Y., & Wong, V. C. (2008). Prevalence of hereditary prosopagnosia (HPA) in Hong Kong Chinese population. American Journal of Medical Genetics Part A, 146A(22), 2863-2870. doi:10.1002/ajmg.a.32552]; two single-item questions assessing face recognition ability; and a new 77-item meta-cognition questionnaire. Overall, we find that adults with typical face recognition abilities have only modest insight into their ability to recognize faces on behavioural tests. In a fifth experiment, we assess self-reported face recognition ability in people with CP and find that some people who expect to

  1. Face recognition performance of individuals with Asperger syndrome on the Cambridge Face Memory Test.

    Science.gov (United States)

    Hedley, Darren; Brewer, Neil; Young, Robyn

    2011-12-01

    Although face recognition deficits in individuals with Autism Spectrum Disorder (ASD), including Asperger syndrome (AS), are widely acknowledged, the empirical evidence is mixed. This in part reflects the failure to use standardized and psychometrically sound tests. We contrasted standardized face recognition scores on the Cambridge Face Memory Test (CFMT) for 34 individuals with AS with those for 42, IQ-matched non-ASD individuals, and age-standardized scores from a large Australian cohort. We also examined the influence of IQ, autistic traits, and negative affect on face recognition performance. Overall, participants with AS performed significantly worse on the CFMT than the non-ASD participants and when evaluated against standardized test norms. However, while 24% of participants with AS presented with severe face recognition impairment (>2 SDs below the mean), many individuals performed at or above the typical level for their age: 53% scored within +/- 1 SD of the mean and 9% demonstrated superior performance (>1 SD above the mean). Regression analysis provided no evidence that IQ, autistic traits, or negative affect significantly influenced face recognition: diagnostic group membership was the only significant predictor of face recognition performance. In sum, face recognition performance in ASD is on a continuum, but with average levels significantly below non-ASD levels of performance.

  2. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    Science.gov (United States)

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  3. REAL TIME FACE RECOGNITION USING ADABOOST IMPROVED FAST PCA ALGORITHM

    Directory of Open Access Journals (Sweden)

    K. Susheel Kumar

    2011-08-01

    Full Text Available This paper presents an automated system for human face recognition against a real-time background, for a large homemade dataset of persons' faces. The task is very difficult, as real-time background subtraction in an image is still a challenge. In addition, there is huge variation in human face images in terms of size, pose and expression. The proposed system collapses most of this variance. To detect faces in real time, AdaBoost with a Haar cascade is used, and a simple fast PCA and LDA are used to recognize the detected faces. The matched face is then used to mark attendance in the laboratory, in our case. This biometric system is a real-time attendance system based on human face recognition, with simple and fast algorithms, achieving a high accuracy rate.
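
    The detection front end described here can be approximated with OpenCV's stock pretrained Haar cascade (an AdaBoost-trained detector); the cropped faces would then be passed to the PCA/LDA recogniser. The parameters below are typical defaults, not the paper's.

    ```python
    import cv2

    # OpenCV ships a pretrained frontal-face Haar cascade (AdaBoost detector)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def detect_faces(frame):
        """Return grayscale face crops found in one BGR video frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return [gray[y:y + h, x:x + w] for (x, y, w, h) in boxes]
    ```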

  4. ALTERED KINEMATICS OF FACIAL EMOTION EXPRESSION AND EMOTION RECOGNITION DEFICITS ARE UNRELATED IN PARKINSON'S DISEASE

    Directory of Open Access Journals (Sweden)

    Matteo Bologna

    2016-12-01

    Full Text Available Background: Altered emotional processing, including reduced facial emotion expression and defective emotion recognition, has been reported in patients with Parkinson's disease (PD). However, few studies have objectively investigated facial expression abnormalities in PD using neurophysiological techniques. It is not known whether altered facial expression and recognition in PD are related. Objective: To investigate possible deficits in facial emotion expression and emotion recognition, and their relationship, if any, in patients with PD. Methods: Eighteen patients with PD and 16 healthy controls were enrolled in the study. Facial expressions of emotion were recorded using a 3D optoelectronic system and analysed using the facial action coding system. Possible deficits in emotion recognition were assessed using the Ekman test. Participants were assessed in one experimental session. Possible relationships between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients were evaluated using Spearman's test and multiple regression analysis. Results: The facial expression of all six basic emotions had slower velocity and lower amplitude in patients in comparison to healthy controls (all Ps<0.05). Finally, no relationship emerged between the kinematic variables of facial emotion expression, the Ekman test scores, and clinical and demographic data in patients (all Ps>0.05). Conclusion: The present results provide further evidence of altered emotional processing in PD. The lack of any correlation between altered facial emotion expression kinematics and emotion recognition deficits in patients suggests that these abnormalities are mediated by separate pathophysiological mechanisms.

  5. Facial emotion recognition and alexithymia in adults with somatoform disorders.

    Science.gov (United States)

    Pedrosa Gil, Francisco; Ridout, Nathan; Kessler, Henrik; Neuffer, Michaela; Schoechlin, Claudia; Traue, Harald C; Nickel, Marius

    2009-01-01

    The primary aim of this study was to investigate facial emotion recognition in patients with somatoform disorders (SFD). Also of interest was the extent to which concurrent alexithymia contributed to any changes in emotion recognition accuracy. Twenty patients with SFD and twenty healthy controls, matched for age, sex and education, were assessed with the Facially Expressed Emotion Labelling Test of facial emotion recognition and the 26-item Toronto Alexithymia Scale (TAS-26). Patients with SFD exhibited elevated alexithymia symptoms relative to healthy controls. Patients with SFD also recognized significantly fewer emotional expressions than did the healthy controls. However, the group difference in emotion recognition accuracy became nonsignificant once the influence of alexithymia was controlled for statistically. This suggests that the deficit in facial emotion recognition observed in the patients with SFD was most likely a consequence of concurrent alexithymia. Impaired facial emotion recognition observed in the patients with SFD could plausibly have a negative influence on these individuals' social functioning. (c) 2008 Wiley-Liss, Inc.

  6. Ethical aspects of face recognition systems in public places.

    NARCIS (Netherlands)

    Brey, Philip A.E.

    2004-01-01

    This essay examines ethical aspects of the use of facial recognition technology for surveillance purposes in public and semipublic areas, focusing particularly on the balance between security and privacy and civil liberties. As a case study, the FaceIt facial recognition engine of Identix Corporation…

  7. A comparative study of baseline algorithms of face recognition

    NARCIS (Netherlands)

    Mehmood, Zahid; Ali, Tauseef; Khattak, Shahid; Khan, Samee U.

    2014-01-01

    In this paper we present a comparative study of two well-known face recognition algorithms. The contribution of this work is to reveal the robustness of each FR algorithm with respect to various factors, such as variation in pose and low resolution of the images used for recognition. This evaluation…

  8. Robust Multi biometric Recognition Using Face and Ear Images

    CERN Document Server

    Boodoo, Nazmeen Bibi

    2009-01-01

    This study investigates the use of ear as a biometric for authentication and shows experimental results obtained on a newly created dataset of 420 images. Images are passed to a quality module in order to reduce False Rejection Rate. The Principal Component Analysis (eigen ear) approach was used, obtaining 90.7 percent recognition rate. Improvement in recognition results is obtained when ear biometric is fused with face biometric. The fusion is done at decision level, achieving a recognition rate of 96 percent.
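
    The eigen-approach and the decision-level fusion can be sketched as follows; the synthetic data, PCA dimensionality and 1-NN matcher are illustrative assumptions rather than the study's exact configuration.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      X_face = rng.random((60, 4096))      # stand-in flattened face images
      X_ear = rng.random((60, 1024))       # stand-in flattened ear images
      y = np.repeat(np.arange(10), 6)      # 10 subjects, 6 samples each

      def train(X, labels):
          # Eigen-style pipeline: PCA projection followed by 1-nearest-neighbour.
          return make_pipeline(PCA(n_components=20),
                               KNeighborsClassifier(n_neighbors=1)).fit(X, labels)

      face_clf, ear_clf = train(X_face, y), train(X_ear, y)

      def accept(face_vec, ear_vec, claimed_id):
          # Decision-level fusion: accept the claim only if both modalities agree.
          return (face_clf.predict(face_vec.reshape(1, -1))[0] == claimed_id and
                  ear_clf.predict(ear_vec.reshape(1, -1))[0] == claimed_id)

      print(accept(X_face[0], X_ear[0], claimed_id=0))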

  9. Newborns' Face Recognition over Changes in Viewpoint

    Science.gov (United States)

    Turati, Chiara; Bulf, Hermann; Simion, Francesca

    2008-01-01

    The study investigated the origins of the ability to recognize faces despite rotations in depth. Four experiments are reported that tested, using the habituation technique, whether 1-to-3-day-old infants are able to recognize the invariant aspects of a face over changes in viewpoint. Newborns failed to recognize facial perceptual invariances…

  10. Novel averaging window filter for SIFT in infrared face recognition

    Institute of Scientific and Technical Information of China (English)

    Junfeng Bai; Yong Ma; Jing Li; Fan Fan; Hongyuan Wang

    2011-01-01

    The extraction of stable local features directly affects the performance of infrared face recognition algorithms. Recent studies on the application of scale invariant feature transform (SIFT) to infrared face recognition show that star-styled window filter (SWF) can filter out errors incorrectly introduced by SIFT. The current letter proposes an improved filter pattern called Y-styled window filter (YWF) to further eliminate the wrong matches. Compared with SWF, YWF patterns are sparser and do not maintain rotation invariance; thus, they are more suitable to infrared face recognition. Our experimental results demonstrate that a YWF-based averaging window outperforms an SWF-based one in reducing wrong matches, therefore improving the reliability of infrared face recognition systems.
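
    The matching stage might look roughly like the sketch below: SIFT keypoints filtered by Lowe's ratio test, followed by a simple vertical-displacement filter that stands in for the window-filter idea (the YWF pattern itself is not reproduced here); the images are synthetic so the snippet runs.

      import cv2
      import numpy as np

      rng = np.random.default_rng(1)
      img1 = (rng.random((128, 128)) * 255).astype(np.uint8)   # stand-in IR face crop
      img2 = cv2.GaussianBlur(img1, (3, 3), 0)                 # slightly perturbed copy

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      matcher = cv2.BFMatcher(cv2.NORM_L2)
      good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
              if m.distance < 0.75 * n.distance]               # Lowe's ratio test

      # Stand-in spatial filter (not the YWF itself): for roughly aligned faces a
      # correct match should not move far vertically, so large offsets are dropped.
      kept = [m for m in good
              if abs(kp1[m.queryIdx].pt[1] - kp2[m.trainIdx].pt[1]) < 5]
      print(len(good), "ratio-test matches,", len(kept), "after spatial filtering")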

  11. Word and face recognition deficits following posterior cerebral artery stroke

    DEFF Research Database (Denmark)

    Kuhn, Christina D.; Asperud Thomsen, Johanne; Delfi, Tzvetelina

    2016-01-01

    Recent findings have challenged the existence of category-specific brain areas for perceptual processing of words and faces, suggesting the existence of a common network supporting the recognition of both. We examined the performance of patients with focal lesions in posterior cortical areas to investigate whether deficits in recognition of words and faces systematically co-occur, as would be expected if both functions rely on a common cerebral network. Seven right-handed patients with unilateral brain damage following stroke in areas supplied by the posterior cerebral artery were included (four with right-hemisphere damage, three with left, tested at least 1 year post stroke). We examined word and face recognition using a delayed match-to-sample paradigm with four different categories of stimuli: cropped faces, full faces, words, and cars. Reading speed and word length effects…

  12. Association with emotional information alters subsequent processing of neutral faces.

    Science.gov (United States)

    Riggs, Lily; Fujioka, Takako; Chan, Jessica; McQuiggan, Douglas A; Anderson, Adam K; Ryan, Jennifer D

    2014-01-01

    The processing of emotional as compared to neutral information is associated with different patterns of eye movements and neural activity. However, the 'emotionality' of a stimulus can be conveyed not only by its physical properties, but also by the information that is presented with it. There is very limited work examining how emotional information may influence the immediate perceptual processing of otherwise neutral information. We examined how presenting an emotion label for a neutral face may influence subsequent processing by using eye movement monitoring (EMM) and magnetoencephalography (MEG) simultaneously. Participants viewed a series of faces with neutral expressions. Each face was followed by a unique negative or neutral sentence describing that person, and then the same face was presented in isolation again. Viewing of faces paired with a negative sentence was associated with increased early viewing of the eye region and increased neural activity between 600 and 1200 ms in emotion processing regions such as the cingulate, medial prefrontal cortex, and amygdala, as well as posterior regions such as the precuneus and occipital cortex. Viewing of faces paired with a neutral sentence was associated with increased activity in the parahippocampal gyrus during the same time window. By monitoring behavior and neural activity within the same paradigm, these findings demonstrate that emotional information alters subsequent visual scanning and the neural systems that are presumably invoked to maintain a representation of the neutral information along with its emotional details.

  13. The Complete Gabor-Fisher Classifier for Robust Face Recognition

    Directory of Open Access Journals (Sweden)

    Štruc Vitomir

    2010-01-01

    Full Text Available This paper develops a novel face recognition technique called the Complete Gabor Fisher Classifier (CGFC). Different from existing techniques that use Gabor filters for deriving the Gabor face representation, the proposed approach does not rely solely on Gabor magnitude information but effectively uses features computed from Gabor phase information as well. It represents one of the few successful attempts found in the literature to combine Gabor magnitude and phase information for robust face recognition. The novelty of the proposed CGFC technique comes from (1) the introduction of a Gabor phase-based face representation and (2) the combination of the recognition technique using the proposed representation with classical Gabor magnitude-based methods into a unified framework. The proposed face recognition framework is assessed in a series of face verification and identification experiments performed on the XM2VTS, Extended YaleB, FERET, and AR databases. The results of the assessment suggest that the proposed technique clearly outperforms state-of-the-art face recognition techniques from the literature and that its performance is almost unaffected by the presence of partial occlusions of the facial area, changes in facial expression, or severe illumination changes.
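
    To make the magnitude/phase distinction concrete, here is a minimal sketch of a Gabor quadrature pair in OpenCV; the kernel parameters and the synthetic image are illustrative assumptions, not the CGFC's settings.

      import cv2
      import numpy as np

      rng = np.random.default_rng(2)
      img = rng.random((64, 64)).astype(np.float32)    # stand-in face image

      def gabor_mag_phase(image, theta, lam=8.0, sigma=4.0):
          # Quadrature pair: even (psi=0) and odd (psi=pi/2) Gabor kernels.
          k_even = cv2.getGaborKernel((31, 31), sigma, theta, lam, 0.5, psi=0)
          k_odd = cv2.getGaborKernel((31, 31), sigma, theta, lam, 0.5, psi=np.pi / 2)
          re = cv2.filter2D(image, cv2.CV_32F, k_even)
          im = cv2.filter2D(image, cv2.CV_32F, k_odd)
          return np.hypot(re, im), np.arctan2(im, re)  # magnitude map, phase map

      responses = [gabor_mag_phase(img, t)
                   for t in np.linspace(0, np.pi, 8, endpoint=False)]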

  14. Error Rates in Users of Automatic Face Recognition Software.

    Science.gov (United States)

    White, David; Dunn, James D; Schmid, Alexandra C; Kemp, Richard I

    2015-01-01

    In recent years, wide deployment of automatic face recognition systems has been accompanied by substantial gains in algorithm performance. However, benchmarking tests designed to evaluate these systems do not account for the errors of human operators, who are often an integral part of face recognition solutions in forensic and security settings. This causes a mismatch between evaluation tests and operational accuracy. We address this by measuring user performance in a face recognition system used to screen passport applications for identity fraud. Experiment 1 measured target detection accuracy in algorithm-generated 'candidate lists' selected from a large database of passport images. Accuracy was notably poorer than in previous studies of unfamiliar face matching: participants made over 50% errors for adult target faces, and over 60% when matching images of children. Experiment 2 then compared the performance of student participants to trained passport officers, who use the system in their daily work, and found equivalent performance in these groups. Encouragingly, a group of highly trained and experienced "facial examiners" outperformed these groups by 20 percentage points. We conclude that human performance curtails the accuracy of face recognition systems, potentially reducing benchmark estimates by 50% in operational settings. Mere practice does not attenuate these limits, but the superior performance of trained examiners suggests that recruitment and selection of human operators, in combination with effective training and mentorship, can improve the operational accuracy of face recognition systems.

  15. Emotional Intelligence as Assessed by Situational Judgment and Emotion Recognition Tests: Building the Nomological Net

    Directory of Open Access Journals (Sweden)

    Carolyn MacCann

    2011-12-01

    Full Text Available Recent research on emotion recognition ability (ERA) suggests that the capacity to process emotional information may differ for disparate emotions. However, little research has examined whether this finding holds for emotional understanding and emotion management as well as emotion recognition. Moreover, little research has examined whether the abilities to recognize, understand, and manage emotions form a distinct emotional intelligence (EI) construct that is independent from traditional cognitive ability factors. The current study addressed these issues. Participants (N = 118) completed two ERA measures, two situational judgment tests assessing emotional understanding and emotion management, and three cognitive ability tests. Exploratory and confirmatory factor analyses of both the understanding and management item parcels showed that a three-factor model relating to fear, sadness, and anger content was a better fit than a one-factor model, supporting an emotion-specific view of EI. In addition, an EI factor composed of emotion recognition, emotional understanding, and emotion management was distinct from a cognitive ability factor composed of a matrices task, a general knowledge test, and a reading comprehension task. Results are discussed in terms of their potential implications for theory and practice, as well as the integration of EI research with known models of cognitive ability.

  16. Can we distinguish emotions from faces? Investigation of implicit and explicit processes of peak facial expressions

    Directory of Open Access Journals (Sweden)

    Yanmei Wang

    2016-08-01

    Full Text Available Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used transient peak-intensity expression images of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both the implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds, and eye movements were recorded. The results revealed that the isolated bodies and face-body congruent images were better recognized than isolated faces and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, the eye movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. The results of Experiment 2B showed that reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes…

  17. Developmental Changes in Face Recognition during Childhood: Evidence from Upright and Inverted Faces

    Science.gov (United States)

    de Heering, Adelaide; Rossion, Bruno; Maurer, Daphne

    2012-01-01

    Adults are experts at recognizing faces but there is controversy about how this ability develops with age. We assessed 6- to 12-year-olds and adults using a digitized version of the Benton Face Recognition Test, a sensitive tool for assessing face perception abilities. Children's response times for correct responses did not decrease between ages 6…

  18. Face Engagement during Infancy Predicts Later Face Recognition Ability in Younger Siblings of Children with Autism

    Science.gov (United States)

    de Klerk, Carina C. J. M.; Gliga, Teodora; Charman, Tony; Johnson, Mark H.

    2014-01-01

    Face recognition difficulties are frequently documented in children with autism spectrum disorders (ASD). It has been hypothesized that these difficulties result from a reduced interest in faces early in life, leading to decreased cortical specialization and atypical development of the neural circuitry for face processing. However, a recent study…

  19. Eye-tracking analysis of face observing and face recognition

    Directory of Open Access Journals (Sweden)

    Andrej Iskra

    2016-07-01

    Full Text Available Images are one of the key elements of the content of the World Wide Web, and one group of web images is photos of people. When various institutions (universities, research organizations, companies, associations, etc.) present their staff, they often include photos of people to make the presentation more informative. There are many specifics in how people view face images and how they remember them. Several methods can be used to investigate a person's behavior during the use of web content, and one of the most reliable among them is eye tracking. It is a very common technique, particularly when it comes to observing web images. Our research focused on the behavior of observing face images in the process of memorizing them. Test participants were presented with face images shown at different time scales. We focused on three main face elements: eyes, mouth and nose. The results of our analysis can help not only with web presentations, which are, in principle, not limited by observation time, but especially with public presentations (conferences, symposia, and meetings).

  20. Independent component analysis of edge information for face recognition

    CERN Document Server

    Karande, Kailash Jagannath

    2013-01-01

    The book presents research work on face recognition using edge information as features for face recognition with ICA algorithms. The independent components are extracted from edge information and used with classifiers to match facial images for recognition purposes. In their study, the authors explore Canny and LoG edge detectors as standard edge detection methods. An oriented Laplacian of Gaussian (OLOG) method is explored to extract edge information with different orientations of the Laplacian pyramid. A multiscale wavelet model for edge detection is also proposed…
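
    The general recipe, edge maps fed to ICA, might be sketched as follows with scikit-learn's FastICA and stand-in images; this is an assumption-laden illustration, not the book's implementation.

      import cv2
      import numpy as np
      from sklearn.decomposition import FastICA

      rng = np.random.default_rng(3)
      faces = (rng.random((50, 64, 64)) * 255).astype(np.uint8)   # stand-in images

      edges = np.stack([cv2.Canny(f, 100, 200) for f in faces])   # Canny edge maps
      X = edges.reshape(len(edges), -1).astype(np.float64)

      ica = FastICA(n_components=20, max_iter=500, random_state=0)
      codes = ica.fit_transform(X)    # independent components serve as features
      print(codes.shape)              # (50, 20)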

  1. DIFFERENCE FEATURE NEURAL NETWORK IN RECOGNITION OF HUMAN FACES

    Institute of Scientific and Technical Information of China (English)

    Chen Gang; Qi Feihu

    2001-01-01

    This article discusses the vision recognition process and finds that humans recognize objects not by their isolated features, but by the main difference features obtained by contrasting them. Based on the distinguishing character of difference features for vision recognition, the difference feature neural network (DFNN), an improved auto-associative neural network, is proposed. Using the ORL database, a comparative experiment for face recognition is performed with the original face images and with images corrupted by added Gaussian noise; the results show that DFNN outperforms the auto-associative neural network, demonstrating that DFNN is more efficient.

  2. Robust emotion recognition using spectral and prosodic features

    CERN Document Server

    Rao, K Sreenivasa

    2013-01-01

    In this brief, the authors discuss recently explored spectral (sub-segmental and pitch-synchronous) and prosodic (global and local features at word and syllable levels in different parts of the utterance) features for discerning emotions in a robust manner. The authors also delve into the complementary evidence obtained from the excitation source, the vocal tract system and prosodic features for the purpose of enhancing emotion recognition performance. Features based on speaking rate characteristics are explored with the help of multi-stage and hybrid models for further improving emotion recognition performance. The proposed spectral and prosodic features are evaluated on a real-life emotional speech corpus.
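
    For a flavour of such descriptors, the sketch below computes utterance-level pitch, energy and spectral-shape statistics with librosa over a synthetic stand-in signal; the brief's actual feature set (sub-segmental, pitch-synchronous, word- and syllable-level) is richer than this.

      import numpy as np
      import librosa

      sr = 16000
      t = np.linspace(0, 1.0, sr, endpoint=False)
      y = 0.5 * np.sin(2 * np.pi * 180 * t).astype(np.float32)  # stand-in utterance

      f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)        # pitch contour (prosodic)
      rms = librosa.feature.rms(y=y)[0]                    # energy contour (prosodic)
      mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # spectral shape

      # Collapse the contours into one fixed-length, utterance-level vector.
      feats = np.concatenate([[f0.mean(), f0.std(), rms.mean(), rms.std()],
                              mfcc.mean(axis=1)])
      print(feats.shape)   # (17,)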

  3. Emotion Recognition from Persian Speech with Neural Network

    Directory of Open Access Journals (Sweden)

    Mina Hamidi

    2012-10-01

    Full Text Available In this paper, we report an effort towards automatic recognition of emotional states from continuous Persian speech. Due to the unavailability of an appropriate database in the Persian language for emotion recognition, we first built a database of emotional speech in Persian. This database consists of 2400 wave clips modulated with anger, disgust, fear, sadness, happiness and normal emotions. Then we extract prosodic features, including features related to the pitch, intensity and global characteristics of the speech signal. Finally, we applied neural networks for automatic recognition of emotion. The resulting average accuracy was about 78%.
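
    The classification stage can be sketched with a small feed-forward network over prosodic feature vectors, as below; the data is synthetic and the network size is an assumption, so the printed accuracy is chance-level rather than the paper's 78%.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      rng = np.random.default_rng(4)
      X = rng.normal(size=(600, 17))            # stand-in prosodic feature vectors
      y = rng.integers(0, 6, size=600)          # six emotion classes

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
      net.fit(X_tr, y_tr)
      print("accuracy:", net.score(X_te, y_te))  # chance level on random data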

  4. Face Recognition System based on SURF and LDA Technique

    Directory of Open Access Journals (Sweden)

    Narpat A. Singh

    2016-02-01

    Full Text Available Improving the quality of face recognition systems has been a challenge over the past decade. It is a difficult problem, widely studied across different types of images, with the goal of providing the best quality of face recognition in real life. The problems arise from illumination and pose effects on gradient features. Improving and optimizing human face recognition and detection is an important real-life problem, addressed by optimizing the error rate, accuracy, peak signal-to-noise ratio, mean square error, and structural similarity index. Several methods have been proposed to optimize these parameters, since many variations occur in human faces due to illumination and pose changes. In this paper we propose a novel face recognition method that improves these quality parameters using speeded-up robust features (SURF) and linear discriminant analysis (LDA). SURF is used for feature matching, and LDA is used for dimensionality reduction of the edge features of live faces from our datasets. Comparative analysis shows that the proposed method yields better quality and better results on live face images than previous methods.
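
    The LDA stage that the paper pairs with SURF can be sketched as follows; random vectors stand in for SURF descriptors to keep the snippet self-contained, and the pipeline is an illustrative assumption rather than the proposed system.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.neighbors import NearestCentroid
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(5)
      X = rng.random((80, 256))             # stand-in SURF-like descriptors
      y = np.repeat(np.arange(10), 8)       # 10 subjects, 8 samples each

      # LDA projects to at most n_classes - 1 discriminative dimensions.
      model = make_pipeline(LinearDiscriminantAnalysis(n_components=9),
                            NearestCentroid())
      model.fit(X, y)
      print(model.predict(X[:1]))           # identity decision for one probe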

  5. The Development of Spatial Frequency Biases in Face Recognition

    Science.gov (United States)

    Leonard, Hayley C.; Karmiloff-Smith, Annette; Johnson, Mark H.

    2010-01-01

    Previous research has suggested that a mid-band of spatial frequencies is critical to face recognition in adults, but few studies have explored the development of this bias in children. We present a paradigm adapted from the adult literature to test spatial frequency biases throughout development. Faces were presented on a screen with particular…

  6. Supervised Filter Learning for Representation Based Face Recognition.

    Directory of Open Access Journals (Sweden)

    Chao Bi

    Full Text Available Representation-based classification methods, such as Sparse Representation Classification (SRC) and Linear Regression Classification (LRC), have been developed successfully for the face recognition problem. However, most of these methods use the original face images without any preprocessing for recognition. Thus, their performance may be affected by problematic factors (such as illumination and expression variance) in the face images. In order to overcome this limitation, a novel supervised filter learning algorithm is proposed for representation-based face recognition in this paper. The underlying idea of our algorithm is to learn a filter so that the within-class representation residuals of the faces' Local Binary Pattern (LBP) features are minimized and the between-class representation residuals of the faces' LBP features are maximized. Therefore, the LBP features of filtered face images are more discriminative for representation-based classifiers. Furthermore, we also extend our algorithm to the heterogeneous face recognition problem. Extensive experiments are carried out on five databases and the experimental results verify the efficacy of the proposed algorithm.
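
    The two ingredients named above, LBP features and representation residuals, can be sketched together as below (the supervised filter itself is omitted); the data, sizes and LRC-style classifier are illustrative assumptions.

      import numpy as np
      from skimage.feature import local_binary_pattern

      rng = np.random.default_rng(6)
      imgs = rng.random((40, 32, 32))                  # stand-in faces, 4 subjects
      labels = np.repeat(np.arange(4), 10)

      def lbp_hist(img):
          # Uniform LBP with 8 neighbours yields 10 pattern bins.
          lbp = local_binary_pattern(img, P=8, R=1, method="uniform")
          hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
          return hist

      X = np.array([lbp_hist(i) for i in imgs])

      def lrc_predict(x):
          # Assign x to the class whose feature subspace reconstructs it best
          # (smallest representation residual), as in LRC.
          residuals = []
          for c in range(4):
              A = X[labels == c].T                     # class-specific basis
              beta, *_ = np.linalg.lstsq(A, x, rcond=None)
              residuals.append(np.linalg.norm(x - A @ beta))
          return int(np.argmin(residuals))

      print(lrc_predict(X[0]))   # recovers class 0 for a training sample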

  7. Development of Face Recognition in Infant Chimpanzees (Pan Troglodytes)

    Science.gov (United States)

    Myowa-Yamakoshi, M.; Yamaguchi, M.K.; Tomonaga, M.; Tanaka, M.; Matsuzawa, T.

    2005-01-01

    In this paper, we assessed the developmental changes in face recognition by three infant chimpanzees aged 1-18 weeks, using preferential-looking procedures that measured the infants' eye- and head-tracking of moving stimuli. In Experiment 1, we prepared photographs of the mother of each infant and an "average" chimpanzee face using…

  8. The Change in Facial Emotion Recognition Ability in Inpatients with Treatment Resistant Schizophrenia After Electroconvulsive Therapy.

    Science.gov (United States)

    Dalkıran, Mihriban; Tasdemir, Akif; Salihoglu, Tamer; Emul, Murat; Duran, Alaattin; Ugur, Mufit; Yavuz, Ruhi

    2017-09-01

    People with schizophrenia have impairments in emotion recognition along with other social cognitive deficits. In the current study, we aimed to investigate the immediate benefits of ECT on facial emotion recognition ability. Thirty-two treatment-resistant patients with schizophrenia who had been indicated for ECT enrolled in the study. Facial emotion stimuli were a set of 56 photographs that depicted seven basic emotions: sadness, anger, happiness, disgust, surprise, fear, and neutral faces. The average age of the participants was 33.4 ± 10.5 years. The rate of recognizing the disgusted facial expression increased significantly after ECT (p < 0.05), whereas no significant changes were found for the rest of the facial expressions (p > 0.05). After ECT, response times to the fearful and happy facial expressions were significantly shorter (p < 0.05). Facial emotion recognition ability is an important social cognitive skill for social harmony, proper relationships and independent living. At the least, ECT sessions do not seem to affect facial emotion recognition ability negatively, and they seem to improve identification of the disgusted facial expression, which is related to dopamine-enriched regions in the brain.

  9. Robust Face Recognition via Block Sparse Bayesian Learning

    Directory of Open Access Journals (Sweden)

    Taiyong Li

    2013-01-01

    Full Text Available Face recognition (FR) is an important task in pattern recognition and computer vision. Sparse representation (SR) has been demonstrated to be a powerful framework for FR. In general, an SR algorithm treats each face in a training dataset as a basis function and tries to find a sparse representation of a test face under these basis functions. The sparse representation coefficients then provide a recognition hint. Early SR algorithms are based on a basic sparse model. Recently, it has been found that algorithms based on a block sparse model can achieve better recognition rates. Based on this model, in this study, we use block sparse Bayesian learning (BSBL) to find a sparse representation of a test face for recognition. BSBL is a recently proposed framework, which has many advantages over existing block-sparse-model-based algorithms. Experimental results on the Extended Yale B, the AR, and the CMU PIE face databases show that using BSBL can achieve better recognition rates and higher robustness than state-of-the-art algorithms in most cases.
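
    For orientation, here is a minimal sketch of the basic SRC scheme that BSBL refines, using an l1 (Lasso) solver on synthetic data; BSBL's block-sparse prior is not implemented here.

      import numpy as np
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(7)
      D = rng.random((256, 30))                  # columns: 30 training faces
      D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
      labels = np.repeat(np.arange(3), 10)       # 3 subjects, 10 faces each
      x = D[:, 5] + 0.01 * rng.normal(size=256)  # noisy probe from subject 0

      # Sparse coding of the probe over the training dictionary (l1 relaxation).
      coef = Lasso(alpha=0.001, max_iter=10000).fit(D, x).coef_

      # Classify by the smallest class-wise reconstruction residual.
      residuals = [np.linalg.norm(x - D[:, labels == c] @ coef[labels == c])
                   for c in range(3)]
      print("predicted subject:", int(np.argmin(residuals)))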

  10. The own-age face recognition bias is task dependent.

    Science.gov (United States)

    Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J

    2015-08-01

    The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity.

  11. Face Recognition Method Based on Fuzzy 2DPCA

    Directory of Open Access Journals (Sweden)

    Xiaodong Li

    2014-01-01

    Full Text Available 2DPCA, which is one of the most important face recognition methods, is relatively sensitive to substantial variations in light direction, face pose, and facial expression. In order to improve the recognition performance of traditional 2DPCA, a new 2DPCA algorithm based on fuzzy theory is proposed in this paper, namely fuzzy 2DPCA (F2DPCA). In this method, applying fuzzy K-nearest neighbor (FKNN), the membership degree matrix of the training samples is calculated and used to obtain the fuzzy mean of each class. The average of the fuzzy means is then incorporated into the definition of the general scatter matrix in the anticipation that it can improve classification results. Comprehensive experiments on the ORL, YALE, and FERET face databases show that the proposed method can improve classification rates and reduce sensitivity to variations between face images caused by changes in illumination, facial expression, and face pose.
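
    A sketch of standard 2DPCA, the starting point that F2DPCA extends with fuzzy memberships, is given below; the image size and the number of projection axes are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(8)
      A = rng.random((100, 28, 28))        # stand-in face images

      mean = A.mean(axis=0)
      # Image covariance matrix of 2DPCA, built from whole image matrices.
      G = sum((a - mean).T @ (a - mean) for a in A) / len(A)
      w, V = np.linalg.eigh(G)
      W = V[:, np.argsort(w)[::-1][:6]]    # top-6 projection axes

      Y = A @ W                            # each face becomes a 28x6 feature matrix
      print(Y.shape)                       # (100, 28, 6)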

  12. Fusion of visible and infrared imagery for face recognition

    Institute of Scientific and Technical Information of China (English)

    Xuerong Chen(陈雪荣); Zhongliang Jing(敬忠良); Shaoyuan Sun(孙韶媛); Gang Xiao(肖刚)

    2004-01-01

    In recent years face recognition has received substantial attention, but it still remains very challenging in real applications. Despite the variety of approaches and tools studied, face recognition is not accurate or robust enough to be used in uncontrolled environments. Infrared (IR) imagery of human faces offers a promising alternative to visible imagery; however, IR has its own limitations. In this paper, a scheme to fuse information from the two modalities is proposed. The scheme is based on eigenfaces and a probabilistic neural network (PNN), using a fuzzy integral to fuse the objective evidence supplied by each modality. Recognition rate is used to evaluate the fusion scheme. Experimental results show that the scheme improves recognition performance substantially.

  13. Multimodal recognition based on face and ear using local feature

    Science.gov (United States)

    Yang, Ruyin; Mu, Zhichun; Chen, Long; Fan, Tingyu

    2017-06-01

    The pose issue, which may cause loss of useful information, has always been a bottleneck in face and ear recognition. To address this problem, we propose a multimodal recognition approach based on face and ear using local features, which is robust to large facial pose variations in unconstrained scenes. A deep learning method is used for facial pose estimation, and a well-trained Faster R-CNN is used to detect and segment the regions of the face and ear. We then propose a weighted region-based recognition method to deal with the local features. The proposed method achieves state-of-the-art recognition performance, especially when the images are affected by pose variations and random occlusion in unconstrained scenes.

  14. Emotional states modulate the recognition potential during word processing.

    Science.gov (United States)

    Guo, Taomei; Chen, Min; Peng, Danling

    2012-01-01

    This study examined emotional modulation of word processing, showing that the recognition potential (RP), an ERP index of word recognition, could be modulated by different emotional states. In the experiment, participants were instructed to compete with pseudo-competitors, and via manipulation of the outcome of this competition, they were situated in neutral, highly positive, slightly positive, highly negative or slightly negative emotional states. They were subsequently asked to judge whether the referent of a word following a series of meaningless character segmentations was an animal or not. The emotional induction task and the word recognition task were alternated. Results showed that 1) compared with the neutral emotion condition, the peak latency of the RP under different emotional states was earlier and its mean amplitude was smaller, 2) there was no significant difference between RPs elicited under positive and negative emotional states in either the mean amplitude or latency, and 3) the RP was not affected by different degrees of positive emotional states. However, compared to slightly negative emotional states, the mean amplitude of the RP was smaller and its latency was shorter in highly negative emotional states over the left hemisphere but not over the right hemisphere. The results suggest that emotional states influence word processing.

  15. Featural processing in recognition of emotional facial expressions.

    Science.gov (United States)

    Beaudry, Olivia; Roy-Charland, Annie; Perron, Melanie; Cormier, Isabelle; Tapp, Roxane

    2014-04-01

    The present study aimed to clarify the role played by the eye/brow and mouth areas in the recognition of the six basic emotions. In Experiment 1, accuracy was examined while participants viewed partial and full facial expressions; in Experiment 2, participants viewed full facial expressions while their eye movements were recorded. Recognition rates were consistent with previous research: happiness was highest and fear was lowest. The mouth and eye/brow areas were not equally important for the recognition of all emotions. More precisely, while the mouth was revealed to be important in the recognition of happiness and the eye/brow area of sadness, results are not as consistent for the other emotions. In Experiment 2, consistent with previous studies, the eyes/brows were fixated for longer periods than the mouth for all emotions. Again, variations occurred as a function of the emotions, the mouth having an important role in happiness and the eyes/brows in sadness. The general pattern of results for the other four emotions was inconsistent between the experiments as well as across different measures. The complexity of the results suggests that the recognition process of emotional facial expressions cannot be reduced to a simple feature processing or holistic processing for all emotions.

  16. Localized versus Locality-Preserving Subspace Projections for Face Recognition

    Directory of Open Access Journals (Sweden)

    Iulian B. Ciocoiu

    2007-05-01

    Full Text Available Three different localized representation methods and a manifold learning approach to face recognition are compared in terms of recognition accuracy. The techniques under investigation are (a) local nonnegative matrix factorization (LNMF); (b) independent component analysis (ICA); (c) NMF with sparse constraints (NMFsc); (d) locality-preserving projections (Laplacian faces). A systematic comparative analysis is conducted in terms of distance metric used, number of selected features, and sources of variability on the AR and Olivetti face databases. Results indicate that the relative ranking of the methods is highly task-dependent, and the performances vary significantly upon the distance metric used.
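
    As a minimal sketch of the plain-NMF member of this family, assuming scikit-learn, the snippet below factorizes stand-in face vectors into nonnegative parts; the locality and sparseness constraints of LNMF and NMFsc are omitted.

      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(9)
      X = rng.random((50, 1024))                     # stand-in faces, one per row

      nmf = NMF(n_components=25, init="nndsvd", max_iter=500, random_state=0)
      H = nmf.fit_transform(X)                       # per-face encodings
      parts = nmf.components_.reshape(25, 32, 32)    # basis images ("parts")
      print(H.shape, parts.shape)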

  17. Visual Afterimages of Emotional Faces in High Functioning Autism

    Science.gov (United States)

    Rutherford, M. D.; Troubridge, Erin K.; Walsh, Jennifer

    2012-01-01

    Fixating an emotional facial expression can create afterimages, such that subsequent faces are seen as having the opposite expression of that fixated. Visual afterimages have been used to map the relationships among emotion categories, and this method was used here to compare ASD and matched control participants. Participants adapted to a facial…

  18. A Multi-Modal Recognition System Using Face and Speech

    Directory of Open Access Journals (Sweden)

    Samir Akrouf

    2011-05-01

    Full Text Available Nowadays person recognition has received more and more interest, especially for security reasons. Recognition performed by a biometric system using a single modality tends to be less reliable due to sensor data limitations, restricted degrees of freedom and unacceptable error rates. To alleviate some of these problems we use multimodal biometric systems, which provide better recognition results. By combining different modalities, such as speech, face, fingerprint, etc., we increase the performance of recognition systems. In this paper, we study the fusion of speech and face in a recognition system for taking a final decision (i.e., accept or reject an identity claim). We evaluate the performance of each system separately, then we fuse the results and compare the performances.

  19. Sparse representation based face recognition using weighted regions

    Science.gov (United States)

    Bilgazyev, Emil; Yeniaras, E.; Uyanik, I.; Unan, Mahmut; Leiss, E. L.

    2013-12-01

    Face recognition is a challenging research topic, especially when the training (gallery) and recognition (probe) images are acquired using different cameras under varying conditions. Even small noise or occlusion in the images can compromise the accuracy of recognition. Lately, sparse-encoding-based classification algorithms have given promising results for such uncontrollable scenarios. In this paper, we introduce a novel methodology that models sparse encoding with weighted patches to increase the robustness of face recognition even further. In the training phase, we define a mask (i.e., a weight matrix) using a sparse representation selecting the facial regions, and in the recognition phase, we perform the comparison on the selected facial regions. The algorithm was evaluated both quantitatively and qualitatively using two comprehensive surveillance facial image databases, i.e., SCface and MFPV, with results clearly superior to common state-of-the-art methodologies in different scenarios.