WorldWideScience

Sample records for rhythmic facial expressions

  1. Facial Expression Recognition

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial
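
    The two steps this record names can be sketched in a few lines. The sketch below is illustrative only: it assumes OpenCV's stock Haar-cascade detector and a crude resized-patch feature; the survey itself does not prescribe any particular detector or feature extractor.

    ```python
    # Illustrative sketch of the first two steps named above:
    # (1) locate faces in the image, (2) extract features from each face region.
    import cv2

    def detect_and_extract(image_path):
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        features = []
        for (x, y, w, h) in faces:
            roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
            features.append(roi.flatten() / 255.0)  # crude appearance feature
        return features  # one vector per detected face, ready for a classifier
    ```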

  2. Facial expression and sarcasm.

    Science.gov (United States)

    Rockwell, P

    2001-08-01

    This study examined facial expression in the presentation of sarcasm. 60 responses (sarcastic responses = 30, nonsarcastic responses = 30) from 40 different speakers were coded by two trained coders. Expressions in three facial areas--eyebrow, eyes, and mouth--were evaluated. Only movement in the mouth area significantly differentiated ratings of sarcasm from nonsarcasm.

  3. Assessing Pain by Facial Expression: Facial Expression as Nexus

    OpenAIRE

    Kenneth M Prkachin

    2009-01-01

    The experience of pain is often represented by changes in facial expression. Evidence of pain that is available from facial expression has been the subject of considerable scientific investigation. The present paper reviews the history of pain assessment via facial expression in the context of a model of pain expression as a nexus connecting internal experience with social influence. Evidence about the structure of facial expressions of pain across the lifespan is reviewed. Applications of fa...

  4. [Prosopagnosia and facial expression recognition].

    Science.gov (United States)

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  5. Facial Asymmetry and Emotional Expression

    OpenAIRE

    Pickin, Andrew

    2011-01-01

    This report is about facial asymmetry, its connection to emotional expression, and methods of measuring facial asymmetry in videos of faces. The research was motivated by two factors: firstly, there was a real opportunity to develop a novel measure of asymmetry that required minimal human involvement and that improved on earlier measures in the literature; and secondly, the study of the relationship between facial asymmetry and emotional expression is both interesting in its own right, and im...

  6. Interaction between facial expression and color

    OpenAIRE

    Kae Nakajima; Tetsuto Minami; Shigeki Nakauchi

    2017-01-01

    Facial color varies depending on emotional state, and emotions are often described in relation to facial color. In this study, we investigated whether the recognition of facial expressions was affected by facial color and vice versa. In the facial expression task, expression morph continua were employed: fear-anger and sadness-happiness. The morphed faces were presented in three different facial colors (bluish, neutral, and reddish color). Participants identified a facial expression between t...

  7. Spontaneous Facial Mimicry in Response to Dynamic Facial Expressions

    Science.gov (United States)

    Sato, Wataru; Yoshikawa, Sakiko

    2007-01-01

    Based on previous neuroscientific evidence indicating activation of the mirror neuron system in response to dynamic facial actions, we hypothesized that facial mimicry would occur while subjects viewed dynamic facial expressions. To test this hypothesis, dynamic/static facial expressions of anger/happiness were presented using computer-morphing…

  8. iFace: Facial Expression Training System

    OpenAIRE

    Ito, Kyoko; Kurose, Hiroyuki; Takami, Ai; Nishida, Shogo

    2008-01-01

    In this study, we proposed and developed a facial expression training system together with an interface for selecting target facial expressions. Twelve female dentists used the facial expression training system, and evaluations and opinions about the system were obtained from these participants. In the future, we will attempt to improve both the target facial expression selection interface and the comparison of a current and a target f...

  9. Measuring facial expression of emotion.

    Science.gov (United States)

    Wolf, Karsten

    2015-12-01

    Research into emotions has increased in recent decades, especially on the subject of recognition of emotions. However, studies of the facial expressions of emotion were compromised by technical problems with visible video analysis and electromyography in experimental settings. These have only recently been overcome. There have been new developments in the field of automated computerized facial recognition, allowing real-time identification of facial expression in social environments. This review addresses three approaches to measuring facial expression of emotion and describes their specific contributions to understanding emotion in the healthy population and in persons with mental illness. Despite recent progress, studies on human emotions have been hindered by the lack of consensus on an emotion theory suited to examining the dynamic aspects of emotion and its expression. Studying expression of emotion in patients with mental health conditions for diagnostic and therapeutic purposes will profit from theoretical and methodological progress.

  10. Compound facial expressions of emotion

    National Research Council Canada - National Science Library

    Shichuan Du; Yong Tao; Aleix M. Martinez

    2014-01-01

    Understanding the different categories of facial expressions of emotion regularly used by us is essential to gain insights into human cognition and affect as well as for the design of computational...

  11. Dance is the outward rhythmic expression of inner emotion ...

    African Journals Online (AJOL)

    Tracie1

    Nnamdi Azikiwe University, Awka, Nigeria. Abstract: Dance is the outward rhythmic expression of inner emotion that ... dynamics. Many factors influence the African dance, and top among them is the diversity in culture. ... most African countries, music is an essential part of the people's daily life. The two basic and very important ...

  12. Cortical control of facial expression.

    Science.gov (United States)

    Müri, René M

    2016-06-01

    The present Review deals with the motor control of facial expressions in humans. Facial expressions are a central part of human communication. Emotional face expressions have a crucial role in human nonverbal behavior, allowing a rapid transfer of information between individuals. Facial expressions can be either voluntarily or emotionally controlled. Recent studies in nonhuman primates and humans have revealed that the motor control of facial expressions has a distributed neural representation. At least five cortical regions on the medial and lateral aspects of each hemisphere are involved: the primary motor cortex, the ventral lateral premotor cortex, the supplementary motor area on the medial wall, and the rostral and caudal cingulate cortex. The results of studies in humans and nonhuman primates suggest that the innervation of the face is bilaterally controlled for the upper part and mainly contralaterally controlled for the lower part. Furthermore, the primary motor cortex, the ventral lateral premotor cortex, and the supplementary motor area are essential for the voluntary control of facial expressions. In contrast, the cingulate cortical areas are important for emotional expression, because they receive input from different structures of the limbic system. © 2015 Wiley Periodicals, Inc.

  13. Analysis of Facial Expression by Taste Stimulation

    Science.gov (United States)

    Tobitani, Kensuke; Kato, Kunihito; Yamamoto, Kazuhiko

    In this study, we focused on basic taste stimulation for the analysis of real facial expressions. We considered that the expressions caused by taste stimulation were unaffected by individuality or emotion, that is, such expressions were involuntary. We analyzed the movement of facial muscles in response to taste stimulation and compared real expressions with artificial expressions. From the results, we identified an obvious difference between real and artificial expressions. Thus, our method could be a new approach for facial expression recognition.

  14. Misrecognition of facial expressions in delinquents

    Directory of Open Access Journals (Sweden)

    Matsuura Naomi

    2009-09-01

    Background: Previous reports have suggested impairment in facial expression recognition in delinquents, but controversy remains with respect to how such recognition is impaired. To address this issue, we investigated facial expression recognition in delinquents in detail. Methods: We tested 24 male adolescent/young adult delinquents incarcerated in correctional facilities. We compared their performances with those of 24 age- and gender-matched control participants. Using standard photographs of facial expressions illustrating six basic emotions, participants matched each emotional facial expression with an appropriate verbal label. Results: Delinquents were less accurate in the recognition of facial expressions that conveyed disgust than were control participants. The delinquents misrecognized the facial expressions of disgust as anger more frequently than did controls. Conclusion: These results suggest that one of the underpinnings of delinquency might be impaired recognition of emotional facial expressions, with a specific bias toward interpreting disgusted expressions as hostile angry expressions.

  15. Rhythmic diel pattern of gene expression in juvenile maize leaf.

    Directory of Open Access Journals (Sweden)

    Maciej Jończyk

    BACKGROUND: Numerous biochemical and physiological parameters of living organisms follow a circadian rhythm. Although such rhythmic behavior is particularly pronounced in plants, which are strictly dependent on the daily photoperiod, data on the molecular aspects of the diurnal cycle in plants are scarce and mostly concern the model species Arabidopsis thaliana. Here we studied the leaf transcriptome in seedlings of maize, an important C4 crop only distantly related to A. thaliana, throughout a cycle of 10 h darkness and 14 h light to look for rhythmic patterns of gene expression. RESULTS: Using DNA microarrays comprising ca. 43,000 maize-specific probes we found that ca. 12% of all genes showed clear-cut diel rhythms of expression. Cluster analysis identified 35 groups containing from four to ca. 1,000 genes, each comprising genes of similar expression patterns. Perhaps unexpectedly, the most pronounced and most common (concerning the highest number of genes) expression maxima were observed towards and during the dark phase. Using Gene Ontology classification, several meaningful functional associations were found among genes showing similar diel expression patterns, including massive induction of expression of genes related to gene expression, translation, protein modification and folding at dusk and night. Additionally, we found a clear-cut tendency among genes belonging to individual clusters to share defined transcription factor-binding sequences. CONCLUSIONS: Co-expressed genes belonging to individual clusters are likely to be regulated by common mechanisms. The nocturnal phase of the diurnal cycle involves gross induction of fundamental biochemical processes and should be studied more thoroughly than was appreciated in most earlier physiological studies. Although some general mechanisms responsible for the diel regulation of gene expression might be shared among plants, details of the diurnal regulation of gene expression seem to differ
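
    As a rough illustration of the clustering step described in this abstract, the toy sketch below groups genes by the shape of their diel expression profile (per-gene z-scoring followed by k-means). The study's actual normalization, clustering method, and parameters may well differ; this is an assumption-laden sketch, not its pipeline.

    ```python
    # Toy sketch: cluster genes by the shape of their diel expression profile.
    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_diel_profiles(expr, n_clusters=35):
        # expr: (n_genes, n_timepoints) expression matrix sampled across the
        # 10 h dark / 14 h light cycle
        mu = expr.mean(axis=1, keepdims=True)
        sd = expr.std(axis=1, keepdims=True) + 1e-9   # avoid division by zero
        z = (expr - mu) / sd                          # compare shapes, not levels
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(z)
    ```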

  16. Recognizing Facial Expressions Automatically from Video

    Science.gov (United States)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are changes in the face that reflect a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22], who argued that all mammals show emotions reliably in their faces, was the first to describe in detail the specific facial expressions associated with emotions in animals and humans. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode for nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  17. Mapping and Manipulating Facial Expression

    Science.gov (United States)

    Theobald, Barry-John; Matthews, Iain; Mangini, Michael; Spies, Jeffrey R.; Brick, Timothy R.; Cohn, Jeffrey F.; Boker, Steven M.

    2009-01-01

    Non-verbal visual cues accompany speech to supplement the meaning of spoken words, signify emotional state, indicate position in discourse, and provide back-channel feedback. This visual information includes head movements, facial expressions and body gestures. In this paper we describe techniques for manipulating both verbal and non-verbal facial gestures in video sequences of people engaged in conversation. We are developing a system for use in psychological experiments, where the effects of manipulating individual components of non-verbal visual behaviour during live face-to-face conversation can be studied. In particular, the techniques we describe operate in real-time at video frame-rate and the manipulation can be applied so both participants in a conversation are kept blind to the experimental conditions. PMID:19624037

  18. Recognition of 3D facial expression dynamics

    NARCIS (Netherlands)

    Sandbach, G.; Zafeiriou, S.; Pantic, Maja; Rueckert, D.

    2012-01-01

    In this paper we propose a method that exploits 3D motion-based features between frames of 3D facial geometry sequences for dynamic facial expression recognition. An expressive sequence is modelled to contain an onset followed by an apex and an offset. Feature selection methods are applied in order

  19. Facial expression recognition as a creative interface

    NARCIS (Netherlands)

    Valenti, R.; Jaimes, A.; Sebe, N.; Bradshaw, J.; Lieberman, H.; Staab, S.

    2008-01-01

    We present an audiovisual creativity tool that automatically recognizes facial expressions in real time, producing sounds in combination with images. The facial expression recognition component detects and tracks a face and outputs a feature vector of motions of specific locations in the face. The

  20. Mutual information-based facial expression recognition

    Science.gov (United States)

    Hazar, Mliki; Hammami, Mohamed; Hanêne, Ben-Abdallah

    2013-12-01

    This paper introduces a novel low-computation discriminative-regions representation for the expression analysis task. The proposed approach relies on studies in psychology which show that most of the descriptive regions responsible for facial expression are located around certain face parts. The contribution of this work lies in a new approach that supports automatic facial expression recognition based on automatic region selection. The region selection step aims to select the descriptive regions responsible for facial expression and was performed using the Mutual Information (MI) technique. For facial feature extraction, we applied Local Binary Patterns (LBP) to the gradient image to encode salient micro-patterns of facial expressions. Experimental studies have shown that using discriminative regions provides better results than using the whole face region while reducing the feature vector dimension.
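
    A minimal sketch of the two ingredients named in this abstract follows: LBP histograms computed on the gradient image over a grid of face regions, and mutual information used to rank regions. The grid size, uniform-LBP settings, and the scikit-image/scikit-learn toolchain are assumptions; the paper's exact configuration may differ.

    ```python
    # Sketch: LBP-on-gradient features per face region, then MI-based ranking.
    import numpy as np
    from skimage.feature import local_binary_pattern
    from sklearn.feature_selection import mutual_info_classif

    def lbp_gradient_histograms(gray, grid=(4, 4), P=8, R=1):
        gy, gx = np.gradient(gray.astype(float))
        grad = np.hypot(gx, gy)                       # gradient-magnitude image
        lbp = local_binary_pattern(grad, P, R, method="uniform")
        h, w = lbp.shape
        feats = []
        for i in range(grid[0]):
            for j in range(grid[1]):
                block = lbp[i * h // grid[0]:(i + 1) * h // grid[0],
                            j * w // grid[1]:(j + 1) * w // grid[1]]
                hist, _ = np.histogram(block, bins=P + 2, range=(0, P + 2),
                                       density=True)
                feats.append(hist)                    # one histogram per region
        return np.array(feats)

    def rank_regions(X, y):
        # X: (n_samples, n_regions, hist_len); y: expression labels
        scores = [mutual_info_classif(X[:, r, :], y).mean()
                  for r in range(X.shape[1])]
        return np.argsort(scores)[::-1]               # most informative first
    ```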

  1. Human Facial Expressions as Adaptations: Evolutionary Questions in Facial Expression Research

    Science.gov (United States)

    SCHMIDT, KAREN L.; COHN, JEFFREY F.

    2007-01-01

    The importance of the face in social interaction and social intelligence is widely recognized in anthropology. Yet the adaptive functions of human facial expression remain largely unknown. An evolutionary model of human facial expression as behavioral adaptation can be constructed, given the current knowledge of the phenotypic variation, ecological contexts, and fitness consequences of facial behavior. Studies of facial expression are available, but results are not typically framed in an evolutionary perspective. This review identifies the relevant physical phenomena of facial expression and integrates the study of this behavior with the anthropological study of communication and sociality in general. Anthropological issues with relevance to the evolutionary study of facial expression include: facial expressions as coordinated, stereotyped behavioral phenotypes, the unique contexts and functions of different facial expressions, the relationship of facial expression to speech, the value of facial expressions as signals, and the relationship of facial expression to social intelligence in humans and in nonhuman primates. Human smiling is used as an example of adaptation, and testable hypotheses concerning the human smile, as well as other expressions, are proposed. PMID:11786989

  2. Facial expressions recognition with an emotion expressive robotic head

    Science.gov (United States)

    Doroftei, I.; Adascalitei, F.; Lefeber, D.; Vanderborght, B.; Doroftei, I. A.

    2016-08-01

    The purpose of this study is to present the preliminary steps in facial expression recognition with a new version of an expressive social robotic head. In the first phase, our main goal was to reach a minimum level of emotional expressiveness, in order to achieve nonverbal communication between the robot and humans, by building six basic facial expressions. To evaluate the facial expressions, the robot was used in preliminary user studies among children and adults.

  3. Do Facial Expressions Develop before Birth?

    Science.gov (United States)

    Reissland, Nadja; Francis, Brian; Mason, James; Lincoln, Karen

    2011-01-01

    Background: Fetal facial development is essential not only for postnatal bonding between parents and child, but also theoretically for the study of the origins of affect. However, how such movements become coordinated is poorly understood. 4-D ultrasound visualisation allows an objective coding of fetal facial movements. Methodology/Findings: Based on research using facial muscle movements to code recognisable facial expressions in adults and adapted for infants, we defined two distinct fetal facial movements, namely “cry-face-gestalt” and “laughter-gestalt,” both made up of up to 7 distinct facial movements. In this conceptual study, two healthy fetuses were then scanned at different gestational ages in the second and third trimester. We observed that the number and complexity of simultaneous movements increased with gestational age. Thus, between 24 and 35 weeks the mean number of co-occurrences of 3 or more facial movements increased from 7% to 69%. Recognisable facial expressions were also observed to develop. Between 24 and 35 weeks the number of co-occurrences of 3 or more movements making up a “cry-face gestalt” facial movement increased from 0% to 42%. Similarly the number of co-occurrences of 3 or more facial movements combining to a “laughter-face gestalt” increased from 0% to 35%. These changes over age were all highly significant. Significance: This research provides the first evidence of developmental progression from individual unrelated facial movements toward fetal facial gestalts. We propose that there is considerable potential of this method for assessing fetal development: Subsequent discrimination of normal and abnormal fetal facial development might identify health problems in utero. PMID:21904607

  4. Hereditary family signature of facial expression.

    Science.gov (United States)

    Peleg, Gili; Katzir, Gadi; Peleg, Ofer; Kamara, Michal; Brodsky, Leonid; Hel-Or, Hagit; Keren, Daniel; Nevo, Eviatar

    2006-10-24

    Although facial expressions of emotion are universal, individual differences create a facial expression "signature" for each person; but, is there a unique family facial expression signature? Only a few family studies on the heredity of facial expressions have been performed, none of which compared the gestalt of movements in various emotional states; they compared only a few movements in one or two emotional states. No studies, to our knowledge, have compared movements of congenitally blind subjects with those of their relatives. Using two types of analyses, we show a correlation between movements of congenitally blind subjects with those of their relatives in think-concentrate, sadness, anger, disgust, joy, and surprise and provide evidence for a unique family facial expression signature. In the analysis "in-out family test," a particular movement was compared each time across subjects. Results show that the frequency of occurrence of a movement of a congenitally blind subject in his family is significantly higher than that outside of his family in think-concentrate, sadness, and anger. In the analysis "the classification test," in which congenitally blind subjects were classified to their families according to the gestalt of movements, results show 80% correct classification over the entire interview and 75% in anger. Analysis of the movements' frequencies in anger revealed a correlation between the movements' frequencies of congenitally blind individuals and those of their relatives. This study anticipates discovering genes that influence facial expressions, understanding their evolutionary significance, and elucidating repair mechanisms for syndromes lacking facial expression, such as autism.

  5. Physical Aggression and Facial Expression Identification

    Directory of Open Access Journals (Sweden)

    Alisdair James Gordon Taylor

    2014-11-01

    Social information processing theories suggest that aggressive individuals may exhibit hostile perceptual biases when interpreting others' behaviour. This hypothesis was tested in the present study, which investigated the effects of physical aggression on facial expression identification in a sample of healthy participants. Participants were asked to judge the expressions of faces presented to them and to complete a self-report measure of aggression. Relative to low physically aggressive participants, high physically aggressive participants were more likely to mistake non-angry facial expressions for angry facial expressions (misattribution errors), supporting the idea of a hostile predisposition. These differences were not explained by gender or response times. There were no differences between aggression groups in identifying angry expressions in general (misperceived errors). These findings add support to the idea that aggressive individuals exhibit hostile perceptual biases when interpreting facial expressions.

  6. Facial Expression at Retrieval Affects Recognition of Facial Identity

    Directory of Open Access Journals (Sweden)

    Wenfeng Chen

    2015-06-01

    It is well known that memory can be modulated by emotional stimuli at the time of encoding and consolidation. For example, happy faces create better identity recognition than faces with certain other expressions. However, the influence of facial expression at the time of retrieval remains unknown in the literature. To separate the potential influence of expression at retrieval from its effects at earlier stages, we had participants learn neutral faces but manipulated facial expression at the time of memory retrieval in a standard old/new recognition task. The results showed a clear effect of facial expression, where happy test faces were identified more successfully than angry test faces. This effect is unlikely due to greater image similarity between the neutral learning face and the happy test face, because image analysis showed that the happy test faces are in fact less similar to the neutral learning faces relative to the angry test faces. In the second experiment, we investigated whether this emotional effect is influenced by the expression at the time of learning. We employed angry or happy faces as learning stimuli, and angry, happy, and neutral faces as test stimuli. The results showed that the emotional effect at retrieval is robust across different encoding conditions with happy or angry expressions. These findings indicate that emotional expressions affect the retrieval process in identity recognition, and identity recognition does not rely on emotional association between learning and test faces.

  7. Wavelet based approach for facial expression recognition

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin

    2015-03-01

    Facial expression recognition is one of the most active fields of research. Many facial expression recognition methods have been developed and implemented. Neural networks (NNs) have the capability to undertake such pattern recognition tasks. The key factor in the use of NNs lies in their characteristics: they are capable of learning and generalization, non-linear mapping, and parallel computation. Backpropagation neural networks (BPNNs) are the most commonly used approach. In this study, BPNNs were used as classifiers to categorize facial expression images into seven classes of expression: anger, disgust, fear, happiness, sadness, neutral, and surprise. For the feature extraction task, three discrete wavelet transforms were used to decompose images, namely the Haar wavelet, the Daubechies (4) wavelet, and the Coiflet (1) wavelet. To analyze the proposed method, a facial expression recognition system was built. The proposed method was tested on static images from the JAFFE database.
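
    The pipeline above maps naturally onto a short script: a 2-D discrete wavelet decomposition for features and a backpropagation network for the seven-way classification. Wavelet names follow PyWavelets ('haar', 'db4', 'coif1'); the decomposition level and network size below are illustrative assumptions, and scikit-learn's MLP stands in for the paper's BPNN.

    ```python
    # Sketch: wavelet features + backpropagation classifier (seven expressions).
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPClassifier

    def wavelet_features(gray, wavelet="db4", level=2):
        coeffs = pywt.wavedec2(gray, wavelet, level=level)
        return coeffs[0].flatten()    # low-frequency approximation subband

    def train_bpnn(images, labels, wavelet="db4"):
        # images: 2-D grayscale face arrays of a common size (e.g., JAFFE);
        # labels: one of the seven expression classes
        X = np.array([wavelet_features(im, wavelet) for im in images])
        clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000)
        return clf.fit(X, labels)
    ```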

  8. What do facial expressions of emotion express in young children? The relationship between facial display and EMG measures

    OpenAIRE

    Michela Balconi; Giovanni Lecci; Verdiana Trapletti

    2014-01-01

    The present paper explored the relationship between emotional facial response and electromyographic modulation in children when they observe facial expressions of emotions. Facial responsiveness (evaluated by arousal and valence ratings) and psychophysiological correlates (facial electromyography, EMG) were analyzed while children looked at six facial expressions of emotions (happiness, anger, fear, sadness, surprise and disgust). Regarding the EMG measure, corrugator and zygomatic muscle activity was ...

  9. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    Science.gov (United States)

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions

  10. The MPI facial expression database--a validated database of emotional and conversational facial expressions.

    Directory of Open Access Journals (Sweden)

    Kathrin Kaulard

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural

  11. Sad Facial Expressions Increase Choice Blindness

    Directory of Open Access Journals (Sweden)

    Yajie Wang

    2018-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness—individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower for sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported fewer facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate for sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  12. Facial displays, emotional expressions and conversational acts

    NARCIS (Netherlands)

    Heylen, Dirk K.J.; Trappl, R.

    2006-01-01

    “Emotional expression is multifaceted – expression is determined both by a person’s reaction to an event and by the attempt to manipulate this expression for strategic reasons in social interaction.” (Scherer, 2001). In this paper we present some thoughts on the relation between emotion, facial

  13. A study on facial expressions recognition

    Science.gov (United States)

    Xu, Jingjing

    2017-09-01

    In terms of communication, postures and facial expressions of feelings such as happiness, anger and sadness play important roles in conveying information. With the development of technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization and pose estimation have recently been put forward. However, many challenges and problems remain to be solved. In this paper, several techniques for handling facial expression recognition and pose are summarized and analyzed, including a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning of the input domain for classification, and robust statistics face formalization.

  14. Facial mimicry is not necessary to recognize emotion: Facial expression recognition by people with Moebius syndrome.

    Science.gov (United States)

    Rives Bogart, Kathleen; Matsumoto, David

    2010-01-01

    According to the reverse simulation model of embodied simulation theory, we recognize others' emotions by subtly mimicking their expressions, which allows us to feel the corresponding emotion through facial feedback. Previous studies examining whether facial mimicry is necessary for facial expression recognition were limited by potentially distracting manipulations intended to artificially restrict facial mimicry or very small samples of people with facial paralysis. We addressed these limitations by collecting the largest sample to date of people with Moebius syndrome, a condition characterized by congenital bilateral facial paralysis. In this Internet-based study, 37 adults with Moebius syndrome and 37 matched control participants completed a facial expression recognition task. People with Moebius syndrome did not differ from the control group or normative data in emotion recognition accuracy, and accuracy was not related to extent of ability to produce facial expressions. Our results do not support the hypothesis that reverse simulation with facial mimicry is necessary for facial expression recognition.

  15. Biased Facial Expression Interpretation in Shy Children

    Science.gov (United States)

    Kokin, Jessica; Younger, Alastair; Gosselin, Pierre; Vaillancourt, Tracy

    2016-01-01

    The relationship between shyness and the interpretations of the facial expressions of others was examined in a sample of 123 children aged 12 to 14 years. Participants viewed faces displaying happiness, fear, anger, disgust, sadness, surprise, as well as a neutral expression, presented on a computer screen. The children identified each expression…

  16. A comparison of facial expression properties in five hylobatid species

    NARCIS (Netherlands)

    Scheider, Linda; Liebal, Katja; Oña, Leonardo; Burrows, Anne; Waller, Bridget

    2014-01-01

    Little is known about facial communication of lesser apes (family Hylobatidae) and how their facial expressions (and use of) relate to social organization. We investigated facial expressions (defined as combinations of facial movements) in social interactions of mated pairs in five different

  17. Stereoscopy Amplifies Emotions Elicited by Facial Expressions

    Directory of Open Access Journals (Sweden)

    Jussi Hakala

    2015-11-01

    Mediated facial expressions do not elicit emotions as strongly as real-life facial expressions, possibly due to the low fidelity of pictorial presentations in typical mediation technologies. In the present study, we investigated the extent to which stereoscopy amplifies emotions elicited by images of neutral, angry, and happy facial expressions. The emotional self-reports of positive and negative valence (which were evaluated separately) and arousal of 40 participants were recorded. The magnitude of perceived depth in the stereoscopic images was manipulated by varying the camera base at 15, 40, 65, 90, and 115 mm. The analyses controlled for participants’ gender, gender match, emotional empathy, and trait alexithymia. The results indicated that stereoscopy significantly amplified the negative valence and arousal elicited by angry expressions at the most natural (65 mm) camera base, whereas stereoscopy amplified the positive valence elicited by happy expressions in both the narrowed and most natural (15–65 mm) base conditions. Overall, the results indicate that stereoscopy amplifies the emotions elicited by mediated emotional facial expressions when the depth geometry is close to natural. The findings highlight the sensitivity of the visual system to depth and its effect on emotions.

  18. Emotional Empathy and Facial Mimicry for Static and Dynamic Facial Expressions of Fear and Disgust

    Directory of Open Access Journals (Sweden)

    Krystyna Rymarczyk

    2016-11-01

    Facial mimicry is the tendency to imitate the emotional facial expressions of others. Increasing evidence suggests that the perception of dynamic displays leads to enhanced facial mimicry, especially for happiness and anger. However, little is known about the impact of dynamic stimuli on facial mimicry for fear and disgust. To investigate this issue, facial EMG responses were recorded in the corrugator supercilii, levator labii and lateral frontalis muscles, while participants viewed static (photos) and dynamic (videos) facial emotional expressions. Moreover, we tested whether emotional empathy modulated facial mimicry for emotional facial expressions. In accordance with our predictions, the highly empathic group responded with larger activity in the corrugator supercilii and levator labii muscles. Moreover, dynamic compared to static facial expressions of fear revealed enhanced mimicry in the high-empathic group in the frontalis and corrugator supercilii muscles. In the low-empathic group the facial reactions were not differentiated between fear and disgust for both dynamic and static facial expressions. We conclude that highly empathic subjects are more sensitive in their facial reactions to the facial expressions of fear and disgust compared to their low-empathic counterparts. Our data confirm that personal characteristics, i.e. empathy traits, as well as the modality of the presented stimuli, modulate the strength of facial mimicry. In addition, measures of EMG activity of the levator labii and frontalis muscles may be a useful index of empathic responses of fear and disgust.

  19. A Robot with Complex Facial Expressions

    Directory of Open Access Journals (Sweden)

    J. Takeno

    2009-08-01

    The authors believe that the consciousness of humans basically originates from languages and their association-like flow of consciousness, and that feelings are generated accompanying respective languages. We incorporated artificial consciousness into a robot; achieved an association flow of language like flow of consciousness; and developed a robot called Kansei that expresses its feelings according to the associations occurring in the robot. To be able to fully communicate with humans, robots must be able to display complex expressions, such as a sense of being thrilled. We therefore added to the Kansei robot a device to express complex feelings through its facial expressions. The Kansei robot is actually an artificial skull made of aluminum, with servomotors built into it. The face is made of relatively soft polyethylene, which is formed to appear like a human face. Facial expressions are generated using 19 servomotors built into the skull, which pull metal wires attached to the facial “skin” to create expressions. The robot at present is capable of making six basic expressions as well as complex expressions, such as happiness and fear combined.

  20. Stability of Facial Affective Expressions in Schizophrenia

    Directory of Open Access Journals (Sweden)

    H. Fatouros-Bergman

    2012-01-01

    Thirty-two videorecorded interviews were conducted by two interviewers with eight patients diagnosed with schizophrenia. Each patient was interviewed four times: three weekly interviews by the first interviewer and one additional interview by the second interviewer. 64 selected sequences where the patients were speaking about psychotic experiences were scored for facial affective behaviour with the Emotion Facial Action Coding System (EMFACS). In accordance with previous research, the results show that patients diagnosed with schizophrenia express negative facial affectivity. Facial affective behaviour seems not to be dependent on temporality, since within-subjects ANOVA revealed no substantial changes in the amount of affects displayed across the weekly interview occasions. Whereas previous studies found contempt to be the most frequent affect in patients, in the present material disgust was as common, but depended on the interviewer. The results suggest that facial affectivity in these patients is primarily dominated by the negative emotions of disgust and, to a lesser extent, contempt, and imply that this seems to be a fairly stable feature.

  1. Colour Perception on Facial Expression towards Emotion

    OpenAIRE

    Kim Mey Chew; Rubita Sudirman; Ching Yee Yong

    2012-01-01

    This study investigates human perceptions of pairings between facial expressions of emotion and colours. A group of 27 subjects, mainly young Malaysians, participated in this study. For each of the seven faces, which express the basic emotions neutral, happiness, surprise, anger, disgust, fear and sadness, a single colour was chosen from eight basic colours as the best visual match to the face. The different emotions appear well characterized b...

  2. Categorical Perception of Affective and Linguistic Facial Expressions

    Science.gov (United States)

    McCullough, Stephen; Emmorey, Karen

    2009-01-01

    Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX…

  3. Violent Media Consumption and the Recognition of Dynamic Facial Expressions

    Science.gov (United States)

    Kirsh, Steven J.; Mounts, Jeffrey R. W.; Olczak, Paul V.

    2006-01-01

    This study assessed the speed of recognition of facial emotional expressions (happy and angry) as a function of violent media consumption. Color photos of calm facial expressions morphed to either an angry or a happy facial expression. Participants were asked to make a speeded identification of the emotion (happiness or anger) during the morph.…

  4. Facial expression (mood) recognition from facial images using committee neural networks

    OpenAIRE

    Kulkarni, Saket S; Reddy, Narender P; Hariharan, SI

    2009-01-01

    Background: Facial expressions are important in facilitating human communication and interactions. Also, they are used as an important tool in behavioural studies and in medical rehabilitation. Facial image based mood detection techniques may provide a fast and practical approach for non-invasive mood detection. The purpose of the present study was to develop an intelligent system for facial image based expression classification using committee neural networks. Methods: Several facial ...
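
    The committee idea named in this abstract (several independently trained networks whose outputs are combined by vote) can be sketched briefly. The bootstrap resampling, network sizes, and majority-vote rule below are illustrative assumptions, not the study's actual configuration; integer class labels are assumed.

    ```python
    # Toy sketch of a committee of neural networks with majority voting.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_committee(X, y, n_members=5, seed=0):
        rng = np.random.default_rng(seed)
        committee = []
        for _ in range(n_members):
            idx = rng.choice(len(X), size=len(X), replace=True)  # bootstrap
            net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000)
            committee.append(net.fit(X[idx], y[idx]))
        return committee

    def committee_predict(committee, X):
        votes = np.stack([net.predict(X) for net in committee])
        # majority vote across committee members, per sample
        return np.array([np.bincount(col.astype(int)).argmax()
                         for col in votes.T])
    ```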

  5. Time perception and dynamics of facial expressions of emotions.

    Directory of Open Access Journals (Sweden)

    Sophie L Fayolle

    Two experiments were run to examine the effects of dynamic displays of facial expressions of emotions on time judgments. The participants were given a temporal bisection task with emotional facial expressions presented in a dynamic or a static display. Two emotional facial expressions and a neutral expression were tested and compared. Each of the emotional expressions had the same affective valence (unpleasant), but one was high-arousing (expressing anger) and the other low-arousing (expressing sadness). Our results showed that time judgments are highly sensitive to movements in facial expressions and the emotions expressed. Indeed, longer perceived durations were found in response to the dynamic faces and the high-arousing emotional expressions compared to the static faces and low-arousing expressions. In addition, the facial movements amplified the effect of emotions on time perception. Dynamic facial expressions are thus interesting tools for examining variations in temporal judgments in different social contexts.

  6. Time perception and dynamics of facial expressions of emotions.

    Science.gov (United States)

    Fayolle, Sophie L; Droit-Volet, Sylvie

    2014-01-01

    Two experiments were run to examine the effects of dynamic displays of facial expressions of emotions on time judgments. The participants were given a temporal bisection task with emotional facial expressions presented in a dynamic or a static display. Two emotional facial expressions and a neutral expression were tested and compared. Each of the emotional expressions had the same affective valence (unpleasant), but one was high-arousing (expressing anger) and the other low-arousing (expressing sadness). Our results showed that time judgments are highly sensitive to movements in facial expressions and the emotions expressed. Indeed, longer perceived durations were found in response to the dynamic faces and the high-arousing emotional expressions compared to the static faces and low-arousing expressions. In addition, the facial movements amplified the effect of emotions on time perception. Dynamic facial expressions are thus interesting tools for examining variations in temporal judgments in different social contexts.

  7. Judgments of subtle facial expressions of emotion.

    Science.gov (United States)

    Matsumoto, David; Hwang, Hyisung C

    2014-04-01

    Most studies on judgments of facial expressions of emotion have primarily utilized prototypical, high-intensity expressions. This paper examines judgments of subtle facial expressions of emotion, including not only low-intensity versions of full-face prototypes but also variants of those prototypes. A dynamic paradigm was used in which observers were shown a neutral expression followed by the target expression to judge, and then the neutral expression again, allowing for a simulation of the emergence of the expression from and then return to a baseline. We also examined how signal and intensity clarities of the expressions (explained more fully in the Introduction) were associated with judgment agreement levels. Low-intensity, full-face prototypical expressions of emotion were judged as the intended emotion at rates significantly greater than chance. A number of the proposed variants were also judged as the intended emotions. Both signal and intensity clarities were individually associated with agreement rates; when their interrelationships were taken into account, signal clarity independently predicted agreement rates but intensity clarity did not. The presence or absence of specific muscles appeared to be more important to agreement rates than their intensity levels, with the exception of the intensity of zygomatic major, which was positively correlated with agreement rates for judgments of joy.

  8. Robust Feature Detection for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Spiros Ioannou

    2007-07-01

    This paper presents a robust and adaptable facial feature extraction system used for facial expression recognition in human-computer interaction (HCI) environments. Such environments are usually uncontrolled in terms of lighting and color quality, as well as human expressivity and movement; as a result, using a single feature extraction technique may fail in some parts of a video sequence, while performing well in others. The proposed system is based on a multicue feature extraction and fusion technique, which provides MPEG-4-compatible features assorted with a confidence measure. This confidence measure is used to pinpoint cases where detection of individual features may be wrong and reduce their contribution to the training phase or their importance in deducing the observed facial expression, while the fusion process ensures that the final result regarding the features will be based on the extraction technique that performed better given the particular lighting or color conditions. Real data and results are presented, involving both extreme and intermediate expression/emotional states, obtained within the sensitive artificial listener HCI environment that was generated in the framework of related European projects.
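
    The fusion idea in this abstract (down-weighting feature estimates whose confidence is low) can be illustrated with a toy weighted average. The weighting scheme below is a placeholder assumption, not the paper's actual MPEG-4 fusion rule.

    ```python
    # Toy illustration of confidence-weighted fusion of feature estimates
    # produced by several extraction techniques.
    import numpy as np

    def fuse_estimates(estimates, confidences):
        # estimates:   (n_techniques, n_features) candidate feature vectors
        # confidences: (n_techniques,) per-technique confidence measures
        w = np.asarray(confidences, dtype=float)
        w /= w.sum()                   # normalize so the weights sum to 1
        return (np.asarray(estimates) * w[:, None]).sum(axis=0)
    ```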

  9. Automatic recognition of emotions from facial expressions

    Science.gov (United States)

    Xue, Henry; Gertner, Izidor

    2014-06-01

    In the human-computer interaction (HCI) process it is desirable to have an artificial intelligent (AI) system that can identify and categorize human emotions from facial expressions. Such systems can be used in security, in entertainment industries, and also to study visual perception, social interactions and disorders (e.g. schizophrenia and autism). In this work we survey and compare the performance of different feature extraction algorithms and classification schemes. We introduce a faster feature extraction method that resizes and applies a set of filters to the data images without sacrificing the accuracy. In addition, we have enhanced SVM to multiple dimensions while retaining the high accuracy rate of SVM. The algorithms were tested using the Japanese Female Facial Expression (JAFFE) Database and the Database of Faces (AT&T Faces).
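
    A generic version of the pipeline this abstract describes (resize the image, apply a set of filters, classify with a multi-class SVM) might look like the sketch below. The specific filters and SVM settings are assumptions; the paper's enhanced multi-dimensional SVM is not reproduced here.

    ```python
    # Sketch: resize-and-filter feature extraction followed by an SVM classifier.
    import numpy as np
    from scipy import ndimage
    from sklearn.svm import SVC

    def filtered_features(gray, size=(32, 32)):
        small = ndimage.zoom(gray, (size[0] / gray.shape[0],
                                    size[1] / gray.shape[1]))
        sob_x = ndimage.sobel(small, axis=1)      # horizontal edge response
        sob_y = ndimage.sobel(small, axis=0)      # vertical edge response
        blur = ndimage.gaussian_filter(small, sigma=1.0)
        return np.concatenate([f.ravel() for f in (small, sob_x, sob_y, blur)])

    def train_svm(images, labels):
        # images: grayscale face arrays (e.g., JAFFE); labels: emotion classes
        X = np.array([filtered_features(im) for im in images])
        return SVC(kernel="rbf", decision_function_shape="ovr").fit(X, labels)
    ```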

  10. Reproducibility of the dynamics of facial expressions in unilateral facial palsy.

    Science.gov (United States)

    Alagha, M A; Ju, X; Morley, S; Ayoub, A

    2018-02-01

    The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds and was recorded in real time. Thus a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of the 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student t-test, P<0.05). Facial expressions of lip purse, cheek puff, and raising of eyebrows were reproducible. Facial expressions of maximum smile and forceful eye closure were not reproducible. The limited coordination of various groups of facial muscles contributed to the lack of reproducibility of these facial expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
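
    The reproducibility comparison in this abstract (partial Procrustes alignment of corresponding vertices, then a root mean square distance) reduces to a short computation, sketched below under the assumption of rotation-plus-translation alignment without scaling over matched vertex arrays.

    ```python
    # Sketch: partial Procrustes alignment (no scaling) + RMS vertex distance.
    import numpy as np

    def partial_procrustes_rms(A, B):
        # A, B: (n_vertices, 3) arrays of corresponding mesh vertices
        A0 = A - A.mean(axis=0)                   # remove translation
        B0 = B - B.mean(axis=0)
        U, _, Vt = np.linalg.svd(B0.T @ A0)       # Kabsch rotation fit
        d = np.sign(np.linalg.det(U @ Vt))        # guard against reflection
        R = U @ np.diag([1.0, 1.0, d]) @ Vt
        B_aligned = B0 @ R                        # rotate B onto A
        return np.sqrt(np.mean(np.sum((A0 - B_aligned) ** 2, axis=1)))
    ```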

  11. Meta-Analysis of the First Facial Expression Recognition Challenge

    NARCIS (Netherlands)

    Valstar, M.F.; Mehu, M.; Jiang, Bihan; Pantic, Maja; Scherer, K.

    Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability

  12. Influence of facial expression on memory for facial identity: effects of visual features or emotional meaning?

    Science.gov (United States)

    D'Argembeau, Arnaud; Van der Linden, Martial

    2011-02-01

    Research has shown that neutral faces are better recognized when they had been presented with happy rather than angry expressions at study, suggesting that emotional signals conveyed by facial expressions influenced the encoding of novel facial identities in memory. An alternative explanation, however, would be that the influence of facial expression resulted from differences in the visual features of the expressions employed. In this study, this possibility was tested by manipulating facial expression at study versus test. In line with earlier studies, we found that neutral faces were better recognized when they had been previously encountered with happy rather than angry expressions. On the other hand, when neutral faces were presented at study and participants were later asked to recognize happy or angry faces of the same individuals, no influence of facial expression was detected. As the two experimental conditions involved exactly the same amount of changes in the visual features of the stimuli between study and test, the results cannot be simply explained by differences in the visual properties of different facial expressions and may instead reside in their specific emotional meaning. The findings further suggest that the influence of facial expression is due to disruptive effects of angry expressions rather than facilitative effects of happy expressions. This study thus provides additional evidence that facial identity and facial expression are not processed completely independently. PsycINFO Database Record (c) 2011 APA, all rights reserved.

  13. The Relationships between Processing Facial Identity, Emotional Expression, Facial Speech, and Gaze Direction during Development

    Science.gov (United States)

    Spangler, Sibylle M.; Schwarzer, Gudrun; Korell, Monika; Maier-Karius, Johanna

    2010-01-01

    Four experiments were conducted with 5- to 11-year-olds and adults to investigate whether facial identity, facial speech, emotional expression, and gaze direction are processed independently of or in interaction with one another. In a computer-based, speeded sorting task, participants sorted faces according to facial identity while disregarding…

  14. Automatic facial expression tracking for 4D range scans

    OpenAIRE

    Xiang, G.; Ju, X.; Holt, P

    2010-01-01

    This paper presents a fully automatic approach of spatio-temporal facial expression tracking for 4D range scans without any manual interventions (such as specifying landmarks). The approach consists of three steps: rigid registration, facial model reconstruction, and facial expression tracking. A Scaling Iterative Closest Points (SICP) algorithm is introduced to compute the optimal rigid registration between a template facial model and a range scan with consideration of the scale problem. A d...

  15. Facial expression decoding in early Parkinson's disease.

    Science.gov (United States)

    Pell, Marc D; Leonard, Carol L

    2005-05-01

    The ability to derive emotional and non-emotional information from unfamiliar, static faces was evaluated in 21 adults with idiopathic Parkinson's disease (PD) and 21 healthy control subjects. Participants' sensitivity to emotional expressions was comprehensively assessed in tasks of discrimination, identification, and rating of five basic emotions: happiness, (pleasant) surprise, anger, disgust, and sadness. Subjects also discriminated and identified faces according to underlying phonemic ("facial speech") cues and completed a neuropsychological test battery. Results uncovered limited evidence that the processing of emotional faces differed between the two groups in our various conditions, adding to recent arguments that these skills are frequently intact in non-demented adults with PD [R. Adolphs, R. Schul, D. Tranel, Intact recognition of facial emotion in Parkinson's disease, Neuropsychology 12 (1998) 253-258]. Patients could also accurately interpret facial speech cues and discriminate the identity of unfamiliar faces in a normal manner. There were some indications that basal ganglia pathology in PD contributed to selective difficulties recognizing facial expressions of disgust, consistent with a growing literature on this topic. Collectively, findings argue that abnormalities for face processing are not a consistent or generalized feature of medicated adults with mild-moderate PD, prompting discussion of issues that may be contributing to heterogeneity within this literature. Our results imply a more limited role for the basal ganglia in the processing of emotion from static faces relative to speech prosody, for which the same PD patients exhibited pronounced deficits in a parallel set of tasks [M.D. Pell, C. Leonard, Processing emotional tone from speech in Parkinson's disease: a role for the basal ganglia, Cogn. Affect. Behav. Neurosci. 3 (2003) 275-288]. These diverging patterns allow for the possibility that basal ganglia mechanisms are more engaged by

  16. Perceptually Valid Facial Expressions for Character-Based Applications

    Directory of Open Access Journals (Sweden)

    Ali Arya

    2009-01-01

    Full Text Available This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research was done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native language learning, but the results can be applied to many other applications, such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”
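    The "facial expression unit" idea can be illustrated with a toy sketch. Everything below, the emotion-space axes, region bounds, and action groupings, is invented for illustration; only the mechanism (regions of a 3D emotion space activating groups of facial actions) follows the abstract.

```python
# Regions of a 3D emotion space mapped to groups of facial actions.
from dataclasses import dataclass

@dataclass
class ExpressionUnit:
    name: str
    actions: list            # facial actions, e.g. FACS-style AU labels
    lo: tuple                # region lower corner (arousal, valence, stance)
    hi: tuple                # region upper corner

UNITS = [
    ExpressionUnit("brow_raise", ["AU1", "AU2"], (0.3, -1.0, -1.0), (1.0, 1.0, 1.0)),
    ExpressionUnit("smile",      ["AU12"],       (-1.0, 0.2, -1.0), (1.0, 1.0, 1.0)),
    ExpressionUnit("brow_lower", ["AU4"],        (-1.0, -1.0, -1.0), (1.0, -0.2, 1.0)),
]

def units_for(emotion_point):
    """Return facial actions of every unit whose region contains the point."""
    return [a for u in UNITS
            if all(l <= c <= h for c, l, h in zip(emotion_point, u.lo, u.hi))
            for a in u.actions]

# e.g. a pleasant-surprise mix: high arousal, positive valence.
print(units_for((0.7, 0.6, 0.0)))   # -> ['AU1', 'AU2', 'AU12']
```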

  17. Development of the Korean Facial Emotion Stimuli: Korea University Facial Expression Collection 2nd Edition.

    Science.gov (United States)

    Kim, Sun-Min; Kwon, Ye-Jin; Jung, Soo-Yun; Kim, Min-Ji; Cho, Yang Seok; Kim, Hyun Taek; Nam, Ki-Chun; Kim, Hackjin; Choi, Kee-Hong; Choi, June-Seek

    2017-01-01

    Background: Developing valid emotional facial stimuli for specific ethnicities creates ample opportunities to investigate both the nature of emotional facial information processing in general and clinical populations as well as the underlying mechanisms of facial emotion processing within and across cultures. Given that most entries in emotional facial stimuli databases were developed with western samples, and given that very few of the eastern emotional facial stimuli sets were based strictly on the Ekman's Facial Action Coding System, developing valid emotional facial stimuli of eastern samples remains a high priority. Aims: To develop and examine the psychometric properties of six basic emotional facial stimuli recruiting professional Korean actors and actresses based on the Ekman's Facial Action Coding System for the Korea University Facial Expression Collection-Second Edition (KUFEC-II). Materials And Methods: Stimulus selection was done in two phases. First, researchers evaluated the clarity and intensity of each stimulus developed based on the Facial Action Coding System. Second, researchers selected a total of 399 stimuli from a total of 57 actors and actresses, which were then rated on accuracy, intensity, valence, and arousal by 75 independent raters. Conclusion: The hit rates between the targeted and rated expressions of the KUFEC-II were all above 80%, except for fear (50%) and disgust (63%). The KUFEC-II appears to be a valid emotional facial stimuli database, providing the largest set of emotional facial stimuli. The mean intensity score was 5.63 (out of 7), suggesting that the stimuli delivered the targeted emotions with great intensity. All positive expressions were rated as having a high positive valence, whereas all negative expressions were rated as having a high negative valence. The KUFEC II is expected to be widely used in various psychological studies on emotional facial expression. KUFEC-II stimuli can be obtained through contacting the

  19. Development of the Korean Facial Emotion Stimuli: Korea University Facial Expression Collection 2nd Edition

    Directory of Open Access Journals (Sweden)

    Sun-Min Kim

    2017-05-01

    Full Text Available Background: Developing valid emotional facial stimuli for specific ethnicities creates ample opportunities to investigate both the nature of emotional facial information processing in general and clinical populations as well as the underlying mechanisms of facial emotion processing within and across cultures. Given that most entries in emotional facial stimuli databases were developed with western samples, and given that very few of the eastern emotional facial stimuli sets were based strictly on the Ekman’s Facial Action Coding System, developing valid emotional facial stimuli of eastern samples remains a high priority. Aims: To develop and examine the psychometric properties of six basic emotional facial stimuli recruiting professional Korean actors and actresses based on the Ekman’s Facial Action Coding System for the Korea University Facial Expression Collection-Second Edition (KUFEC-II). Materials And Methods: Stimulus selection was done in two phases. First, researchers evaluated the clarity and intensity of each stimulus developed based on the Facial Action Coding System. Second, researchers selected a total of 399 stimuli from a total of 57 actors and actresses, which were then rated on accuracy, intensity, valence, and arousal by 75 independent raters. Conclusion: The hit rates between the targeted and rated expressions of the KUFEC-II were all above 80%, except for fear (50%) and disgust (63%). The KUFEC-II appears to be a valid emotional facial stimuli database, providing the largest set of emotional facial stimuli. The mean intensity score was 5.63 (out of 7), suggesting that the stimuli delivered the targeted emotions with great intensity. All positive expressions were rated as having a high positive valence, whereas all negative expressions were rated as having a high negative valence. The KUFEC II is expected to be widely used in various psychological studies on emotional facial expression. KUFEC-II stimuli can be obtained through

  20. Deficits in the Mimicry of Facial Expressions in Parkinson's Disease

    OpenAIRE

    Livingstone, Steven R.; Vezer, Esztella; Lucy M. McGarry; Lang, Anthony E.; Frank A. Russo

    2016-01-01

    Background: Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behavior may be impaired in individuals with Parkinson's disease, for whom the loss of facial movements is a clinical feature. Objective: To assess the presence of facial mimicry in patients with Parkinson's disease. Method: Twenty-seven non-depressed patients with idiopathic Parkinson's disease and 28 age-matched controls had their facial muscles recorded with electr...

  1. Towards Real-Life Facial Expression Recognition Systems

    Directory of Open Access Journals (Sweden)

    BENTA, K.-I.

    2015-05-01

    Full Text Available Facial expressions are a set of symbols of great importance for human-to-human communication. Spontaneous in nature, diverse, and personal, facial expressions demand real-time, complex, robust, and adaptable facial expression recognition (FER) systems to facilitate human-computer interaction. Recent years' research efforts in the recognition of facial expressions are preparing FER systems to step into real life. In order to meet the aforementioned requirements, this article surveys the work in FER since 2008, particularly adopting the discrete-states emotion model, in a quest for the most valuable FER work/systems. We first present the new spontaneous facial expression databases and then organize the real-time FER solutions, grouped by spontaneous and posed facial expression databases. Automatic FER systems are then compared and the cross-database validation method is presented. Finally, we outline open issues that FER systems must address to meet real-life challenges.

  2. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    Science.gov (United States)

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  3. The neural mechanism of imagining facial affective expression.

    Science.gov (United States)

    Kim, Sung-Eun; Kim, Ji-Woong; Kim, Jae-Jin; Jeong, Bum Seok; Choi, Eun Ae; Jeong, Young-Gil; Kim, Ji Hyung; Ku, Jeonghun; Ki, Seon Wan

    2007-05-11

    To react appropriately in social relationships, we have a tendency to simulate how others think of us through mental imagery. In particular, simulating other people's facial affective expressions through imagery in social situations enables us to enact vivid affective responses, which may be inducible from other people's affective responses that are predicted as results of our mental imagery of future behaviors. Therefore, this ability is an important cognitive feature of diverse advanced social cognition in humans. We used functional magnetic resonance imaging to examine brain activation during the imagery of emotional facial expressions as compared to neutral facial expressions. Twenty-one right-handed subjects participated in this study. We observed activation of the amygdala during the imagining of emotional facial affect versus the imagining of neutral facial affect. In addition, we also observed activation of several areas of the brain, including the dorsolateral prefrontal cortex, ventral premotor cortex, superior temporal sulcus, parahippocampal gyrus, lingual gyrus, and the midbrain. Our results suggest that the areas of the brain known to be involved in the actual perception of affective facial expressions are also implicated in the imagery of affective facial expressions. In particular, given that the processing of information concerning the facial patterning of different emotions and the enactment of behavioral responses, such as autonomic arousal, are central components of the imagery of emotional facial expressions, we postulate a central role for the amygdala in the imagery of emotional facial expressions.

  4. Facial Expression Generation from Speaker's Emotional States in Daily Conversation

    Science.gov (United States)

    Mori, Hiroki; Ohshima, Koh

    A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former are represented by vectors with psychologically defined abstract dimensions, and the latter are coded by the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained with the data. The effectiveness of the proposed method is verified by a subjective evaluation test. As a result, the Mean Opinion Score with respect to the suitability of the generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.
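    A toy sketch of this kind of mapping is shown below: a small neural network regressing from an emotional-state vector (psychological dimensions, here assumed to be pleasantness and arousal) to intensities of facial action units. The training pairs are fabricated placeholders; the paper used rated conversational data from one speaker.

```python
# Emotion-dimension vector -> facial action unit intensities via a small MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor

# (pleasantness, arousal) -> intensities of (AU12 lip-corner pull, AU4 brow lower)
X = np.array([[ 0.9, 0.6], [ 0.8, 0.2], [-0.7, 0.8], [-0.6, 0.3], [0.0, 0.0]])
y = np.array([[ 1.0, 0.0], [ 0.6, 0.0], [ 0.0, 0.9], [ 0.0, 0.5], [0.0, 0.0]])

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X, y)

# Predict facial action intensities for a mildly pleasant, calm state.
print(net.predict([[0.5, 0.1]]).round(2))
```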

  5. Asians' Facial Responsiveness to Basic Tastes by Automated Facial Expression Analysis System.

    Science.gov (United States)

    Zhi, Ruicong; Cao, Lianyu; Cao, Gang

    2017-03-01

    Growing evidence shows that consumer choices in real life are mostly driven by unconscious rather than conscious mechanisms. The unconscious process can be measured by behavioral measurements. This study aims to apply automatic facial expression analysis to represent consumers' emotions, and to explore the relationships between sensory perception and facial responses. Basic taste solutions (sourness, sweetness, bitterness, umami, and saltiness) at six levels, plus water, were used, covering most of the tastes found in food and drink. The other contribution of this study is to analyze the characteristics of facial expressions and the correlation between facial expressions and perceived hedonic liking for Asian consumers. Until now, facial expression applications have been reported only for western consumers, and few studies have investigated facial responses during food consumption for Asian consumers. Experimental results indicated that facial expressions could identify different stimuli with various concentrations and different hedonic levels. Perceived liking increased at lower concentrations and decreased at higher concentrations, while samples with medium concentrations were perceived as the most pleasant, except for sweetness and bitterness. High correlations were found between the perceived intensities of bitterness, umami, and saltiness and the facial reactions of disgust and fear. The facial expressions disgust and anger could characterize the emotion "dislike," and happiness could characterize the emotion "like," while neutral could represent "neither like nor dislike." The identified facial expressions agree with the perceived sensory emotions elicited by basic taste solutions. The correlations between hedonic levels and facial expression intensities obtained in this study are in accordance with those reported for western consumers. © 2017 Institute of Food Technologists®.

  6. The Communicative Function of Sad Facial Expressions

    Directory of Open Access Journals (Sweden)

    Lawrence Ian Reed

    2017-03-01

    Full Text Available What are the communicative functions of sad facial expressions? Research shows that people feel sadness in response to losses, but it is unclear whether sad expressions function to communicate losses to others and, if so, what makes these signals credible. Here we use economic games to test the hypothesis that sad expressions lend credibility to claims of loss. Participants play the role of either a proposer or a recipient in a game with a fictional backstory and real monetary payoffs. The proposers view a (fictional) video of the recipient's character displaying either a neutral or a sad expression paired with a claim of loss. The proposer then decides how much money to give to the recipient. In three experiments, we test alternative theories by using situations in which the recipient's losses were uncertain (Experiment 1), the recipient's losses were certain (Experiment 2), or the recipient claims failed gains rather than losses (Experiment 3). Overall, we find that participants gave more money to recipients who displayed sad expressions compared to neutral expressions, but only under conditions of uncertain loss. This finding supports the hypothesis that sad expressions function to increase the credibility of claims of loss.

  7. From facial expressions to bodily gestures

    Science.gov (United States)

    2016-01-01

    This article aims to determine to what extent photographic practices in psychology, psychiatry and physiology contributed to the definition of the external bodily signs of passions and emotions in the second half of the 19th century in France. Bridging the gap between recent research in the history of emotions and photographic history, the following analyses focus on the photographic production of scientists and photographers who made significant contributions to the study of expressions and gestures, namely Duchenne de Boulogne, Charles Darwin, Paul Richer and Albert Londe. This article argues that photography became a key technology in their works due to the adequacy of the exposure times of different cameras to the duration of the bodily manifestations to be recorded, and that these uses constituted facial expressions and bodily gestures as particular objects of scientific study. PMID:26900264

  8. Clock gene expression in the murine gastrointestinal tract: endogenous rhythmicity and effects of a feeding regimen.

    Science.gov (United States)

    Hoogerwerf, Willemijntje A; Hellmich, Helen L; Cornélissen, Germaine; Halberg, Franz; Shahinian, Vahakn B; Bostwick, Jonathon; Savidge, Tor C; Cassone, Vincent M

    2007-10-01

    Based on observations that the gastrointestinal tract is subject to various 24-hour rhythmic processes, it is conceivable that some of these rhythms are under circadian clock gene control. We hypothesized that clock genes are present in the gastrointestinal tract and that they are part of a functional molecular clock that coordinates rhythmic physiologic functions. The effects of timed feeding and vagotomy on temporal clock gene expression (clock, bmal1, per1-3, cry1-2) in the gastrointestinal tract and suprachiasmatic nucleus (bmal, per2) of C57BL/6J mice were examined using real-time polymerase chain reaction and Western blotting (BMAL, PER2). Colonic clock gene localization was examined using immunohistochemistry (BMAL, PER1-2). Clock immunoreactivity was observed in the myenteric plexus and epithelial crypt cells. Clock genes were expressed rhythmically throughout the gastrointestinal tract. Timed feeding shifted clock gene expression at the RNA and protein level but did not shift clock gene expression in the central clock. Vagotomy did not alter gastric clock gene expression compared with sham-treated controls. The murine gastrointestinal tract contains functional clock genes, which are molecular core components of the circadian clock. Daytime feeding in nocturnal rodents is a strong synchronizer of gastrointestinal clock genes. This synchronization occurs independently of the central clock. Gastric clock gene expression is not mediated through the vagal nerve. The presence of clock genes in the myenteric plexus and epithelial cells suggests a role for clock genes in circadian coordination of gastrointestinal functions such as motility, cell proliferation, and migration.

  9. Real-Time Facial Expression Transfer with Single Video Camera

    OpenAIRE

    Liu, S.; Yang, Xiaosong; Wang, Z.; Xiao, Zhidong; Zhang, J.

    2016-01-01

    Facial expression transfer is currently an active research field. However, 2D image warping based methods suffer from depth ambiguity, and specific hardware is required for depth-based methods to work. We present a novel markerless, real-time, online facial transfer method that requires only a single video camera. Our method adapts a model to user-specific facial data, computes expression variances in real time and rapidly transfers them to another target. Our method can be applied to videos w...

  10. Impaired Overt Facial Mimicry in Response to Dynamic Facial Expressions in High-Functioning Autism Spectrum Disorders

    Science.gov (United States)

    Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi

    2015-01-01

    Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…

  11. Neural Mechanism of Facial Expression Perception in Intellectually Gifted Adolescents

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan

    2015-01-01

    The current study investigated the relationship between general intelligence and the three stages of facial expression processing. Two groups of adolescents with different levels of general intelligence were required to identify three types of facial expressions (happy, sad, and neutral faces...

  12. Dynamic Facial Expression Recognition With Atlas Construction and Sparse Representation.

    Science.gov (United States)

    Guo, Yimo; Zhao, Guoying; Pietikainen, Matti

    2016-05-01

    In this paper, a new dynamic facial expression recognition method is proposed. Dynamic facial expression recognition is formulated as a longitudinal groupwise registration problem. The main contributions of this method lie in the following aspects: 1) subject-specific facial feature movements of different expressions are described by a diffeomorphic growth model; 2) salient longitudinal facial expression atlas is built for each expression by a sparse groupwise image registration method, which can describe the overall facial feature changes among the whole population and can suppress the bias due to large intersubject facial variations; and 3) both the image appearance information in spatial domain and topological evolution information in temporal domain are used to guide recognition by a sparse representation method. The proposed framework has been extensively evaluated on five databases for different applications: the extended Cohn-Kanade, MMI, FERA, and AFEW databases for dynamic facial expression recognition, and UNBC-McMaster database for spontaneous pain expression monitoring. This framework is also compared with several state-of-the-art dynamic facial expression recognition methods. The experimental results demonstrate that the recognition rates of the new method are consistently higher than other methods under comparison.
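    The third component, classification by sparse representation, can be sketched generically; the following is standard sparse-representation classification (SRC) rather than the paper's full atlas-based pipeline, and the dictionary layout, Lasso coder, and parameter values are assumptions.

```python
# Sparse-representation classification: code the test vector over training
# atoms, then pick the class with the smallest reconstruction residual.
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(D: np.ndarray, labels: np.ndarray, x: np.ndarray) -> int:
    """D: (n_features, n_train) dictionary of training columns; x: test vector."""
    coder = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
    coder.fit(D, x)                     # sparse code of x over the dictionary
    code = coder.coef_
    residuals = {}
    for c in np.unique(labels):
        code_c = np.where(labels == c, code, 0.0)   # keep class-c coefficients
        residuals[c] = np.linalg.norm(x - D @ code_c)
    return min(residuals, key=residuals.get)
```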

  13. Trigger Features for Conveying Facial Expressions: The Role of Segmentation.

    Science.gov (United States)

    Ramachandran, Vilayanur S; Chunharas, Chaipat; Smythies, Michael

    2017-01-01

    Primates are especially good at recognizing facial expressions using two contrasting strategies: an individual diagnostic feature (e.g., raised eyebrows or a lowered mouth corner) versus a relationship between features. We report several novel experiments that demonstrate a profound role of grouping and segmentation, including stereo, in the recognition of facial expressions.

  14. Regression-based Multi-View Facial Expression Recognition

    NARCIS (Netherlands)

    Rudovic, Ognjen; Patras, Ioannis; Pantic, Maja

    2010-01-01

    We present a regression-based scheme for multi-view facial expression recognition based on 2D geometric features. We address the problem by mapping facial points (e.g. mouth corners) from non-frontal to frontal view where further recognition of the expressions can be performed using a
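    A heavily simplified sketch of the mapping idea follows, assuming a plain linear regression rather than whatever model the paper uses: learn to map flattened non-frontal 2D facial point coordinates to their frontal-view positions, after which a frontal-view expression classifier can be applied. All data below are synthetic placeholders.

```python
# Regress non-frontal facial landmark coordinates to the frontal view.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_points = 20                                         # e.g. mouth/eye/brow corners
X_nonfrontal = rng.normal(size=(200, 2 * n_points))   # flattened (x, y) points
# Synthetic "frontal" targets: an arbitrary linear view change, for the demo.
X_frontal = X_nonfrontal @ rng.normal(size=(2 * n_points, 2 * n_points)) * 0.1

mapper = Ridge(alpha=1.0).fit(X_nonfrontal, X_frontal)
frontalized = mapper.predict(X_nonfrontal[:1])        # frontal-view estimate
```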

  15. Facial appearance, gender, and emotion expression.

    Science.gov (United States)

    Hess, Ursula; Adams, Reginald B; Kleck, Robert E

    2004-12-01

    Western gender stereotypes describe women as affiliative and more likely to show happiness, and men as dominant and more likely to show anger. The authors assessed the hypothesis that the gender-stereotypic effects on perceptions of anger and happiness are partially mediated by facial appearance markers of dominance and affiliation by equating men's and women's faces for these cues. In 2 studies, women were rated as more angry and men as more happy, a reversal of the stereotype. Ratings of sadness, however, were not systematically affected. It is posited that markers of affiliation and dominance, themselves confounded with gender, interact with the expressive cues for anger and happiness to produce emotional perceptions that have been viewed as simple gender stereotypes. copyright (c) 2004 APA, all rights reserved.

  16. Facial expression classification using EEG and gyroscope signals.

    Science.gov (United States)

    Toth, Jake; Arvaneh, Mahnaz

    2017-07-01

    In this paper, muscle and gyroscope signals provided by a low-cost EEG headset were used to classify six different facial expressions. Muscle activity generated by facial expressions is visible in EEG data recorded from the scalp. Using the already present EEG device to classify facial expressions allows for a new hybrid brain-computer interface (BCI) system without introducing new hardware such as separate electromyography (EMG) electrodes. To classify facial expressions, time-domain and frequency-domain EEG data with different sampling rates were used as inputs to the classifiers. The experimental results showed that with sampling rates and classification methods optimized for each participant and feature set, high-accuracy classification of facial expressions was achieved. Moreover, adding information extracted from a gyroscope embedded in the EEG headset increased performance by 9 to 16% on average.
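    A hedged sketch of the kind of pipeline the abstract describes follows: band power features from scalp EEG epochs (which also carry facial-muscle EMG activity), concatenated with gyroscope statistics, fed to a classifier. The epoch shapes, band limits, and sampling rate are placeholders, not the paper's settings.

```python
# EEG band-power + gyroscope features for facial expression classification.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

def features(eeg_epoch: np.ndarray, gyro_epoch: np.ndarray, fs: int = 128):
    """eeg_epoch: (n_channels, n_samples); gyro_epoch: (3, n_samples)."""
    freqs, psd = welch(eeg_epoch, fs=fs, nperseg=fs)
    band = (freqs >= 20) & (freqs <= 45)        # EMG-dominated high band
    emg_power = psd[:, band].mean(axis=1)       # per-channel band power
    gyro_stats = np.concatenate([gyro_epoch.mean(1), gyro_epoch.std(1)])
    return np.concatenate([emg_power, gyro_stats])

# Usage sketch, with `epochs` and `labels` assumed to exist:
# X = np.stack([features(e, g) for e, g in epochs])
# clf = SVC().fit(X, labels)
```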

  17. Enhanced subliminal emotional responses to dynamic facial expressions

    Directory of Open Access Journals (Sweden)

    Wataru eSato

    2014-09-01

    Full Text Available Emotional processing without conscious awareness plays an important role in human social interaction. Several behavioral studies reported that subliminal presentation of photographs of emotional facial expressions induces unconscious emotional processing. However, it was difficult to elicit strong and robust effects using this method. We hypothesized that dynamic presentations of facial expressions would enhance subliminal emotional effects and tested this hypothesis with two experiments. Fearful or happy facial expressions were presented dynamically or statically in either the left or the right visual field for 20 ms (Experiment 1) or 30 ms (Experiment 2). Nonsense target ideographs were then presented, and participants reported their preference for them. The results consistently showed that dynamic presentations of emotional facial expressions induced more evident emotional biases toward subsequent targets than did static ones. These results indicate that dynamic presentations of emotional facial expressions induce more evident unconscious emotional processing.

  18. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    Science.gov (United States)

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Facial Expression Recognition Using Stationary Wavelet Transform Features

    Directory of Open Access Journals (Sweden)

    Huma Qayyum

    2017-01-01

    Full Text Available Humans use facial expressions to convey personal feelings. Facial expressions need to be automatically recognized to design control and interactive applications. Accurate feature extraction is one of the key steps in an automatic facial expression recognition system. Current frequency-domain facial expression recognition systems have not fully utilized facial elements and muscle movements for recognition. In this paper, the stationary wavelet transform is used to extract features for facial expression recognition due to its good localization characteristics in both the spectral and spatial domains. More specifically, a combination of the horizontal and vertical subbands of the stationary wavelet transform is used, as these subbands contain muscle movement information for the majority of facial expressions. Feature dimensionality is further reduced by applying the discrete cosine transform to these subbands. The selected features are then passed into a feed-forward neural network trained with the back-propagation algorithm. Average recognition rates of 98.83% and 96.61% are achieved for the JAFFE and CK+ datasets, respectively, and an accuracy of 94.28% is achieved on a locally recorded MS-Kinect dataset. The proposed technique is very promising for facial expression recognition when compared to other state-of-the-art techniques.
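    A rough sketch of the described feature chain, under assumptions about sizes and parameters (wavelet choice, DCT block size, and network width are mine, not the paper's): a one-level 2D stationary wavelet transform, the horizontal and vertical detail subbands, a DCT for dimensionality reduction, and a feed-forward neural network classifier.

```python
# SWT subbands -> low-frequency DCT block -> feed-forward network features.
import numpy as np
import pywt
from scipy.fft import dctn
from sklearn.neural_network import MLPClassifier

def swt_dct_features(face: np.ndarray, keep: int = 8) -> np.ndarray:
    """face: square grayscale image with an even side length."""
    (_, (horiz, vert, _)), = pywt.swt2(face, "haar", level=1)
    feats = []
    for band in (horiz, vert):                      # muscle-movement subbands
        coeffs = dctn(band, norm="ortho")
        feats.append(coeffs[:keep, :keep].ravel())  # low-frequency DCT block
    return np.concatenate(feats)

# Usage sketch, with `faces` and `labels` assumed to exist:
# X = np.stack([swt_dct_features(img) for img in faces])
# clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000).fit(X, labels)
```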

  20. The not face: A grammaticalization of facial expressions of emotion.

    Science.gov (United States)

    Benitez-Quiroz, C Fabian; Wilbur, Ronnie B; Martinez, Aleix M

    2016-05-01

    Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers. Copyright © 2016 Elsevier B.V. All rights reserved.

  1. Colour Perception on Facial Expression towards Emotion

    Directory of Open Access Journals (Sweden)

    Rubita Sudirman

    2012-12-01

    Full Text Available This study investigates human perceptions of pairings between facial expressions of emotion and colours. A group of 27 subjects, mostly young Malaysians, participated in this study. For each of the seven faces expressing the basic emotions neutral, happiness, surprise, anger, disgust, fear, and sadness, a single colour was chosen from eight basic colours as the best visual match to the face. The different emotions appear to be well characterized by a single colour. The analysis draws on both psychology and colour engineering. Subjects matched the seven emotions according to their own perceptions and feelings. Data from 12 males and 12 females were then randomly chosen from the previous data to compare colour perception between genders. The success of the test depended on subjects being able to propose a single colour for each expression. The results are presented as counts and percentages, as a guide for colour designers and for the field of psychology.

  2. Parameterized Facial Expression Synthesis Based on MPEG-4

    Directory of Open Access Journals (Sweden)

    Raouzaiou Amaryllis

    2002-01-01

    Full Text Available In the framework of MPEG-4, one can include applications where virtual agents, utilizing both textual and multisensory data, including facial expressions and nonverbal speech, help systems become attuned to the actual feelings of the user. Applications of this technology are expected in educational environments, virtual collaborative workplaces, communities, and interactive entertainment. Facial animation has gained much interest within the MPEG-4 framework, with implementation details being an open research area (Tekalp, 1999). In this paper, we describe a method for enriching human-computer interaction, focusing on the analysis and synthesis of primary and intermediate facial expressions (Ekman and Friesen, 1978). To achieve this goal, we utilize facial animation parameters (FAPs) to model primary expressions and describe a rule-based technique for handling intermediate ones. A relation between FAPs and the activation parameter proposed in classical psychological studies is established, leading to parameterized facial expression analysis and synthesis notions compatible with the MPEG-4 standard.
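    An illustrative sketch of the parameterization idea (not the paper's exact profiles): a primary expression is a profile of MPEG-4 FAP displacements, and an intermediate expression is produced by scaling that profile with an activation parameter in [0, 1]. The FAP names and magnitudes below are invented for the example.

```python
# Scale a primary-expression FAP profile by an activation parameter.
JOY_PROFILE = {
    "raise_l_cornerlip": 120,    # illustrative amplitudes in FAP units
    "raise_r_cornerlip": 120,
    "open_jaw": 40,
}

def intermediate_expression(profile: dict, activation: float) -> dict:
    """Scale a primary-expression FAP profile by an activation in [0, 1]."""
    a = min(max(activation, 0.0), 1.0)
    return {fap: round(value * a) for fap, value in profile.items()}

print(intermediate_expression(JOY_PROFILE, 0.5))   # a mild smile
```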

  3. Impaired recognition of facial emotional expressions in Multiple Sclerosis

    Directory of Open Access Journals (Sweden)

    Bruno Lenne

    2014-04-01

    Full Text Available This study investigated the ability of patients with Relapsing-Remitting Multiple Sclerosis (RRMS) to recognize emotional facial expressions. Cognitive deterioration, depression, alexithymia and facial expression recognition ability were assessed in fifty-five patients and twenty-one controls. Facial expression recognition ability was measured by a forced-choice labeling procedure over five emotional facial expressions (anger, fear, sadness, happiness, none). RRMS patients exhibited a global impairment in the recognition of facial emotion (p=.00049), specifically for anger (p=.01), sadness (p=.0001), and fear (p=.011). The deficit in emotion recognition was independent of disability (assessed by EDSS score). This deficit was correlated with depression and partially with cognitive deterioration. These results are discussed in terms of global cortico-cortical disconnections.

  4. Evaluation of facial expression in acute pain in cats.

    OpenAIRE

    Holden, E.; Calvo, G.; Collins, M.; Bell, A.; Reid, J.; Scott, E M.; Nolan, Andrea M.

    2014-01-01

    OBJECTIVES To describe the development of a facial expression tool differentiating pain-free cats from those in acute pain. METHODS Observers shown facial images from painful and pain-free cats were asked to identify if they were in pain or not. From facial images, anatomical landmarks were identified and distances between these were mapped. Selected distances underwent statistical analysis to identify features discriminating pain-free and painful cats. Additionally, thumbnail photographs were r...

  5. Facial expression recognition based on improved deep belief networks

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to improve the robustness of facial expression recognition, a method based on Local Binary Patterns (LBP) combined with improved deep belief networks (DBNs) is proposed. This method uses LBP to extract features, and then uses the improved deep belief networks as the detector and classifier of the LBP features, realizing the combination of LBP and improved deep belief networks for facial expression recognition. On the JAFFE (Japanese Female Facial Expression) database, the recognition rate improved significantly.
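    A condensed sketch of the front end described above follows: uniform LBP histograms over image blocks as the texture feature. The deep belief network stage is replaced here by a placeholder classifier, since the paper's improved DBN is not specified in the abstract; grid size and LBP parameters are assumptions.

```python
# Block-wise uniform LBP histograms as facial expression features.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import LogisticRegression

def lbp_histogram_features(face: np.ndarray, grid: int = 4) -> np.ndarray:
    """face: 2D grayscale array; returns concatenated block histograms."""
    lbp = local_binary_pattern(face, P=8, R=1, method="uniform")
    h, w = lbp.shape
    feats = []
    for i in range(grid):
        for j in range(grid):
            block = lbp[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            # Uniform LBP with P=8 yields codes 0..9, hence 10 bins.
            hist, _ = np.histogram(block, bins=10, range=(0, 10), density=True)
            feats.append(hist)
    return np.concatenate(feats)

# Usage sketch, with `faces` and `labels` assumed to exist:
# X = np.stack([lbp_histogram_features(f) for f in faces])
# clf = LogisticRegression(max_iter=1000).fit(X, labels)
```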

  6. Facial expression analysis with AFFDEX and FACET: A validation study.

    Science.gov (United States)

    Stöckli, Sabrina; Schulte-Mecklenbeck, Michael; Borer, Stefan; Samson, Andrea C

    2017-12-07

    The goal of this study was to validate AFFDEX and FACET, two algorithms classifying emotions from facial expressions, in iMotions's software suite. In Study 1, pictures of standardized emotional facial expressions from three databases, the Warsaw Set of Emotional Facial Expression Pictures (WSEFEP), the Amsterdam Dynamic Facial Expression Set (ADFES), and the Radboud Faces Database (RaFD), were classified with both modules. Accuracy (Matching Scores) was computed to assess and compare the classification quality. Results show a large variance in accuracy across emotions and databases, with a performance advantage for FACET over AFFDEX. In Study 2, 110 participants' facial expressions were measured while being exposed to emotionally evocative pictures from the International Affective Picture System (IAPS), the Geneva Affective Picture Database (GAPED) and the Radboud Faces Database (RaFD). Accuracy again differed for distinct emotions, and FACET performed better. Overall, iMotions can achieve acceptable accuracy for standardized pictures of prototypical (vs. natural) facial expressions, but performs worse for more natural facial expressions. We discuss potential sources for limited validity and suggest research directions in the broader context of emotion research.

  7. Visual Working Memory Capacity for Emotional Facial Expressions

    Directory of Open Access Journals (Sweden)

    Domagoj Švegar

    2011-12-01

    Full Text Available The capacity of visual working memory is limited to no more than four items. At the same time, it is limited not only by the number of objects, but also by the total amount of information that needs to be memorized, and the relation between the information load per object and the number of objects that can be stored in visual working memory is inverse. The objective of the present experiment was to compute visual working memory capacity for emotional facial expressions, and in order to do so, change detection tasks were applied. Pictures of human emotional facial expressions were presented to 24 participants in 1008 experimental trials, each of which began with the presentation of a fixation mark, followed by a short simultaneous presentation of six emotional facial expressions. After that, a blank screen was presented, and after this inter-stimulus interval, one facial expression was presented at one of the previously occupied locations. Participants had to answer whether the facial expression presented at test was different from or identical to the expression presented at that same location before the retention interval. Memory capacity was estimated through accuracy of responding, by the formula constructed by Pashler (1988), adopted from signal detection theory. It was found that visual working memory capacity for emotional facial expressions equals 3.07, which is high compared to the capacity for facial identities and other visual stimuli. The obtained results were explained within the framework of evolutionary psychology.
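    The capacity estimate referred to above is commonly written as Pashler's (1988) formula K = S * (H - F) / (1 - F), where S is the set size, H the hit rate, and F the false-alarm rate; a minimal sketch, with example numbers that are illustrative rather than the study's data:

```python
# Pashler's (1988) change-detection capacity estimate.
def pashler_k(set_size: int, hit_rate: float, false_alarm_rate: float) -> float:
    """Working memory capacity estimate from change-detection accuracy."""
    return set_size * (hit_rate - false_alarm_rate) / (1.0 - false_alarm_rate)

# e.g. six faces per display, 70% hits, 25% false alarms:
print(round(pashler_k(6, 0.70, 0.25), 2))   # -> 3.6
```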

  8. How successful are future teachers in interpreting facial expressions?

    Directory of Open Access Journals (Sweden)

    Milovanović Radmila B.

    2016-01-01

    Full Text Available Facial expressions have a prominent role in communication and social interaction, since they are connected with emotional experience. Starting from the importance of communication based on respect for the child's emotions in education, the aim of our research was to examine the possibility of enhancing the successfulness of future kindergarten and elementary school teachers by instructing them to understand the facial expressions of children. The sample comprised 330 first-year students of the Pedagogic Faculty at the University of Kragujevac. Success in interpreting facial expressions was measured by Paul Ekman's test of facial expression understanding. The research design included initial testing, education in the field of emotional life (6 hours of theory and 6 hours of exercises), and a final examination. The results show that the future preschool and elementary school teachers were equally unsuccessful in understanding facial expressions at the initial testing, but showed significant progress at the final testing (Z=-6.745; p<0.01). Bearing in mind that adequate interpretation of facial expressions is an important component of communicative competences, that communicative competencies are a necessary factor of professional teacher competencies, and that they are an important criterion of suitability for the pedagogic profession, the results show that it is necessary to educate students to enhance their ability to understand facial expressions and to learn more about emotional life in general.

  9. Discrimination of gender using facial image with expression change

    Science.gov (United States)

    Kuniyada, Jun; Fukuda, Takahiro; Terada, Kenji

    2005-12-01

    By carrying out marketing research, the managers of large department stores or small convenience stores obtain information such as the ratio of male to female visitors and their age groups, and use it to improve their management plans. However, this work is carried out manually, and it becomes a heavy burden for small stores. In this paper, the authors propose a method of discriminating between men and women by extracting differences in facial expression change from colour facial images. There are many methods for automatic recognition of individuals using moving or still facial images in the field of image processing. However, it is very difficult to discriminate gender under the influence of hairstyle, clothes, etc. Therefore, we propose a method that is not affected by personal characteristics such as the size and position of facial parts, by paying attention to changes in expression. The method requires two facial images, one expressive and one expressionless. First, the facial surface region and the regions of facial parts such as the eyes, nose, and mouth are extracted from the colour facial image using hue and saturation information in the HSV colour system and emphasized edge information. Next, features are extracted by calculating the rate of change of each facial part generated by the expression change. In the last step, the feature values are compared between the input data and the database, and the gender is discriminated. Experiments were conducted on laughing and smiling expressions, and good gender discrimination results were obtained.
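    A loose sketch of the first step described above, with assumed thresholds: extract the facial skin region from a colour image using hue and saturation in HSV space, yielding a mask from which facial-part regions could then be located.

```python
# HSV-based skin-region mask as a first step toward facial-part extraction.
import cv2
import numpy as np

def skin_mask(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of skin-coloured pixels (thresholds illustrative)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 30, 60], dtype=np.uint8)     # hue, saturation, value
    upper = np.array([25, 180, 255], dtype=np.uint8)  # OpenCV hue range is 0-179
    mask = cv2.inRange(hsv, lower, upper)
    # Close small holes/specks before contour extraction of the face region.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```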

  10. Image ratio features for facial expression recognition application.

    Science.gov (United States)

    Song, Mingli; Tao, Dacheng; Liu, Zicheng; Li, Xuelong; Zhou, Mengchu

    2010-06-01

    Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes raised by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
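    A minimal sketch of the core idea as stated, not the full descriptor: the pixel-wise ratio between an expressive face and the same person's neutral face approximately cancels albedo, leaving the intensity change raised by skin deformation. The epsilon guard and function name are mine.

```python
# Pixel-wise image ratio between expressive and neutral faces.
import numpy as np

def image_ratio(expressive: np.ndarray, neutral: np.ndarray,
                eps: float = 1e-6) -> np.ndarray:
    """Both inputs: aligned grayscale face images as float arrays."""
    return expressive / (neutral + eps)

# Usage sketch, with `expr_img` and `neutral_img` assumed to exist:
# feature = image_ratio(expr_img.astype(float), neutral_img.astype(float)).ravel()
```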

  11. Rhythmic expression of Nocturnin mRNA in multiple tissues of the mouse

    Directory of Open Access Journals (Sweden)

    Green Carla B

    2001-05-01

    Full Text Available Abstract Background Nocturnin was originally identified by differential display as a circadian clock regulated gene with high expression at night in photoreceptors of the African clawed frog, Xenopus laevis. Although encoding a novel protein, the nocturnin cDNA had strong sequence similarity with a C-terminal domain of the yeast transcription factor CCR4, and with mouse and human ESTs. Since its original identification others have cloned mouse and human homologues of nocturnin/CCR4, and we have cloned a full-length cDNA from mouse retina, along with partial cDNAs from human, cow and chicken. The goal of this study was to determine the temporal pattern of nocturnin mRNA expression in multiple tissues of the mouse. Results cDNA sequence analysis revealed a high degree of conservation among vertebrate nocturnin/CCR4 homologues along with a possible homologue in Drosophila. Northern analysis of mRNA in C3H/He and C57/Bl6 mice revealed that the mNoc gene is expressed in a broad range of tissues, with greatest abundance in liver, kidney and testis. mNoc is also expressed in multiple brain regions including suprachiasmatic nucleus and pineal gland. Furthermore, mNoc exhibits circadian rhythmicity of mRNA abundance with peak levels at the time of light offset in the retina, spleen, heart, kidney and liver. Conclusion The widespread expression and rhythmicity of mNoc mRNA parallels the widespread expression of other circadian clock genes in mammalian tissues, and suggests that nocturnin plays an important role in clock function or as a circadian clock effector.

  12. Perception of temporal asymmetries in dynamic facial expressions

    Directory of Open Access Journals (Sweden)

    Maren eReinl

    2015-08-01

    Full Text Available In the current study we examined whether timeline reversals and the emotional direction of dynamic facial expressions affect the subjective experience of human observers. We recorded natural movies of faces that increased or decreased their expressions of fear, and played them either in the natural frame order or reversed from last to first frame (reversed timeline). This led to four conditions of increasing or decreasing fear, following either the natural or the reversed temporal trajectory of facial dynamics. This 2-by-2 factorial design controlled for visual low-level properties, static visual content, and motion energy across the different factors. It allowed us to examine the perceptual consequences that would occur if the timeline trajectory of facial muscle movements during the increase of an emotion is not the exact mirror of the timeline during the decrease. It additionally allowed us to study perceptual differences between increasing and decreasing emotional expressions. The perception of these time-dependent asymmetries has not yet been quantified. We found that three emotional measures, emotional intensity, artificialness of facial movement, and convincingness or plausibility of the emotion portrayal, were affected by timeline reversals as well as by the emotional direction of the facial expressions. Our results imply that natural dynamic facial expressions contain temporal asymmetries, and show that deviations from the natural timeline lead to a reduction of perceived emotional intensity and convincingness, and to an increase of perceived artificialness of the dynamic facial expression. In addition, they show that decreasing facial expressions are judged as less plausible than increasing facial expressions. Our findings are of relevance for both behavioural and neuroimaging studies, as processing and perception are influenced by temporal asymmetries.

  13. [Facial motoneurons death and caspase regulatory gene expression following facial nerve injury].

    Science.gov (United States)

    Wei, Hai-Gang; Li, Shu-Guang; Chen, Yu-Ting; Cai, Chao-Xiong; Xu, Biao

    2015-02-01

    To investigate the morphology and death course of facial motoneurons, as well as the expression of the death-related proteins caspase 3, caspase 8, and cyto-c and their correlations, following facial nerve distal transection or crush in rats. The right facial nerve underwent distal transection or crush as the experimental group, while the left facial nerve acted as the normal control. We observed the morphology and death course of motoneurons by light microscopy and transmission electron microscopy. Expression of caspase 3, caspase 8, and cyto-c protein was studied by immunohistochemistry (S-P) and image analysis. The SPSS 10.0 software package was used for statistical analysis. (1) Both distal axon transection and axon crush resulted in death of facial motoneurons. Motoneuron loss reached its peak 28 days after injury and occurred mainly through the apoptotic pathway. The number of motoneurons lost in the distal transection group was greater than in the crush group. (2) Caspase 3, caspase 8, and cyto-c protein expression was observed in widespread areas of the normal rat facial nucleus. In addition to neurons, glial cells were also stained. Cells in the distal transection group stained more strongly than those in the crush group. Expression of the proteins began to increase 3 days after injury. Caspase 3 and caspase 8 protein expression peaked 14 days, whereas cyto-c protein expression peaked 7 days, after injury. Expression of caspase 8 and cyto-c protein was correlated with expression of caspase 3 protein. (1) Different facial nerve injuries result in death of facial motoneurons, and the loss of motoneurons is related to the injury pattern. Clinical nerve repair should be performed as early as possible, within 4 weeks after transection. (2) The expression of caspase 3, 8, and cyto-c protein was related to the facial nerve injury pattern. Caspase 8 and cyto-c protein expression was correlated with caspase 3 protein expression, indicating that caspase 8 and cyto-c may

  14. Children's Representations of Facial Expression and Identity: Identity-Contingent Expression Aftereffects

    Science.gov (United States)

    Vida, Mark D.; Mondloch, Catherine J.

    2009-01-01

    This investigation used adaptation aftereffects to examine developmental changes in the perception of facial expressions. Previous studies have shown that adults' perceptions of ambiguous facial expressions are biased following adaptation to intense expressions. These expression aftereffects are strong when the adapting and probe expressions share…

  15. Rhythmic expressed clock regulates the transcription of proliferating cellular nuclear antigen in teleost retina.

    Science.gov (United States)

    Song, Hang; Wang, Defeng; De Jesus Perez, Felipe; Xie, Rongrong; Liu, Zhipeng; Chen, Chun-Chun; Yu, Meijuan; Yuan, Liudi; Fernald, Russell D; Zhao, Sheng

    2017-07-01

    Teleost fish continue to grow their eyes throughout life, along with their body size. In Astatotilapia burtoni, the retina grows by adding new retinal cells at the ciliary marginal zone (CMZ) and in the outer nuclear layer (ONL). Cell proliferation at both sites exhibits a daily rhythm in the number of dividing cells. To understand how this diurnal rhythm of new cell production is controlled in retinal progenitor cells, we studied the transcription pattern of clock genes in the retina, including clock1a, clock1b, bmal1a (brain and muscle ARNT-like), and per1b (period1b). We found that these genes show strong diurnal rhythmic transcription during light-dark cycles but not in constant darkness. An oscillation in pcna transcription was also observed during light-dark cycles, but again not in constant darkness. Our results also indicate an association between Clock proteins and the upstream region of the pcna (proliferating cellular nuclear antigen) gene. A luciferase reporter assay conducted in an inducible clock knockdown cell line further demonstrated that mutation of the predicted E-boxes in the pcna promoter region significantly attenuated the transcriptional activation induced by Clock protein. These results suggest that the diurnal rhythmic expression of clock genes in the A. burtoni retina could be light dependent and might contribute to the daily regulation of the proliferation of retinal progenitors through key components of the cell cycle machinery, for instance pcna. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Facial Expression Recognition for Traumatic Brain Injured Patients

    DEFF Research Database (Denmark)

    Ilyas, Chaudhary Muhammad Aqdus; Nasrollahi, Kamal; Moeslund, Thomas B.

    2018-01-01

    In this paper, we investigate the issues associated with facial expression recognition for Traumatic Brain Injured (TBI) patients in a realistic scenario. These patients have restricted or limited muscle movements with reduced facial expressions, along with non-cooperative behavior, impaired reasoning, and inappropriate responses. All these factors make automatic understanding of their expressions more complex. While existing facial expression recognition systems have shown high accuracy on data from healthy subjects, their performance is yet to be proved on real TBI patient data under the aforementioned challenges. To deal with this, we devised scenarios for data collection from real TBI patients, collected data that is very challenging to process, and devised an effective way of preprocessing the data so that good-quality faces can be extracted from the patients' facial videos…

  17. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2011-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidents of human error, none has been developed to employ the detection and regulation of Operator Functional State (OFS), that is, the optimal condition of the operator while performing a task, in work environments, owing to drawbacks such as obtrusiveness and impracticality. A video-based system that infers an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, such as obtrusiveness and impracticality of integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS, first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship to an individual's state. Descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program follow. A basic version of the facial expression recognition program uses Haar classifiers and the OpenCV libraries to automatically locate key facial landmarks in a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.
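
    The Haar-cascade localization step mentioned above can be sketched with OpenCV's stock pretrained cascade. This is a minimal, hedged example of face (not full landmark) detection on a live stream; it assumes the opencv-python package and a default webcam, and it is not the author's actual program.

        # Hedged sketch: Haar-cascade face detection on a live video stream.
        import cv2

        # Load OpenCV's pretrained frontal-face Haar cascade.
        face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

        cap = cv2.VideoCapture(0)  # default webcam (assumption)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Detect candidate face regions at multiple scales.
            faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=5)
            for (x, y, w, h) in faces:
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
            cv2.imshow("faces", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()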

  19. Face Processing in Children with Autism Spectrum Disorder: Independent or Interactive Processing of Facial Identity and Facial Expression?

    Science.gov (United States)

    Krebs, Julia F.; Biswas, Ajanta; Pascalis, Olivier; Kamp-Becker, Inge; Remschmidt, Helmuth; Schwarzer, Gudrun

    2011-01-01

    The current study investigated if deficits in processing emotional expression affect facial identity processing and vice versa in children with autism spectrum disorder. Children with autism and IQ and age matched typically developing children classified faces either by emotional expression, thereby ignoring facial identity or by facial identity…

  20. Fast and Accurate Digital Morphometry of Facial Expressions.

    Science.gov (United States)

    Grewe, Carl Martin; Schreiber, Lisa; Zachow, Stefan

    2015-10-01

    Facial surgery deals with a part of the human body that is of particular importance in everyday social interactions. The perception of a person's natural, emotional, and social appearance is significantly influenced by one's expression. This is why facial dynamics have been increasingly studied by both artists and scholars since the mid-Renaissance. Currently, facial dynamics and their importance in the perception of a patient's identity play a fundamental role in planning facial surgery. Assistance is needed for patient information and communication, for documentation and evaluation of the treatment, and during the surgical procedure itself. Here, the quantitative assessment of morphological features has been facilitated by the emergence of diverse digital imaging modalities in recent decades. Unfortunately, the manual data preparation usually needed for further quantitative analysis of the digitized head models (surface registration, landmark annotation) is time-consuming and thus inhibits their use for treatment planning and communication. In this article, we refer to historical studies on facial dynamics, briefly present related work from the field of facial surgery, and draw implications for further developments in this context. A prototypical stereophotogrammetric system for high-quality assessment of patient-specific 3D dynamic morphology is described. An individual statistical model of several facial expressions is computed, and possibilities to address a broad range of clinical questions in facial surgery are demonstrated.

  1. Meta-analysis of Drosophila circadian microarray studies identifies a novel set of rhythmically expressed genes.

    Directory of Open Access Journals (Sweden)

    Kevin P Keegan

    2007-11-01

    Five independent groups have reported microarray studies that identify dozens of rhythmically expressed genes in the fruit fly Drosophila melanogaster. Limited overlap among the lists of discovered genes makes it difficult to determine which, if any, exhibit truly rhythmic patterns of expression. We reanalyzed data from all five reports and found two sources for the observed discrepancies: the use of different expression pattern detection algorithms and underlying variation among the datasets. To improve upon the methods originally employed, we developed a new analysis that involves compilation of all existing data, application of identical transformation and standardization procedures followed by ANOVA-based statistical prescreening, and three separate classes of post hoc analysis: cross-correlation to various cycling waveforms, autocorrelation, and a previously described fast Fourier transform-based technique. Permutation-based statistical tests were used to derive significance measures for all post hoc tests. We find that application of our method, most significantly the ANOVA prescreening procedure, markedly reduces the false discovery rate relative to that observed among the results of the original five reports while maintaining desirable statistical power. We identify a set of 81 cycling transcripts previously found in one or more of the original reports, as well as a novel set of 133 transcripts not found in any of the original studies. We introduce a novel analysis method that compensates for variability observed among the original five Drosophila circadian array reports. Based on the statistical fidelity of our meta-analysis results and the results of our initial validation experiments (quantitative RT-PCR), we predict many of our newly found genes to be bona fide cyclers, and suggest that they may lead to new insights into the pathways through which clock mechanisms regulate behavioral rhythms.
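
    To make the screening idea concrete, here is a toy Python version of an FFT-based rhythmicity test with permutation significance, in the spirit of (but much simpler than) the pipeline described above: no ANOVA prescreen, no cross-correlation or autocorrelation classes, and no multiple-testing control. All numbers are synthetic.

        # Hedged sketch: score one expression time series for 24-h periodicity
        # by its relative spectral power, with a permutation p value.
        import numpy as np

        rng = np.random.default_rng(0)

        def circadian_power(series, samples_per_day):
            """Fraction of spectral power at the 1-cycle-per-day frequency."""
            spec = np.abs(np.fft.rfft(series - series.mean())) ** 2
            k = len(series) // samples_per_day   # index of the 24-h component
            return spec[k] / spec[1:].sum()

        def permutation_p(series, samples_per_day, n_perm=1000):
            observed = circadian_power(series, samples_per_day)
            null = [circadian_power(rng.permutation(series), samples_per_day)
                    for _ in range(n_perm)]
            return (1 + sum(s >= observed for s in null)) / (n_perm + 1)

        # Hypothetical series: 2 days sampled every 4 h (6 samples per day).
        t = np.arange(12)
        cycler = 10 + 3 * np.cos(2 * np.pi * t / 6) + rng.normal(0, 0.5, 12)
        print(permutation_p(cycler, samples_per_day=6))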

  2. Facial expression (mood recognition from facial images using committee neural networks

    Directory of Open Access Journals (Sweden)

    Hariharan SI

    2009-08-01

    Background: Facial expressions are important in facilitating human communication and interactions. Also, they are used as an important tool in behavioural studies and in medical rehabilitation. Facial image based mood detection techniques may provide a fast and practical approach for non-invasive mood detection. The purpose of the present study was to develop an intelligent system for facial image based expression classification using committee neural networks. Methods: Several facial parameters were extracted from a facial image and were used to train several generalized and specialized neural networks. Based on initial testing, the best performing generalized and specialized neural networks were recruited into decision making committees, which formed an integrated committee neural network system. The integrated committee neural network system was then evaluated using data obtained from subjects not used in training or in initial testing. Results and conclusion: The system correctly identified the facial expression in 255 of the 282 images (90.43% of cases) from 62 subjects not used in training or in initial testing. Committee neural networks offer a potential tool for image based mood detection.

  3. Facial expression (mood) recognition from facial images using committee neural networks.

    Science.gov (United States)

    Kulkarni, Saket S; Reddy, Narender P; Hariharan, S I

    2009-08-05

    Facial expressions are important in facilitating human communication and interactions. Also, they are used as an important tool in behavioural studies and in medical rehabilitation. Facial image based mood detection techniques may provide a fast and practical approach for non-invasive mood detection. The purpose of the present study was to develop an intelligent system for facial image based expression classification using committee neural networks. Several facial parameters were extracted from a facial image and were used to train several generalized and specialized neural networks. Based on initial testing, the best performing generalized and specialized neural networks were recruited into decision making committees, which formed an integrated committee neural network system. The integrated committee neural network system was then evaluated using data obtained from subjects not used in training or in initial testing. The system correctly identified the facial expression in 255 of the 282 images (90.43% of cases) from 62 subjects not used in training or in initial testing. Committee neural networks offer a potential tool for image based mood detection.
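
    A minimal sketch of the committee idea, assuming scikit-learn: several independently initialized neural networks vote on the class of a feature vector. The features and labels below are synthetic stand-ins for the extracted facial parameters, not the authors' data or architecture.

        # Hedged sketch: a committee of neural networks with majority voting.
        import numpy as np
        from sklearn.ensemble import VotingClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 10))           # hypothetical facial parameters
        y = (X[:, 0] + X[:, 1] > 0).astype(int)  # hypothetical expression labels

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Several differently initialized networks form the committee.
        members = [(f"net{i}", MLPClassifier(hidden_layer_sizes=(16,),
                                             max_iter=2000, random_state=i))
                   for i in range(5)]
        committee = VotingClassifier(members, voting="hard")  # majority vote
        committee.fit(X_tr, y_tr)
        print("committee accuracy:", committee.score(X_te, y_te))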

  4. Empathy, emotional contagion, and rapid facial reactions to angry and happy facial expressions.

    Science.gov (United States)

    Dimberg, Ulf; Thunberg, Monika

    2012-12-01

    The aim was to explore whether emotional empathy is related to the capacity to react with rapid facial reactions to facial expressions of emotion, and whether emotional empathy is related to the ability to experience an emotion similar to that expressed by another person. People high or low in emotional empathy were exposed to pictures of happy and angry faces while facial electromyography was recorded from the zygomaticus major and corrugator supercilii muscle regions. High-empathy participants reacted with larger zygomatic muscle activity to happy as compared with angry faces, and with larger corrugator muscle activity to angry than to happy faces, as early as 500 ms after stimulus onset. Accordingly, this group also reacted with a corresponding experience of emotion. The low-empathy participants, in contrast, did not differentiate between the happy and angry stimuli, either in their facial muscles or in their self-reported experience of emotion. The findings are related to the facial feedback hypothesis, and the results are interpreted as support for the hypothesis that rapid and automatically evoked facial mimicry may be one important mechanism for emotional and empathic contagion.

  5. Blood-gene expression reveals reduced circadian rhythmicity in individuals resistant to sleep deprivation.

    Science.gov (United States)

    Arnardottir, Erna S; Nikonova, Elena V; Shockley, Keith R; Podtelezhnikov, Alexei A; Anafi, Ron C; Tanis, Keith Q; Maislin, Greg; Stone, David J; Renger, John J; Winrow, Christopher J; Pack, Allan I

    2014-10-01

    To address whether changes in gene expression in blood cells with sleep loss differ between individuals resistant and sensitive to sleep deprivation. Blood draws every 4 h during a 3-day study: a 24-h normal baseline, 38 h of continuous wakefulness, and subsequent recovery sleep, for a total of 19 time points per subject, with psychomotor vigilance task (PVT) assessment every 2 h when awake. Sleep laboratory. Fourteen subjects who were previously identified as behaviorally resistant (n = 7) or sensitive (n = 7) to sleep deprivation by PVT. Thirty-eight hours of continuous wakefulness. We found 4,481 unique genes with a significant 24-h diurnal rhythm during a normal sleep-wake cycle in blood (false discovery rate [FDR] < 5%) … sleep. After accounting for circadian effects, two genes (SREBF1 and CPT1A, both involved in lipid metabolism) exhibited small, but significant, linear changes in expression with the duration of sleep deprivation (FDR < 5%). The main change with sleep deprivation was a reduction in the amplitude of the diurnal rhythm of expression of normally cycling probe sets. This reduction was noticeably higher in behaviorally resistant subjects than in sensitive subjects, at any given P value. Furthermore, blood cell type enrichment analysis showed that the expression pattern difference between sensitive and resistant subjects is mainly found in cells of myeloid origin, such as monocytes. Individual differences in the behavioral effects of sleep deprivation are associated with differences in the diurnal amplitude of gene expression for genes that show circadian rhythmicity.
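
    The diurnal amplitude on which these comparisons rest can be estimated with a standard cosinor fit, sketched below under the assumption of evenly sampled expression values; the study's actual statistics were more involved, and all numbers here are invented.

        # Hedged sketch: estimate the 24-h amplitude of an expression series
        # by linear cosinor regression, y ~ M + a*cos(wt) + b*sin(wt).
        import numpy as np

        def cosinor_amplitude(t_hours, y, period=24.0):
            """Return the amplitude of the fitted 24-h component."""
            w = 2 * np.pi * t_hours / period
            X = np.column_stack([np.ones_like(w), np.cos(w), np.sin(w)])
            coef, *_ = np.linalg.lstsq(X, y, rcond=None)
            _, a, b = coef
            return np.hypot(a, b)

        # Hypothetical expression values sampled every 4 h over 24 h.
        t = np.arange(0, 24, 4)
        baseline = 5 + 2.0 * np.cos(2 * np.pi * (t - 6) / 24)
        deprived = 5 + 0.8 * np.cos(2 * np.pi * (t - 6) / 24)
        print(cosinor_amplitude(t, baseline), cosinor_amplitude(t, deprived))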

  6. Slowing down presentation of facial movements and vocal sounds enhances facial expression recognition and induces facial-vocal imitation in children with autism.

    Science.gov (United States)

    Tardif, Carole; Lainé, France; Rodriguez, Mélissa; Gepner, Bruno

    2007-09-01

    This study examined the effects of slowing down the presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on CD-ROM, under audio or silent conditions, and under dynamic visual conditions (slow, very slow, or normal speed) plus a static control. Overall, children with autism showed lower performance in expression recognition and more induced facial-vocal imitation than controls. In the autistic group, facial expression recognition and induced facial-vocal imitation were significantly enhanced in the slow conditions. The findings may offer new perspectives for understanding and intervening in verbal and emotional perceptual and communicative impairments in autistic populations.

  7. What do facial expressions of emotion express in young children? The relationship between facial display and EMG measures

    Directory of Open Access Journals (Sweden)

    Michela Balconi

    2014-04-01

    The present paper explored the relationship between emotional facial response and electromyographic modulation in children when they observe facial expressions of emotions. Facial responsiveness (evaluated by arousal and valence ratings) and psychophysiological correlates (facial electromyography, EMG) were analyzed while children looked at six facial expressions of emotion (happiness, anger, fear, sadness, surprise, and disgust). For the EMG measure, corrugator and zygomatic muscle activity was monitored in response to the different emotion types. ANOVAs showed differences in both EMG and facial responses across subjects as a function of the different emotions. Specifically, some emotions (such as happiness, anger, and fear) were well expressed by all subjects, in terms of high arousal, whereas others (such as sadness) elicited lower arousal. Zygomatic activity increased mainly for happiness, whereas corrugator activity increased mainly for anger, fear, and surprise. More generally, EMG and facial behavior were highly correlated with each other, showing a “mirror” effect with respect to the observed faces.

  8. Do Dynamic Compared to Static Facial Expressions of Happiness and Anger Reveal Enhanced Facial Mimicry?

    Directory of Open Access Journals (Sweden)

    Krystyna Rymarczyk

    Facial mimicry is the spontaneous response to others' facial expressions by mirroring or matching the interaction partner. Recent evidence suggested that mimicry may not be only an automatic reaction but could be dependent on many factors, including social context, type of task in which the participant is engaged, or stimulus properties (dynamic vs static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and judgment of emotional intensity. Electromyography recordings were recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. The ratings of the emotional intensity of facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major muscle activity in response to dynamic happiness stimuli than static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing.

  9. Do Dynamic Compared to Static Facial Expressions of Happiness and Anger Reveal Enhanced Facial Mimicry?

    Science.gov (United States)

    Rymarczyk, Krystyna; Żurawski, Łukasz; Jankowiak-Siuda, Kamila; Szatkowska, Iwona

    2016-01-01

    Facial mimicry is the spontaneous response to others' facial expressions by mirroring or matching the interaction partner. Recent evidence suggested that mimicry may not be only an automatic reaction but could be dependent on many factors, including social context, type of task in which the participant is engaged, or stimulus properties (dynamic vs static presentation). In the present study, we investigated the impact of dynamic facial expression and sex differences on facial mimicry and judgment of emotional intensity. Electromyography recordings were recorded from the corrugator supercilii, zygomaticus major, and orbicularis oculi muscles during passive observation of static and dynamic images of happiness and anger. The ratings of the emotional intensity of facial expressions were also analysed. As predicted, dynamic expressions were rated as more intense than static ones. Compared to static images, dynamic displays of happiness also evoked stronger activity in the zygomaticus major and orbicularis oculi, suggesting that subjects experienced positive emotion. No muscles showed mimicry activity in response to angry faces. Moreover, we found that women exhibited greater zygomaticus major muscle activity in response to dynamic happiness stimuli than static stimuli. Our data support the hypothesis that people mimic positive emotions and confirm the importance of dynamic stimuli in some emotional processing.

  10. GENDER DIFFERENCES IN THE RECOGNITION OF FACIAL EXPRESSIONS OF EMOTION

    Directory of Open Access Journals (Sweden)

    CARLOS FELIPE PARDO-VÉLEZ

    2003-07-01

    Gender differences in the recognition of facial expressions of anger, happiness, and sadness were researched in students 18-25 years of age. A reaction-time procedure was used, and the percentage of correct answers in recognition was also measured. Although the working hypothesis expected gender differences in facial expression recognition, the results suggest that these differences are not significant at the 0.05 level. Statistical analysis shows a greater facility (at a non-significant level) for women to recognize happiness expressions, and for men to recognize anger expressions. The implications of these data are discussed, along with possible extensions of this investigation in terms of sample size and college major of the participants.

  11. Recognition of Face and Emotional Facial Expressions in Autism

    Directory of Open Access Journals (Sweden)

    Muhammed Tayyib Kadak

    2013-03-01

    Autism is a genetically transmitted neurodevelopmental disorder characterized by severe and permanent deficits in many areas of interpersonal relations, such as communication, social interaction, and emotional responsiveness. Patients with autism show deficits in face recognition, eye contact, and recognition of emotional expression. Both face recognition and the recognition of emotional expression rely on face processing. Structural and functional impairments in the fusiform gyrus, amygdala, superior temporal sulcus, and other brain regions lead to deficits in the recognition of faces and facial emotion; studies therefore suggest that face processing deficits result in problems in the areas of social interaction and emotion in autism. Studies have revealed that children with autism have problems recognizing facial expressions and use the mouth region more than the eye region; it has also been shown that autistic patients interpret ambiguous expressions as negative emotions. Deficits have so far been identified at various stages of face processing in autism, including gaze detection, face identity processing, and recognition of emotional expression. Social interaction impairments in autism spectrum disorders originate from face processing deficits during infancy, childhood, and adolescence. Recognition of faces and expression of facial emotion could be shaped either automatically, by orienting towards faces after birth, or by “learning” processes, such as identity and emotion processing, during developmental periods. This article reviews the neurobiological basis of face processing and the recognition of emotional facial expressions during normal development and in autism.

  12. Continuous pain intensity estimation from facial expressions

    NARCIS (Netherlands)

    Kaltwang, Sebastian; Rudovic, Ognjen; Pantic, Maja

    2012-01-01

    Automatic pain recognition is an evolving research area with promising applications in health care. In this paper, we propose the first fully automatic approach to continuous pain intensity estimation from facial images. We first learn a set of independent regression functions for continuous pain…

  13. Recognition, Expression, and Understanding Facial Expressions of Emotion in Adolescents with Nonverbal and General Learning Disabilities

    Science.gov (United States)

    Bloom, Elana; Heath, Nancy

    2010-01-01

    Children with nonverbal learning disabilities (NVLD) have been found to be worse at recognizing facial expressions than children with verbal learning disabilities (LD) and without LD. However, little research has been done with adolescents. In addition, expressing and understanding facial expressions is yet to be studied among adolescents with LD…

  14. Rapid Facial Reactions to Emotional Facial Expressions in Typically Developing Children and Children with Autism Spectrum Disorder

    Science.gov (United States)

    Beall, Paula M.; Moody, Eric J.; McIntosh, Daniel N.; Hepburn, Susan L.; Reed, Catherine L.

    2008-01-01

    Typical adults mimic facial expressions within 1000ms, but adults with autism spectrum disorder (ASD) do not. These rapid facial reactions (RFRs) are associated with the development of social-emotional abilities. Such interpersonal matching may be caused by motor mirroring or emotional responses. Using facial electromyography (EMG), this study…

  15. Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia

    Science.gov (United States)

    Palermo, Romina; Willis, Megan L.; Rivolta, Davide; McKone, Elinor; Wilson, C. Ellie; Calder, Andrew J.

    2011-01-01

    We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and ‘social’). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability. PMID:21333662

  16. Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Yongrui Huang

    2017-01-01

    This paper proposes two multimodal fusion methods between brain and peripheral signals for emotion recognition. The input signals are electroencephalogram and facial expression. The stimuli are based on a subset of movie clips that correspond to four specific areas of valence-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, four basic emotion states (happiness, neutral, sadness, and fear) are detected by a neural network classifier. For EEG detection, four basic emotion states and three emotion intensity levels (strong, ordinary, and weak) are detected by two support vector machine (SVM) classifiers, respectively. Emotion recognition is based on two decision-level fusion methods of both EEG and facial expression detections by using a sum rule or a production rule. Twenty healthy subjects attended two experiments. The results show that the accuracies of the two multimodal fusion detections are 81.25% and 82.75%, respectively, which are both higher than that of facial expression (74.38%) or EEG detection (66.88%). The combination of facial expressions and EEG information for emotion recognition compensates for their defects as single information sources.

  17. Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition

    Science.gov (United States)

    Huang, Yongrui; Yang, Jianhao; Liao, Pengkai

    2017-01-01

    This paper proposes two multimodal fusion methods between brain and peripheral signals for emotion recognition. The input signals are electroencephalogram and facial expression. The stimuli are based on a subset of movie clips that correspond to four specific areas of valence-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, four basic emotion states (happiness, neutral, sadness, and fear) are detected by a neural network classifier. For EEG detection, four basic emotion states and three emotion intensity levels (strong, ordinary, and weak) are detected by two support vector machine (SVM) classifiers, respectively. Emotion recognition is based on two decision-level fusion methods of both EEG and facial expression detections by using a sum rule or a production rule. Twenty healthy subjects attended two experiments. The results show that the accuracies of the two multimodal fusion detections are 81.25% and 82.75%, respectively, which are both higher than that of facial expression (74.38%) or EEG detection (66.88%). The combination of facial expressions and EEG information for emotion recognition compensates for their defects as single information sources. PMID:29056963
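
    Decision-level fusion of the kind both detectors feed into reduces to combining class-probability vectors. Below is a minimal sketch, with made-up probabilities for the four emotion states, of the sum rule and the product ("production") rule:

        # Hedged sketch: decision-level fusion of two classifiers' outputs.
        import numpy as np

        # Hypothetical class probabilities over (happiness, neutral, sadness, fear).
        p_face = np.array([0.60, 0.20, 0.15, 0.05])
        p_eeg  = np.array([0.30, 0.40, 0.20, 0.10])

        sum_rule  = (p_face + p_eeg) / 2
        prod_rule = p_face * p_eeg
        prod_rule /= prod_rule.sum()        # renormalize to probabilities

        states = ["happiness", "neutral", "sadness", "fear"]
        print("sum rule  ->", states[int(np.argmax(sum_rule))])
        print("prod rule ->", states[int(np.argmax(prod_rule))])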

  18. Unseen facial and bodily expressions trigger fast emotional reactions.

    Science.gov (United States)

    Tamietto, Marco; Castelli, Lorys; Vighetti, Sergio; Perozzo, Paola; Geminiani, Giuliano; Weiskrantz, Lawrence; de Gelder, Beatrice

    2009-10-20

    The spontaneous tendency to synchronize our facial expressions with those of others is often termed emotional contagion. It is unclear, however, whether emotional contagion depends on visual awareness of the eliciting stimulus and which processes underlie the unfolding of expressive reactions in the observer. It has been suggested either that emotional contagion is driven by motor imitation (i.e., mimicry), or that it is one observable aspect of the emotional state arising when we see the corresponding emotion in others. Emotional contagion reactions to different classes of consciously seen and "unseen" stimuli were compared by presenting pictures of facial or bodily expressions either to the intact or blind visual field of two patients with unilateral destruction of the visual cortex and ensuing phenomenal blindness. Facial reactions were recorded using electromyography, and arousal responses were measured with pupil dilatation. Passive exposure to unseen expressions evoked faster facial reactions and higher arousal compared with seen stimuli, therefore indicating that emotional contagion occurs also when the triggering stimulus cannot be consciously perceived because of cortical blindness. Furthermore, stimuli that are very different in their visual characteristics, such as facial and bodily gestures, induced highly similar expressive responses. This shows that the patients did not simply imitate the motor pattern observed in the stimuli, but resonated to their affective meaning. Emotional contagion thus represents an instance of truly affective reactions that may be mediated by visual pathways of old evolutionary origin bypassing cortical vision while still providing a cornerstone for emotion communication and affect sharing.

  19. Perception of facial expressions produced by configural relations

    Directory of Open Access Journals (Sweden)

    V A Barabanschikov

    2010-06-01

    The authors discuss the problem of perception of facial expressions produced by configural features. It is shown experimentally that configural features influence the perception of emotional expression in a subjectively emotionless face. Classical results by E. Brunswik on the perception of schematic faces are partly confirmed.

  20. Training Facial Expression Production in Children on the Autism Spectrum

    Science.gov (United States)

    Gordon, Iris; Pierce, Matthew D.; Bartlett, Marian S.; Tanaka, James W.

    2014-01-01

    Children with autism spectrum disorder (ASD) show deficits in their ability to produce facial expressions. In this study, a group of children with ASD and IQ-matched, typically developing (TD) children were trained to produce "happy" and "angry" expressions with the FaceMaze computer game. FaceMaze uses an automated computer…

  1. Computerised analysis of facial emotion expression in eating disorders.

    Science.gov (United States)

    Leppanen, Jenni; Dapelo, Marcela Marin; Davies, Helen; Lang, Katie; Treasure, Janet; Tchanturia, Kate

    2017-01-01

    Problems with social-emotional processing are known to be an important contributor to the development and maintenance of eating disorders (EDs). Diminished facial communication of emotion has been frequently reported in individuals with anorexia nervosa (AN). Less is known about facial expressivity in bulimia nervosa (BN) and in people who have recovered from AN (RecAN). This study aimed to pilot the use of computerised facial expression analysis software to investigate emotion expression across the ED spectrum and recovery in a large sample of participants. A total of 297 participants with AN, BN, RecAN, and healthy controls were recruited. Participants watched film clips designed to elicit happy or sad emotions, and facial expressions were then analysed using FaceReader. The findings mirrored those from previous work, showing that healthy control and RecAN participants expressed significantly more positive emotions during the positive clip compared to the AN group. There were no differences in emotion expression during the sad film clip. These findings support the use of computerised methods to analyse emotion expression in EDs. They also demonstrate that reduced positive emotion expression is likely to be associated with the acute stage of AN illness, with individuals with BN showing an intermediate profile.

  2. Attention to Facial Emotion Expressions in Children with Autism

    Science.gov (United States)

    Begeer, Sander; Rieffe, Carolien; Terwogt, Mark Meerum; Stockmann, Lex

    2006-01-01

    High-functioning children in the autism spectrum are frequently noted for their impaired attention to facial expressions of emotions. In this study, we examined whether attention to emotion cues in others could be enhanced in children with autism, by varying the relevance of children's attention to emotion expressions. Twenty-eight…

  3. 3D Facial Landmarking under Expression, Pose, and Occlusion Variations

    NARCIS (Netherlands)

    H. Dibeklioğlu; A.A. Salah (Albert Ali); L. Akarun

    2008-01-01

    Automatic localization of 3D facial features is important for face recognition, tracking, modeling and expression analysis. Methods developed for 2D images were shown to have problems working across databases acquired with different illumination conditions. Expression variations, pose…

  4. Comparison of emotion recognition from facial expression and music.

    Science.gov (United States)

    Gaspar, Tina; Labor, Marina; Jurić, Iva; Dumancić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves the interpretation of different visual and auditory cues. The ability to recognize emotions is not clearly determined, as their presentation is usually very brief (micro-expressions), and recognition itself does not have to be a conscious process. We assumed that recognition from facial expressions would be favored over recognition of emotions communicated through music. In order to compare the success rates in recognizing emotions presented as facial expressions or in classical music works, we conducted a survey that included 90 elementary school and 87 high school students from Osijek (Croatia). The participants had to match 8 photographs of different emotions expressed on the face, and 8 pieces of classical music works, with 8 offered emotions. The recognition of emotions expressed through classical music pieces was significantly less successful than the recognition of emotional facial expressions. The high school students were significantly better at recognizing facial emotions than the elementary school students, and girls were better than boys. The success rate in recognizing emotions from music pieces was associated with higher grades in mathematics. Basic emotions are far better recognized when presented on human faces than in music, possibly because understanding facial emotions is one of the oldest communication skills in human society. The female advantage in emotion recognition may have been selected for by the necessity of communicating with newborns during early development. Proficiency in recognizing the emotional content of music and mathematical skills probably share some general cognitive skills, such as attention, memory, and motivation. Music pieces are processed differently in the brain than facial expressions and are consequently probably evaluated differently as relevant emotional cues.

  5. Facial patterning and infant emotional expression: happiness, surprise, and fear.

    Science.gov (United States)

    Hiatt, S W; Campos, J J; Emde, R N

    1979-12-01

    Although recent studies have convincingly demonstrated that emotional expressions can be judged reliably from actor-posed facial displays, there exists little evidence that facial expressions in lifelike settings are similar to actor-posed displays, are reliable across situations designed to elicit the same emotion, or provide sufficient information to mediate consistent emotion judgments by raters. The present study therefore investigated these issues as they related to the emotions of happiness, surprise, and fear. 27 infants between 10 and 12 months of age (when emotion masking is not likely to confound results) were tested in 2 situations designed to elicit happiness (a peek-a-boo game and a collapsing toy), 2 to elicit surprise (a toy-switch and a vanishing-object task), and 2 to elicit fear (the visual cliff and the approach of a stranger). Dependent variables included changes in 28 facial response components taken from previous work using actor poses, as well as judgments of the presence of 6 discrete emotions. In addition, instrumental behaviors were used to verify, with responses other than facial expressions, whether the predicted emotion was elicited. In contrast to previous conclusions on the subject, we found that judges were able to make all facial expression judgments reliably, even in the absence of contextual information. Support was also obtained for at least some degree of specificity of facial component response patterns, especially for happiness and surprise. Emotion judgments by raters were found to be a function of the presence of discrete facial components predicted to be linked to those emotions. Finally, almost all situations elicited blends, rather than discrete emotions.

  6. Automatic Emotional State Detection using Facial Expression Dynamic in Videos

    Directory of Open Access Journals (Sweden)

    Hongying Meng

    2014-11-01

    In this paper, an automatic emotion detection system is built for a computer or machine to detect the emotional state from facial expressions in human-computer communication. Firstly, dynamic motion features are extracted from facial expression videos, and then advanced machine learning methods for classification and regression are used to predict the emotional states. The system is evaluated on two publicly available datasets, i.e. GEMEP_FERA and AVEC2013, and satisfactory performance is achieved in comparison with the provided baseline results. With this emotional state detection capability, a machine can read the facial expression of its user automatically. This technique can be integrated into applications such as smart robots, interactive games and smart surveillance systems.

  7. Evaluation of the optical flow methods on facial expression classification

    Science.gov (United States)

    Haghighat, Mohammad; Amirkabiri Razian, Masoud

    2014-03-01

    Facial expression recognition is an important issue in modern human-computer interaction (HCI). In this work, the performance of optical flow in tracking facial characteristic points (FCPs) is examined and applied to facial expression classification. FCPs are extracted using an active appearance model (AAM), and the features selected for classification are the perceived movements of the FCPs and the changes in geometric distance between them. This work compares four different optical flow methods for FCP tracking: normalized cross-correlation, Lucas-Kanade, Brox, and Liu-Freeman. The nearest-neighbor rule is used for classification. Evaluations are done on the Cohn-Kanade (CK+) database for five prototypic expressions. Experimental results show that the Lucas-Kanade method outperforms the other three optical flow methods, as assessed against the ground truth established in the CK+ database.
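
    The best-performing tracker in this comparison, pyramidal Lucas-Kanade, is available directly in OpenCV. Below is a minimal sketch, assuming a hypothetical video file and made-up FCP coordinates standing in for AAM-extracted points:

        # Hedged sketch: track facial characteristic points with pyramidal
        # Lucas-Kanade optical flow. File name and points are hypothetical.
        import cv2
        import numpy as np

        cap = cv2.VideoCapture("face_clip.mp4")   # hypothetical expression video
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

        # Stand-in for AAM-extracted FCPs: (x, y) pairs, float32, Nx1x2.
        fcps = np.array([[120, 140], [180, 140], [150, 200]],
                        np.float32).reshape(-1, 1, 2)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Track each FCP from the previous frame into the new one.
            new_pts, status, err = cv2.calcOpticalFlowPyrLK(
                prev_gray, gray, fcps, None, winSize=(21, 21), maxLevel=3)
            moved = np.linalg.norm(new_pts - fcps, axis=2).ravel()
            print("per-point displacement:", moved.round(2))
            prev_gray, fcps = gray, new_pts
        cap.release()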

  8. Three-year-olds' rapid facial electromyographic responses to emotional facial expressions and body postures.

    Science.gov (United States)

    Geangu, Elena; Quadrelli, Ermanno; Conte, Stefania; Croci, Emanuela; Turati, Chiara

    2016-04-01

    Rapid facial reactions (RFRs) to observed emotional expressions are proposed to be involved in a wide array of socioemotional skills, from empathy to social communication. Two of the most persuasive theoretical accounts propose RFRs to rely either on motor resonance mechanisms or on more complex mechanisms involving affective processes. Previous studies demonstrated that presentation of facial and bodily expressions can generate rapid changes in adult and school-age children's muscle activity. However, to date there is little to no evidence to suggest the existence of emotional RFRs from infancy to preschool age. To investigate whether RFRs are driven by motor mimicry or could also be a result of emotional appraisal processes, we recorded facial electromyographic (EMG) activation from the zygomaticus major and frontalis medialis muscles to presentation of static facial and bodily expressions of emotions (i.e., happiness, anger, fear, and neutral) in 3-year-old children. Results showed no specific EMG activation in response to bodily emotion expressions. However, observing others' happy faces led to increased activation of the zygomaticus major and decreased activation of the frontalis medialis, whereas observing others' angry faces elicited the opposite pattern of activation. This study suggests that RFRs are the result of complex mechanisms in which both affective processes and motor resonance may play an important role.

  9. Gene expression profile data for mouse facial development.

    Science.gov (United States)

    Leach, Sonia M; Feng, Weiguo; Williams, Trevor

    2017-08-01

    This article contains data related to the research articles "Spatial and Temporal Analysis of Gene Expression during Growth and Fusion of the Mouse Facial Prominences" (Feng et al., 2009) [1] and "Systems Biology of facial development: contributions of ectoderm and mesenchyme" (Hooper et al., 2017, in press) [2]. Embryonic mammalian craniofacial development is a complex process involving the growth, morphogenesis, and fusion of distinct facial prominences into a functional whole. Aberrant gene regulation during this process can lead to severe craniofacial birth defects, including orofacial clefting. As a means to understand the genes involved in facial development, we had previously dissected the embryonic mouse face into distinct prominences: the mandibular, maxillary, or nasal prominences, between E10.5 and E12.5. The prominences were then processed intact, or separated into ectoderm and mesenchyme layers, prior to analysis of RNA expression using microarrays (Feng et al., 2009; Hooper et al., 2017, in press) [1], [2]. Here, individual gene expression profiles have been built from these datasets that illustrate the timing of gene expression in whole prominences or in the separated tissue layers. The data profiles are presented as an indexed and clickable list of the genes, each linked to a graphical image of that gene's expression profile in the ectoderm, mesenchyme, or intact prominence. These data files will enable investigators to obtain a rapid assessment of the relative expression level of any gene on the array with respect to time, tissue, prominence, and expression trajectory.

  10. Gene expression profile data for mouse facial development

    Directory of Open Access Journals (Sweden)

    Sonia M. Leach

    2017-08-01

    This article contains data related to the research articles "Spatial and Temporal Analysis of Gene Expression during Growth and Fusion of the Mouse Facial Prominences" (Feng et al., 2009) [1] and "Systems Biology of facial development: contributions of ectoderm and mesenchyme" (Hooper et al., 2017, in press) [2]. Embryonic mammalian craniofacial development is a complex process involving the growth, morphogenesis, and fusion of distinct facial prominences into a functional whole. Aberrant gene regulation during this process can lead to severe craniofacial birth defects, including orofacial clefting. As a means to understand the genes involved in facial development, we had previously dissected the embryonic mouse face into distinct prominences: the mandibular, maxillary, or nasal prominences, between E10.5 and E12.5. The prominences were then processed intact, or separated into ectoderm and mesenchyme layers, prior to analysis of RNA expression using microarrays (Feng et al., 2009; Hooper et al., 2017, in press) [1,2]. Here, individual gene expression profiles have been built from these datasets that illustrate the timing of gene expression in whole prominences or in the separated tissue layers. The data profiles are presented as an indexed and clickable list of the genes, each linked to a graphical image of that gene's expression profile in the ectoderm, mesenchyme, or intact prominence. These data files will enable investigators to obtain a rapid assessment of the relative expression level of any gene on the array with respect to time, tissue, prominence, and expression trajectory.

  11. A Survey of the Trends in Facial and Expression Recognition Databases and Methods

    OpenAIRE

    Roychowdhury, Sohini; Emmons, Michelle

    2015-01-01

    Automated facial identification and facial expression recognition have been topics of active research over the past few decades. Facial and expression recognition find applications in human-computer interfaces, subject tracking, real-time security surveillance systems and social networking. Several holistic and geometric methods have been developed to identify faces and expressions using public and local facial image databases. In this work we present the evolution in facial image data sets a...

  12. Local Directional Ternary Pattern for Facial Expression Recognition.

    Science.gov (United States)

    Ryu, Byungyong; Rivera, Adin Ramirez; Kim, Jaemyun; Chae, Oksam

    2017-07-11

    This paper presents a new face descriptor, the local directional ternary pattern (LDTP), for facial expression recognition. LDTP efficiently encodes information about emotion-related features (i.e., eyes, eyebrows, upper nose, and mouth) by using directional information and a ternary pattern, taking advantage of the robustness of edge patterns in edge regions while overcoming the weaknesses of edge-based methods in smooth regions. Our proposal, unlike existing histogram-based face description methods that divide the face into several regions and sample the codes uniformly, uses a two-level grid to construct the face descriptor while sampling expression-related information at different scales. We use a coarse grid for stable codes (highly related to non-expression) and a finer one for active codes (highly related to expression). This multi-level approach enables a finer-grained description of facial motions while still characterizing the coarse features of the expression. Moreover, we learn the active LDTP codes from the emotion-related facial regions. We tested our method using person-dependent and person-independent cross-validation schemes to evaluate performance, and show that our approach improves the overall accuracy of facial expression recognition on six datasets.
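
    A simplified flavor of this directional coding can be sketched as follows: Kirsch compass responses are computed per pixel, the dominant direction is kept, and a histogram over the patch serves as the descriptor. This is only in the spirit of LDTP; the paper's actual two-level grid and ternary encoding are not reproduced. Assumes numpy and scipy.

        # Hedged sketch: histogram of dominant Kirsch edge directions,
        # a simplified relative of directional-pattern descriptors.
        import numpy as np
        from scipy.ndimage import convolve

        def kirsch_kernels():
            # Eight compass kernels generated by rotating the 3x3 border.
            border = [(0, 0), (0, 1), (0, 2), (1, 2),
                      (2, 2), (2, 1), (2, 0), (1, 0)]
            vals = np.array([5, 5, 5, -3, -3, -3, -3, -3])
            kernels = []
            for r in range(8):
                k = np.zeros((3, 3))
                for (i, j), v in zip(border, np.roll(vals, r)):
                    k[i, j] = v
                kernels.append(k)
            return kernels

        def directional_histogram(patch):
            """Histogram of the strongest compass direction per pixel."""
            responses = np.stack([convolve(patch.astype(float), k)
                                  for k in kirsch_kernels()])
            dominant = responses.argmax(axis=0)
            return np.bincount(dominant.ravel(), minlength=8)

        # Hypothetical 8x8 grayscale patch containing a vertical edge.
        patch = np.zeros((8, 8))
        patch[:, 4:] = 255
        print(directional_histogram(patch))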

  13. Effect of monochromatic light on circadian rhythmic expression of clock genes in the hypothalamus of chick.

    Science.gov (United States)

    Jiang, Nan; Wang, Zixu; Cao, Jing; Dong, Yulan; Chen, Yaoxing

    2017-08-01

    To clarify the effect of monochromatic light on circadian clock gene expression in the chick hypothalamus, a total of 240 newly hatched chickens were reared under blue light (BL), green light (GL), red light (RL), or white light (WL). On post-hatch day 14, 24-h profiles of seven core clock genes (cClock, cBmal1, cBmal2, cCry1, cCry2, cPer2 and cPer3) were measured at six time points (CT 0, CT 4, CT 8, CT 12, CT 16, CT 20; circadian time). We found that all these clock genes were expressed with significant rhythmicity under the different light wavelengths. Meanwhile, cClock and cBmal1 showed high expression under GL, followed by correspondingly high expression of cCry1, whereas RL decreased the expression levels of these genes. Consistent with the mRNA levels, CLOCK and BMAL1 proteins also showed high levels under GL. CLOCK-like immunoreactive neurons were observed not only in the SCN but also in non-SCN brain regions such as the nucleus anterior medialis hypothalami, the periventricularis nucleus, the paraventricular nucleus, and the median eminence. All these results are consistent with the auto-regulatory circadian feedback loop and indicate that GL may play an important role in circadian time generation and development in the chick hypothalamus. Our results also suggest that circadian clocks in the chick hypothalamus, including non-SCN brain regions, are involved in the processing of photic information.

  14. Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set

    Science.gov (United States)

    Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.

    2000-06-01

    Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter Set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular, Whissel suggested that emotions are points in a space that seems to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissel's; instead they tend to form an approximately circular pattern, called the 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be used as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP points and the activation and angular parameters, we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
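
    The angular reading of the emotion wheel can be illustrated in a few lines of Python: a point in activation-evaluation space is converted to an angle and matched to the nearest archetypal anchor. The anchor angles below are invented placeholders, not Whissel's or Plutchik's published values.

        # Hedged sketch: map an activation-evaluation point onto an emotion
        # wheel. Anchor angles (degrees) are hypothetical placeholders.
        import math

        WHEEL = {"joy": 30, "surprise": 90, "fear": 140, "anger": 160,
                 "disgust": 200, "sadness": 230}

        def nearest_emotion(evaluation, activation):
            """Return the anchor closest (circularly) to the point's angle."""
            angle = math.degrees(math.atan2(activation, evaluation)) % 360
            return min(WHEEL, key=lambda e: min(abs(WHEEL[e] - angle),
                                                360 - abs(WHEEL[e] - angle)))

        print(nearest_emotion(evaluation=-0.4, activation=0.6))  # -> "fear"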

  15. Mere social categorization modulates identification of facial expressions of emotion.

    Science.gov (United States)

    Young, Steven G; Hugenberg, Kurt

    2010-12-01

    The ability of the human face to communicate emotional states via facial expressions is well known, and past research has established the importance and universality of emotional facial expressions. However, recent evidence has revealed that facial expressions of emotion are most accurately recognized when the perceiver and expresser are from the same cultural ingroup. The current research builds on this literature and extends this work. Specifically, we find that mere social categorization, using a minimal-group paradigm, can create an ingroup emotion-identification advantage even when the culture of the target and perceiver is held constant. Follow-up experiments show that this effect is supported by differential motivation to process ingroup versus outgroup faces and that this motivational disparity leads to more configural processing of ingroup faces than of outgroup faces. Overall, the results point to distinct processing modes for ingroup and outgroup faces, resulting in differential identification accuracy for facial expressions of emotion.

  16. Automatic Generation of Facial Expression Using Triangular Geometric Deformation

    Directory of Open Access Journals (Sweden)

    Jia-Shing Sheu

    2014-12-01

    This paper presents an image deformation algorithm and constructs an automatic facial expression generation system that generates new expressions from a face image in a neutral state. After the user inputs a neutral-state face image, the system separates the candidate facial areas from the image background by skin-color segmentation. It then uses morphological operations to remove noise and to capture the organs of facial expression, such as the eyes, mouth, eyebrows, and nose. The feature control points are labeled according to the feature points (FPs) defined by MPEG-4. After the target deformation expression is designated, the system also adds image correction points based on the obtained FP coordinates. The FPs are used as image deformation units through triangular segmentation: each triangle is split into two vectors, its points are regarded as linear combinations of those two vectors, and the coefficients of the linear combinations are carried over to the corresponding triangle of the original image. The corresponding coordinates are then obtained, and image interpolation completes the correction that generates the new expression. With the proposed deformation algorithm, 10 additional correction points are generated at positions corresponding to the FPs obtained according to MPEG-4, and the correction points can be obtained within a very short operation time. Using this particular triangulation for deformation can extend the material area without narrowing the unwanted material area, thus saving the material-filling operation in some areas.
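
    The two-vector triangle mapping described above amounts to computing barycentric-style coefficients in the deformed triangle and reusing them in the original one. A minimal numpy sketch with made-up triangle coordinates (per-pixel interpolation of intensities is omitted):

        # Hedged sketch: map a point between triangles via the coefficients
        # of its linear combination of two edge vectors.
        import numpy as np

        def tri_coords(p, a, b, c):
            """Coefficients (s, t) with p = a + s*(b - a) + t*(c - a)."""
            m = np.column_stack([b - a, c - a])
            return np.linalg.solve(m, p - a)

        def warp_point(p, src_tri, dst_tri):
            """Map p from the deformed (destination) triangle back to the source."""
            s, t = tri_coords(p, *dst_tri)
            a, b, c = src_tri
            return a + s * (b - a) + t * (c - a)

        src = [np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([0.0, 10.0])]
        dst = [np.array([0.0, 0.0]), np.array([12.0, 2.0]), np.array([-1.0, 9.0])]
        # The destination centroid maps to the source centroid.
        centroid = sum(dst) / 3
        print(warp_point(centroid, src, dst))   # approximately [3.333, 3.333]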

  17. The accuracy of intensity ratings of emotions from facial expressions

    Directory of Open Access Journals (Sweden)

    Kostić Aleksandra P.

    2003-01-01

    The results of a study on the accuracy of intensity ratings of emotion from facial expressions are reported. Research in the field has so far shown that spontaneous facial expressions of basic emotions are a reliable source of information about the category of emotion. The question is raised whether the same holds for the intensity of emotion, and whether the accuracy of intensity ratings depends on the observer's sex and vocational orientation. A total of 228 observers of both sexes and of various vocational orientations rated the emotional intensity of presented facial expressions on a scale from 0 to 8. The results supported the hypothesis that spontaneous facial expressions of basic emotions provide sufficient information about emotional intensity. The hypothesis of an interdependence between the accuracy of intensity ratings and the observer's sex and vocational orientation was not confirmed. However, the accuracy of intensity ratings was shown to vary with the category of the emotion presented.

  18. Categorical Perception of Emotional Facial Expressions in Preschoolers

    Science.gov (United States)

    Cheal, Jenna L.; Rutherford, M. D.

    2011-01-01

    Adults perceive emotional facial expressions categorically. In this study, we explored categorical perception in 3.5-year-olds by creating a morphed continuum of emotional faces and tested preschoolers' discrimination and identification of them. In the discrimination task, participants indicated whether two examples from the continuum "felt the…

  19. Do autistics perceive facial expressions in a piecemeal fashion?

    NARCIS (Netherlands)

    Hendriks, A.W.C.J.; Benson, P.J.; Jonkers, M.; Rietberg, S.

    2002-01-01

    Whilst people with autism process many types of visual information at a level commensurate with their age, studies have shown that they have difficulty interpreting facial expressions. One of the reasons could be that autistics suffer from weak central coherence, i.e., a failure to integrate parts of

  20. The first facial expression recognition and analysis challenge

    NARCIS (Netherlands)

    Valstar, Michel F.; Jiang, Bihan; Mehu, Marc; Pantic, Maja; Scherer, Klaus

    Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability have come some way; for instance, there exist a number of commonly

  1. Teachers' Perception Regarding Facial Expressions as an Effective Teaching Tool

    Science.gov (United States)

    Butt, Muhammad Naeem; Iqbal, Mohammad

    2011-01-01

    The major objective of the study was to explore teachers' perceptions about the importance of facial expression in the teaching-learning process. All the teachers of government secondary schools constituted the population of the study. A sample of 40 teachers, both male and female, in rural and urban areas of district Peshawar, was selected…

  2. Categorical Representation of Facial Expressions in the Infant Brain

    Science.gov (United States)

    Leppanen, Jukka M.; Richmond, Jenny; Vogel-Farley, Vanessa K.; Moulson, Margaret C.; Nelson, Charles A.

    2009-01-01

    Categorical perception, demonstrated as reduced discrimination of within-category relative to between-category differences in stimuli, has been found in a variety of perceptual domains in adults. To examine the development of categorical perception in the domain of facial expression processing, we used behavioral and event-related potential (ERP)…

  3. Attention to facial emotion expressions in children with autism

    NARCIS (Netherlands)

    Begeer, S.; Rieffe, C.J.; Meerum Terwogt, M.; Stockmann, L.

    2006-01-01

    High-functioning children in the autism spectrum are frequently noted for their impaired attention to facial expressions of emotions. In this study, we examined whether attention to emotion cues in others could be enhanced in children with autism, by varying the relevance of children's attention to

  4. Continuous emotion detection using EEG signals and facial expressions

    NARCIS (Netherlands)

    Soleymani, Mohammad; Asghari-Esfeden, Sadjad; Pantic, Maja; Fu, Yun

    Emotions play an important role in how we select and consume multimedia. Recent advances in affect detection focus on detecting emotions continuously. In this paper, for the first time, we continuously detect valence from electroencephalogram (EEG) signals and facial expressions in response to

  5. The role of facial expression in resisting enjoyable advertisements

    NARCIS (Netherlands)

    Lewiński, P.

    2015-01-01

    This dissertation on consumer resistance to enjoyable advertisements is positioned in the areas of persuasive communication and social psychology. In this thesis, it is argued that consumers can resist persuasion by controlling their facial expressions of emotion when exposed to an advertisement.

  6. Specificity of Facial Expression Labeling Deficits in Childhood Psychopathology

    Science.gov (United States)

    Guyer, Amanda E.; McClure, Erin B.; Adler, Abby D.; Brotman, Melissa A.; Rich, Brendan A.; Kimes, Alane S.; Pine, Daniel S.; Ernst, Monique; Leibenluft, Ellen

    2007-01-01

    Background: We examined whether face-emotion labeling deficits are illness-specific or an epiphenomenon of generalized impairment in pediatric psychiatric disorders involving mood and behavioral dysregulation. Method: Two hundred fifty-two youths (7-18 years old) completed child and adult facial expression recognition subtests from the Diagnostic…

  7. A model based method for automatic facial expression recognition

    NARCIS (Netherlands)

    Kuilenburg, H. van; Wiering, M.A.; Uyl, M. den

    2006-01-01

    Automatic facial expression recognition is a research topic with interesting applications in the field of human-computer interaction, psychology and product marketing. The classification accuracy for an automatic system which uses static images as input is however largely limited by the image

  8. Perception of dynamic facial emotional expressions in adolescents with autism spectrum disorders

    NARCIS (Netherlands)

    Kessels, R.P.C.; Spee, P.S.; Hendriks, A.W.C.J.

    2010-01-01

    Previous studies have shown deficits in the perception of static emotional facial expressions in individuals with autism spectrum disorders (ASD), but results are inconclusive. Possibly, using dynamic facial stimuli expressing emotions at different levels of intensities may produce more robust

  10. Reconstructing dynamic mental models of facial expressions in prosopagnosia reveals distinct representations for identity and expression.

    Science.gov (United States)

    Richoz, Anne-Raphaëlle; Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G; Caldara, Roberto

    2015-04-01

    The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus--pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS. Altogether, our data provide critical insights on the healthy and impaired face systems, question evidence of deficits

  11. Extreme Facial Expressions Classification Based on Reality Parameters

    Science.gov (United States)

    Rahim, Mohd Shafry Mohd; Rad, Abdolvahab Ehsani; Rehman, Amjad; Altameem, Ayman

    2014-09-01

    Extreme expressions are emotional expressions stimulated by strong emotion; an example of such an extreme expression is one accompanied by tears. To provide these types of features, additional elements such as a fluid mechanism (particle system) and physics techniques such as smoothed-particle hydrodynamics (SPH) are introduced. The fusion of facial animation with SPH exhibits promising results. Accordingly, the proposed fluid technique combined with facial animation is the core of this research, capturing complex expressions such as laughing, smiling, crying (the emergence of tears), and sadness escalating to strong crying, as a classification of the extreme expressions that occur on the human face.
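
    The tear effect described above rests on a particle system. The following toy sketch (much simpler than a real SPH solver, and purely illustrative) integrates tear droplets under gravity from an emitter near the inner corner of an eye; all names and coordinates are assumptions.

        import numpy as np

        GRAVITY = np.array([0.0, -9.8])  # screen "up" is +y
        DT = 1.0 / 60.0                  # one animation frame

        class TearEmitter:
            """Toy particle emitter standing in for the SPH fluid pass."""

            def __init__(self, origin):
                self.origin = np.asarray(origin, dtype=float)
                self.pos = np.empty((0, 2))
                self.vel = np.empty((0, 2))

            def emit(self, n=1, spread=0.002):
                # Spawn n droplets at the emitter with a little random velocity.
                self.pos = np.vstack([self.pos, np.tile(self.origin, (n, 1))])
                self.vel = np.vstack([self.vel, np.random.normal(0.0, spread, (n, 2))])

            def step(self):
                # Symplectic Euler: update velocity first, then position.
                self.vel += GRAVITY * DT
                self.pos += self.vel * DT

        emitter = TearEmitter(origin=[0.31, 0.55])  # normalized face coordinates
        for frame in range(120):                    # two seconds at 60 fps
            emitter.emit()
            emitter.step()
        print(emitter.pos[:3])  # positions of the oldest droplets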

  12. Interference between conscious and unconscious facial expression information.

    Science.gov (United States)

    Ye, Xing; He, Sheng; Hu, Ying; Yu, Yong Qiang; Wang, Kai

    2014-01-01

    There is ample evidence that many types of visual information, including emotional information, can be processed in the absence of visual awareness. For example, it has been shown that masked subliminal facial expressions can induce priming and adaptation effects. However, stimuli made invisible in different ways may be processed to different extents and have differential effects. In this study, we adopted a flanker-type behavioral method to investigate whether a flanker rendered invisible through Continuous Flash Suppression (CFS) could induce a congruency effect on the discrimination of a visible target. Specifically, during the experiment, participants judged the expression (either happy or fearful) of a visible face in the presence of a nearby invisible face (with a happy or fearful expression). Results show that participants were slower and less accurate in discriminating the expression of the visible face when the expression of the invisible flanker face was incongruent. Thus, facial expression information rendered invisible with CFS and presented at a different spatial location can enhance or interfere with consciously processed facial expression information.

  13. Facial Expression Recognition Teaching to Preschoolers with Autism

    DEFF Research Database (Denmark)

    Christinaki, Eirini; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2013-01-01

    The recognition of facial expressions is important for the perception of emotions. Understanding emotions is essential in human communication and social interaction. Children with autism have been reported to exhibit deficits in the recognition of affective expressions. Their difficulties […] in understanding and expressing emotions lead to inappropriate behavior derived from their inability to interact adequately with other people. Those deficits seem to be rather permanent in individuals with autism, so intervention tools for improving those impairments are desirable. Educational interventions […] for teaching emotion recognition from facial expressions should occur as early as possible in order to be successful and to have a positive effect. It is claimed that Serious Games can be very effective in the areas of therapy and education for children with autism. However, those computer interventions […]

  14. Fashioning the Face: Sensorimotor Simulation Contributes to Facial Expression Recognition.

    Science.gov (United States)

    Wood, Adrienne; Rychlowska, Magdalena; Korb, Sebastian; Niedenthal, Paula

    2016-03-01

    When we observe a facial expression of emotion, we often mimic it. This automatic mimicry reflects underlying sensorimotor simulation that supports accurate emotion recognition. Why this is so is becoming more obvious: emotions are patterns of expressive, behavioral, physiological, and subjective feeling responses. Activation of one component can therefore automatically activate other components. When people simulate a perceived facial expression, they partially activate the corresponding emotional state in themselves, which provides a basis for inferring the underlying emotion of the expresser. We integrate recent evidence in favor of a role for sensorimotor simulation in emotion recognition. We then connect this account to a domain-general understanding of how sensory information from multiple modalities is integrated to generate perceptual predictions in the brain. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Facial expression primes and implicit regulation of negative emotion.

    Science.gov (United States)

    Yoon, HeungSik; Kim, Shin Ah; Kim, Sang Hee

    2015-06-17

    An individual's responses to emotional information are influenced not only by the emotional quality of the information, but also by the context in which the information is presented. We hypothesized that facial expressions of happiness and anger would serve as primes to modulate subjective and neural responses to subsequently presented negative information. To test this hypothesis, we conducted a functional MRI study in which the brains of healthy adults were scanned while they performed an emotion-rating task. During the task, participants viewed a series of negative and neutral photos, one at a time; each photo was presented after a picture showing a face expressing a happy, angry, or neutral emotion. Brain imaging results showed that compared with neutral primes, happy facial primes increased activation during negative emotion in the dorsal anterior cingulate cortex and the right ventrolateral prefrontal cortex, which are typically implicated in conflict detection and implicit emotion control, respectively. Conversely, relative to neutral primes, angry primes activated the right middle temporal gyrus and the left supramarginal gyrus during the experience of negative emotion. Activity in the amygdala in response to negative emotion was marginally reduced after exposure to happy primes compared with angry primes. Relative to neutral primes, angry facial primes increased the subjectively experienced intensity of negative emotion. The current study results suggest that prior exposure to facial expressions of emotions modulates the subsequent experience of negative emotion by implicitly activating the emotion-regulation system.

  16. Expressive facial animation synthesis by learning speech coarticulation and expression spaces.

    Science.gov (United States)

    Deng, Zhigang; Neumann, Ulrich; Lewis, J P; Kim, Tae-Yong; Bulut, Murtaza; Narayanan, Shrikanth

    2006-01-01

    Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.
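
    As a loose illustration of the expression-eigenspace idea, the sketch below builds a PCA basis from expression-only motion signals and maps a sampled coefficient vector back through it. This is not the authors' code; the array shapes and random stand-in data are assumptions.

        import numpy as np

        # Stand-in data: 200 frames of expression-only marker motion
        # (neutral speech already subtracted), 90 markers x 3 coordinates.
        rng = np.random.default_rng(0)
        signals = rng.normal(size=(200, 270))

        # PCA via SVD of the mean-centered data -- the reduction step behind
        # a phoneme-independent expression eigenspace.
        mean = signals.mean(axis=0)
        U, S, Vt = np.linalg.svd(signals - mean, full_matrices=False)
        k = 10             # keep the top-k expression components
        basis = Vt[:k]     # (k, 270) eigen-directions

        # Project one frame into the eigenspace and reconstruct it.
        coeffs = (signals[0] - mean) @ basis.T
        reconstructed = mean + coeffs @ basis

        # A novel expression frame can be synthesized by sampling plausible
        # coefficients and mapping them back through the basis.
        novel = mean + rng.normal(scale=S[:k] / np.sqrt(len(signals))) @ basis

    The synthesized expression signal would then be blended with neutral visual speech, as the abstract describes.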

  17. Concurrent development of facial identity and expression discrimination.

    Science.gov (United States)

    Dalrymple, Kirsten A; Visconti di Oleggio Castello, Matteo; Elison, Jed T; Gobbini, M Ida

    2017-01-01

    Facial identity and facial expression processing both appear to follow a protracted developmental trajectory, yet these trajectories have been studied independently and have not been directly compared. Here we investigated whether these processes develop at the same or different rates using matched identity and expression discrimination tasks. The Identity task begins with a target face that is a morph between two identities (Identity A/Identity B). After a brief delay, the target face is replaced by two choice faces: 100% Identity A and 100% Identity B. Children 5-12-years-old were asked to pick the choice face that is most similar to the target identity. The Expression task is matched in format and difficulty to the Identity task, except the targets are morphs between two expressions (Angry/Happy, or Disgust/Surprise). The same children were asked to pick the choice face with the expression that is most similar to the target expression. There were significant effects of age, with performance improving (becoming more accurate and faster) on both tasks with increasing age. Accuracy and reaction times were not significantly different across tasks and there was no significant Age x Task interaction. Thus, facial identity and facial expression discrimination appear to develop at a similar rate, with comparable improvement on both tasks from age five to twelve. Because our tasks are so closely matched in format and difficulty, they may prove useful for testing face identity and face expression processing in special populations, such as autism or prosopagnosia, where one of these abilities might be impaired.

  18. Comparison of Emotion Recognition from Facial Expression and Music

    OpenAIRE

    Gašpar, Tina; Labor, Marina; Jurić, Iva; Dumančić, Dijana; Ilakovac, Vesna; Heffer, Marija

    2011-01-01

    The recognition of basic emotions in everyday communication involves the interpretation of different visual and auditory cues. The ability to recognize emotions is difficult to pin down, as the presentation of an emotion is usually very brief (micro-expressions), and recognition itself does not have to be a conscious process. We assumed that recognition from facial expressions takes precedence over the recognition of emotions communicated through music. In order to compare the success rate in recogni...

  20. Asymmetric Effect of Expression Intensity on Evaluations of Facial Attractiveness

    Directory of Open Access Journals (Sweden)

    Ryuhei Ueda

    2016-11-01

    Full Text Available Many studies have shown that facial expression influences evaluations of attractiveness, but the effect of expression intensity remains unclear. In the present study, participants rated the expression intensity and attractiveness of faces with happy, neutral, or sad expressions. Sad faces, as anticipated, were judged as less attractive than neutral and happy faces. Among happy expressions, faces with more intense expressions were considered more attractive; for sad expressions, there was no significant relationship between rating and intensity. Multiple regression analyses further demonstrated that the attractiveness of a face with a sad expression could be predicted only by its baseline attractiveness (i.e., ratings of neutral expressions). We conclude that the intensity of positive and negative expressions asymmetrically influences evaluations of the attractiveness of a face. We discuss the results in terms of emotional contagion or sympathy.

  1. Using Video Modeling to Teach Children with PDD-NOS to Respond to Facial Expressions

    Science.gov (United States)

    Axe, Judah B.; Evans, Christine J.

    2012-01-01

    Children with autism spectrum disorders often exhibit delays in responding to facial expressions, and few studies have examined teaching responding to subtle facial expressions to this population. We used video modeling to train 3 participants with PDD-NOS (age 5) to respond to eight facial expressions: approval, bored, calming, disapproval,…

  2. Facial Expression Recognition Deficits and Faulty Learning: Implications for Theoretical Models and Clinical Applications

    Science.gov (United States)

    Sheaffer, Beverly L.; Golden, Jeannie A.; Averett, Paige

    2009-01-01

    The ability to recognize facial expressions of emotion is integral in social interaction. Although the importance of facial expression recognition is reflected in increased research interest as well as in popular culture, clinicians may know little about this topic. The purpose of this article is to discuss facial expression recognition literature…

  3. The role of alternative Polyadenylation in regulation of rhythmic gene expression.

    Science.gov (United States)

    Ptitsyna, Natalia; Boughorbel, Sabri; El Anbari, Mohammed; Ptitsyn, Andrey

    2017-08-04

    Alternative transcription is common in eukaryotic cells and plays an important role in the regulation of cellular processes. Alternative polyadenylation results from ambiguous polyA signals in the 3' untranslated region (UTR) of a gene. Such alternative transcripts share the same coding part, but differ by a stretch of UTR that may contain important functional sites. The methodology of this study is based on mathematical modeling, analytical solution, and subsequent validation by data mining of multiple independent experimental data sets from previously published studies. In this study we propose a mathematical model that describes the population dynamics of alternatively polyadenylated transcripts in conjunction with rhythmic expression, such as transcription oscillation driven by circadian or metabolic oscillators. Analysis of the model shows that alternative transcripts acquire a phase shift if their decay rates differ, and a difference in decay rate is one of the consequences of alternative polyadenylation. The phase shift can reach values equal to half the period of oscillation, which makes alternative transcripts oscillate in abundance in counter-phase to each other. Since counter-phased transcripts share the coding part, the rate of translation becomes constant. We have analyzed several data sets collected along a circadian timeline for the occurrence of transcript behavior that fits the mathematical model. Alternative transcripts with different turnover rates create the effect of a rectifier. This "molecular diode" moderates or completely eliminates the oscillation of individual transcripts and stabilizes the overall protein production rate. In our observations this phenomenon is very common in different tissues in plants, mice, and humans. The occurrence of counter-phased alternative transcripts is also tissue-specific and affects the functions of multiple biological pathways. Accounting for this mechanism is important for understanding the natural and engineering
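
    A minimal sketch of the kind of kinetics described above (a toy formulation, not the authors' model): two transcript isoforms share one rhythmic transcription drive but decay at different rates, which pulls their phases apart.

        import numpy as np

        PERIOD = 24.0                    # hours
        OMEGA = 2 * np.pi / PERIOD

        def simulate(k, hours=240.0, dt=0.01):
            # Integrate dm/dt = s(t) - k*m with a rhythmic synthesis term s(t).
            t = np.arange(0.0, hours, dt)
            m = np.zeros_like(t)
            for i in range(1, len(t)):
                s = 1.0 + np.cos(OMEGA * t[i - 1])
                m[i] = m[i - 1] + dt * (s - k * m[i - 1])
            return t, m

        t, fast = simulate(k=2.0)    # short-lived isoform tracks the drive
        _, slow = simulate(k=0.05)   # long-lived isoform lags behind

        # After transients die out, the peaks of the two isoforms are offset.
        last_day = t >= t[-1] - PERIOD
        print("fast isoform peaks at", t[last_day][np.argmax(fast[last_day])] % PERIOD, "h")
        print("slow isoform peaks at", t[last_day][np.argmax(slow[last_day])] % PERIOD, "h")

    In this linear toy model the steady-state phase lag of each isoform is arctan(OMEGA/k), so the offset between two isoforms stays below a quarter period; the half-period, counter-phase regime the authors describe requires their fuller model.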

  4. Forming impressions: effects of facial expression and gender stereotypes.

    Science.gov (United States)

    Hack, Tay

    2014-04-01

    The present study of 138 participants explored how facial expressions and gender stereotypes influence impressions. It was predicted that images of smiling women would be evaluated more favorably on traits reflecting warmth, and that images of non-smiling men would be evaluated more favorably on traits reflecting competence. As predicted, smiling female faces were rated as more warm; however, contrary to prediction, perceived competence of male faces was not affected by facial expression. Participants' female stereotype endorsement was a significant predictor of evaluations of female faces; those who subscribed more strongly to traditional female stereotypes reported the most positive impressions of female faces displaying a smiling expression. However, a similar effect was not found for images of men; endorsement of traditional male stereotypes did not predict participants' impressions of male faces.

  5. Automatic change detection to facial expressions in adolescents

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Jiannong, Shi

    2016-01-01

    Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were […] in facial expressions between the two age groups. The current findings demonstrated that the adolescent group featured more negative vMMN amplitudes than the adult group in the fronto-central region during the 120–200 ms interval. During the time window of 370–450 ms, only the adult group showed better […] automatic processing of fearful faces than of happy faces. The present study indicated that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and it sheds light on the neurodevelopment of automatic processes concerning social-emotional information.

  6. Humanoid Head Face Mechanism with Expandable Facial Expressions

    Directory of Open Access Journals (Sweden)

    Wagshum Techane Asheber

    2016-02-01

    Full Text Available Social robots for daily life activities are becoming more common, and a humanoid robot with realistic facial expressions is a strong candidate for everyday chores. In this paper, the development of a humanoid face mechanism with reduced system complexity that generates human-like facial expressions is presented. The distinctive feature of this face robot is its use of significantly fewer actuators: only three servo motors for facial expressions and five for the remaining head motions. This results in low energy consumption, making the design suitable for applications such as mobile humanoid robots. Moreover, the modular design makes it possible to mount as many face appearances as needed on one structure, and the mechanism can be expanded to generate more expressions without adding or altering components. The robot is also equipped with an audio system and a camera inside each eyeball, so hearing and vision are utilized for localization, communication, and enhancing the display of expressions.

  7. Facial expression influences face identity recognition during the attentional blink.

    Science.gov (United States)

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second targets (T1, T2). They demonstrate reduced neutral T2 identity recognition after an angry or happy T1 expression, compared to a neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after a neutral T1, T2 identity recognition was enhanced, not suppressed, when T2 was angry, suggesting that attentional capture by this task-irrelevant feature may be object-based rather than feature-based. As an unexpected finding, both angry and happy facial expressions suppressed memory access for competing objects, but only angry facial expressions enjoyed privileged memory access. This could imply that these 2 processes are relatively independent of one another.

  8. Effects of Facial Expressions on Recognizing Emotions in Dance Movements

    Directory of Open Access Journals (Sweden)

    Nao Shikanai

    2011-10-01

    Full Text Available The effects of facial expressions on recognizing emotions expressed in dance movements were investigated. Dancers expressed three emotions: joy, sadness, and anger through dance movements. We used digital video cameras and a 3D motion capture system to record and capture the movements, and then created full-video displays with an expressive face, full-video displays with an unexpressive face, stick-figure displays (no face), or point-light displays (no face) from these data using 3D animation software. To make the point-light displays, 13 markers were attached to the body of each dancer. We examined how accurately observers were able to identify the expression that the dancers intended to convey through their dance movements. Observers with and without dance experience participated in the experiment. They watched the movements and rated the compatibility of each emotion with each movement on a 5-point Likert scale. The results indicated that both experienced and inexperienced observers could identify all the emotions that the dancers intended to express. Identification scores for dance movements with an expressive face were higher than for the other displays. This finding indicates that facial expressions affect the identification of emotions in dance movements, whereas bodily expressions alone provide sufficient information to recognize emotions.

  9. Younger and Older Users’ Recognition of Virtual Agent Facial Expressions

    Science.gov (United States)

    Beer, Jenay M.; Smarr, Cory-Ann; Fisk, Arthur D.; Rogers, Wendy A.

    2015-01-01

    As technology advances, robots and virtual agents will be introduced into the home and healthcare settings to assist individuals, both young and old, with everyday living tasks. Understanding how users recognize an agent’s social cues is therefore imperative, especially in social interactions. Facial expression, in particular, is one of the most common non-verbal cues used to display and communicate emotion in on-screen agents (Cassell, Sullivan, Prevost, & Churchill, 2000). Age is important to consider because age-related differences in emotion recognition of human facial expression have been supported (Ruffman et al., 2008), with older adults showing a deficit for recognition of negative facial expressions. Previous work has shown that younger adults can effectively recognize facial emotions displayed by agents (Bartneck & Reichenbach, 2005; Courgeon et al. 2009; 2011; Breazeal, 2003); however, little research has compared in depth younger and older adults’ ability to label a virtual agent’s facial emotions, an important consideration because social agents will be required to interact with users of varying ages. If such age-related differences exist for recognition of virtual agent facial expressions, we aim to understand whether those differences are influenced by the intensity of the emotion, the dynamic formation of the emotion (i.e., a neutral expression developing into an expression of emotion through motion), or the type of virtual character, differing by human-likeness. Study 1 investigated the relationship between age-related differences, the implication of dynamic formation of emotion, and the role of emotion intensity in emotion recognition of the facial expressions of a virtual agent (iCat). Study 2 examined age-related differences in recognition of expressions displayed by three types of virtual characters differing in human-likeness (non-humanoid iCat, synthetic human, and human). Study 2 also investigated the role of configural and featural processing as a

  10. The facial expression of schizophrenic patients applied with infrared thermal facial image sequence

    National Research Council Canada - National Science Library

    Bo-Lin Jian; Chieh-Li Chen; Wen-Lin Chu; Min-Wei Huang

    2017-01-01

    […] Thus, this study used non-contact infrared thermal facial images (ITFIs) to analyze facial temperature changes evoked by different emotions in moderately and markedly ill schizophrenia patients...

  11. Improvements on a simple muscle-based 3D face for realistic facial expressions

    NARCIS (Netherlands)

    Bui, T.D.; Heylen, Dirk K.J.; Nijholt, Antinus; Badler, N.; Thalmann, D.

    2003-01-01

    Facial expressions play an important role in face-to-face communication. With the development of personal computers capable of rendering high quality graphics, computer facial animation has produced more and more realistic facial expressions to enrich human-computer communication. In this paper, we

  12. Multichannel Communication: The Impact of the Paralinguistic Channel on Facial Expression of Emotion.

    Science.gov (United States)

    Brideau, Linda B.; Allen, Vernon L.

    A study was undertaken to examine the impact of the paralinguistic channel on the ability to encode facial expressions of emotion. The first set of subjects, 19 encoders, was asked to encode facial expressions for five emotions (fear, sadness, anger, happiness, and disgust). The emotions were produced in three encoding conditions: facial channel…

  13. Differentiation of PC12 Cells Results in Enhanced VIP Expression and Prolonged Rhythmic Expression of Clock Genes

    DEFF Research Database (Denmark)

    Pretzmann, C.P.; Fahrenkrug, J.; Georg, B.

    2008-01-01

    To examine circadian rhythmicity, the messenger RNA (mRNA) amounts of the clock genes Per1 and Per2 were measured in undifferentiated and nerve-growth-factor-differentiated PC12 cells harvested every fourth hour. Serum shock was needed to induce circadian oscillations, which in undifferentiated […], PC12 cells seem more useful for studying the mechanisms behind the acquisition of rhythmicity in cell cultures than for the resetting of circadian rhythm. Publication date: 2008/11

  14. Emotion identification from facial expressions in children adopted internationally.

    Science.gov (United States)

    Hwa-Froelich, Deborah A; Matsuo, Hisako; Becker, Jenna C

    2014-11-01

    Children adopted internationally who are exposed to institutional care receive less social interaction than children reared in families. These children spend their preadoptive life with individuals from their birth country and are adopted into families who may look and interact differently. The presumed patterns of limited social stimulation and transition from ethnically similar to ethnically and culturally different social interactions may affect these children's ability to accurately identify emotions from facial expressions. Thirty-five 4-year-old children adopted from Asia and Eastern Europe by U.S. families were compared with 33 nonadopted peers on the Diagnostic Analysis of Nonverbal Accuracy, Version 2 (DANVA2) Faces subtests. Correlation and regression analyses were completed with preadoption (adoption age, foster care exposure), postadoption environment (postadoption care duration, number of siblings, socioeconomic status), and individual (chronological age, gender, language competence) variables to determine related and predictive variables. The nonadopted group demonstrated better emotion identification than children internationally adopted, but no region-of-origin differences were found. English language performance was correlated with and predicted 20% of the variance in emotion identification of facial expressions on the DANVA2. Children adopted internationally who have stronger language ability tend to be more accurate in identifying emotions from facial expressions.

  15. Facial mimicry and emotional contagion to dynamic emotional facial expressions and their influence on decoding accuracy.

    Science.gov (United States)

    Hess, U; Blairy, S

    2001-03-01

    The present study aimed to assess whether individuals mimic and show emotional contagion in response to relatively weak and idiosyncratic dynamic facial expressions of emotion similar to those encountered in everyday life. Furthermore, the question of whether mimicry leads to emotional contagion and in turn facilitates emotion recognition was addressed. Forty-one female participants rated a series of short video clips of stimulus persons expressing anger, sadness, disgust, and happiness with regard to the emotions expressed. An unobtrusive measure of emotional contagion was taken. Evidence for mimicry was found for all types of expressions, and evidence for emotional contagion of happiness and sadness was found. Mediational analyses could not confirm any relation between mimicry and emotional contagion, or between mimicry and emotion recognition.

  16. Objectifying Facial Expressivity Assessment of Parkinson’s Patients: Preliminary Study

    Directory of Open Access Journals (Sweden)

    Peng Wu

    2014-01-01

    Full Text Available Patients with Parkinson’s disease (PD) can exhibit a reduction of spontaneous facial expression, designated as “facial masking,” a symptom in which facial muscles become rigid. To improve the clinical assessment of facial expressivity in PD, this work attempts to quantify dynamic facial expressivity (facial activity) by automatically recognizing facial action units (AUs) and estimating their intensity. Spontaneous facial expressivity was assessed by comparing 7 PD patients with 8 control participants. To elicit spontaneous facial expressions resembling those typically triggered by emotions, six emotions (amusement, sadness, anger, disgust, surprise, and fear) were induced using movie clips. During the movie clips, physiological signals (facial electromyography (EMG) and electrocardiogram (ECG)) and frontal face video of the participants were recorded, and the participants were asked to report on their emotional states throughout the experiment. We first examined the effectiveness of the emotion manipulation by evaluating the participants’ self-reports; disgust-induced emotion ratings were significantly higher than those for the other emotions, so we focused on analyzing the data recorded while participants watched the disgust movie clips. The proposed facial expressivity assessment approach captured differences in facial expressivity between PD patients and controls. Differences between PD patients at different stages of disease progression were also observed.
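
    One simple way to turn AU intensity traces into a per-subject expressivity index (a construction for illustration only, not the paper's metric) is to average intensity over the frames in which an AU is active.

        import numpy as np

        # Hypothetical AU intensity traces: frames x action units, values 0-5.
        rng = np.random.default_rng(4)
        patient = np.clip(rng.normal(0.6, 0.4, size=(900, 12)), 0, 5)
        control = np.clip(rng.normal(1.4, 0.6, size=(900, 12)), 0, 5)

        def expressivity(au_traces, active_thresh=0.5):
            # Mean intensity over frame/AU cells where the AU is active.
            active = au_traces > active_thresh
            return (au_traces * active).sum() / max(active.sum(), 1)

        print(f"patient: {expressivity(patient):.2f}, control: {expressivity(control):.2f}")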

  17. Psychometric challenges and proposed solutions when scoring facial emotion expression codes

    OpenAIRE

    Olderbak, Sally; Hildebrandt, Andrea; Pinkpank, Thomas; Sommer, Werner; Wilhelm, Oliver

    2013-01-01

    Coding of facial emotion expressions is increasingly performed by automated emotion expression scoring software; however, there is limited discussion on how best to score the resulting codes. We present a discussion of facial emotion expression theories and a review of contemporary emotion expression coding methodology. We highlight methodological challenges pertinent to scoring software-coded facial emotion expression codes and present important psychometric research questions centered on co...

  18. Active AU Based Patch Weighting for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Weicheng Xie

    2017-01-01

    Full Text Available Facial expression has many applications in human-computer interaction. Although feature extraction and selection have been well studied, the specificity of each expression variation has not been fully explored in state-of-the-art work. In this work, the problem of multiclass expression recognition is converted into triplet-wise expression recognition. For each expression triplet, a new feature optimization model based on action unit (AU) weighting and patch weight optimization is proposed to represent the specificity of the expression triplet. A sparse-representation-based approach is then proposed to detect the active AUs of the testing sample for better generalization. The algorithm achieved competitive accuracies of 89.67% and 94.09% on the JAFFE and Cohn–Kanade (CK+) databases, respectively. Better cross-database performance was also observed.
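
    The sparse-representation step can be illustrated as follows: a test feature vector is coded as a sparse combination of training columns, and the class whose coefficients reconstruct it best wins. This is the generic sparse-representation classification recipe applied to made-up data, not the paper's exact formulation.

        import numpy as np
        from sklearn.linear_model import Lasso

        # Dictionary: columns are training feature vectors with class labels.
        rng = np.random.default_rng(1)
        D = rng.normal(size=(64, 30))             # 30 training samples, 64-dim
        labels = np.repeat([0, 1, 2], 10)         # three expression classes
        y = D[:, 4] + 0.05 * rng.normal(size=64)  # noisy sample from class 0

        # Solve min ||x||_1 subject to Dx ~ y (Lasso as a convenient l1 solver).
        coder = Lasso(alpha=0.01, max_iter=10000).fit(D, y)
        x = coder.coef_

        # Classify by the class whose coefficients best reconstruct y.
        residuals = [np.linalg.norm(y - D @ np.where(labels == c, x, 0.0) - coder.intercept_)
                     for c in range(3)]
        print("predicted class:", int(np.argmin(residuals)))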

  19. Extraction and representation of common feature from uncertain facial expressions with cloud model.

    Science.gov (United States)

    Wang, Shuliang; Chi, Hehua; Yuan, Hanning; Geng, Jing

    2017-12-01

    Human facial expressions are a key ingredient in conveying an individual's innate emotion in communication. However, variation in facial expressions affects the reliable identification of human emotions. In this paper, we present a cloud model to extract facial features for representing human emotion. First, the uncertainties in facial expression are analyzed in the context of the cloud model. The feature extraction and representation algorithm is built on cloud generators. With a forward cloud generator, arbitrarily many facial expression images can be regenerated to visually represent the three extracted features, each of which plays a different role. The effectiveness of the computing model is tested on the Japanese Female Facial Expression database. Three common features are extracted from seven facial expression images. The paper ends with concluding remarks.
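
    The forward cloud generator mentioned above is a standard construct in cloud-model theory: a concept is summarized by an expectation Ex, an entropy En, and a hyper-entropy He, from which "cloud drops" are sampled. A minimal sketch with illustrative parameter values:

        import numpy as np

        def forward_cloud(ex, en, he, n_drops=1000, seed=0):
            # Ex: expectation; En: entropy (dispersion); He: hyper-entropy
            # (uncertainty of the dispersion itself).
            rng = np.random.default_rng(seed)
            en_prime = rng.normal(en, he, n_drops)             # randomized entropy
            x = rng.normal(ex, np.abs(en_prime))               # drop positions
            mu = np.exp(-(x - ex) ** 2 / (2 * en_prime ** 2))  # membership degrees
            return x, mu

        # A feature extracted from expression images might be summarized by
        # (Ex, En, He) = (0.6, 0.1, 0.01); sampled drops visualize its uncertainty.
        drops, membership = forward_cloud(0.6, 0.1, 0.01)
        print(drops[:5], membership[:5])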

  20. Plain faces are more expressive: comparative study of facial colour, mobility and musculature in primates.

    Science.gov (United States)

    Santana, Sharlene E; Dobson, Seth D; Diogo, Rui

    2014-05-01

    Facial colour patterns and facial expressions are among the most important phenotypic traits that primates use during social interactions. While colour patterns provide information about the sender's identity, expressions can communicate its behavioural intentions. Extrinsic factors, including social group size, have shaped the evolution of facial coloration and mobility, but intrinsic relationships and trade-offs likely operate in their evolution as well. We hypothesize that complex facial colour patterning could reduce how salient facial expressions appear to a receiver, and thus species with highly expressive faces would have evolved uniformly coloured faces. We test this hypothesis through a phylogenetic comparative study, and explore the underlying morphological factors of facial mobility. Supporting our hypothesis, we find that species with highly expressive faces have plain facial colour patterns. The number of facial muscles does not predict facial mobility; instead, species that are larger and have a larger facial nucleus have more expressive faces. This highlights a potential trade-off between facial mobility and colour patterning in primates and reveals complex relationships between facial features during primate evolution. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  1. Comparing the Recognition of Emotional Facial Expressions in Patients with

    Directory of Open Access Journals (Sweden)

    Abdollah Ghasempour

    2014-05-01

    Full Text Available Background: Recognition of emotional facial expressions is one of the psychological factors involved in obsessive-compulsive disorder (OCD) and major depressive disorder (MDD). The aim of the present study was to compare the ability to recognize emotional facial expressions in patients with OCD and patients with MDD. Materials and Methods: The present study is a cross-sectional, ex-post-facto investigation (causal-comparative method). Forty participants (20 patients with OCD, 20 patients with MDD) were selected through an availability sampling method from the clients referred to the Tabriz Bozorgmehr clinic. Data were collected through a Structured Clinical Interview and a Recognition of Emotional Facial States test, and analyzed using MANOVA. Results: The obtained results showed no significant difference between the groups in the mean scores for recognizing the emotional states of surprise, sadness, happiness, and fear; however, the groups differed significantly in the mean scores for recognizing the states of disgust and anger (p<0.05). Conclusion: Patients suffering from OCD and from MDD show equal ability to recognize surprise, sadness, happiness, and fear; however, the former are less competent than the latter in recognizing disgust and anger.

  2. Body actions change the appearance of facial expressions.

    Science.gov (United States)

    Fantoni, Carlo; Gerbino, Walter

    2014-01-01

    Perception, cognition, and emotion do not operate along segregated pathways; rather, their adaptive interaction is supported by various sources of evidence. For instance, the aesthetic appraisal of powerful mood inducers like music can bias the facial expression of emotions towards mood congruency. In four experiments we showed similar mood-congruency effects elicited by the comfort/discomfort of body actions. Using a novel Motor Action Mood Induction Procedure, we let participants perform comfortable/uncomfortable visually-guided reaches and tested them in a facial emotion identification task. Through the alleged mediation of motor action induced mood, action comfort enhanced the quality of the participant's global experience (a neutral face appeared happy and a slightly angry face neutral), while action discomfort made a neutral face appear angry and a slightly happy face neutral. Furthermore, uncomfortable (but not comfortable) reaching improved the sensitivity for the identification of emotional faces and reduced the identification time of facial expressions, as a possible effect of hyper-arousal from an unpleasant bodily experience.

  3. Determining the threshold for asymmetry detection in facial expressions

    Science.gov (United States)

    Hohman, Marc H.; Kim, Sang W.; Heller, Elizabeth S.; Frigerio, Alice; Heaton, James T.; Hadlock, Tessa A.

    2015-01-01

    Objective: To quantify the threshold for human perception of asymmetry in eyebrow elevation, eye closure, and smiling, and to ascertain whether asymmetry detection thresholds and perceived severity of asymmetry differ across distinct facial zones. Study Design: Online survey. Methods: Photographs of a female volunteer performing eyebrow elevation, eye closure, and smiling were digitally manipulated to introduce left-to-right asymmetry in 1 mm increments from 0 to 6 mm. One hundred forty-five participants viewed these photographs in an online survey measuring accuracy of asymmetry detection and perceived unnaturalness of the expression (on a scale of 1 to 5). Results: Photographs of facial asymmetries were correctly judged as asymmetric over 90% of the time at 2 mm or more of asymmetry in eyelid closure, and at 3 mm or more of asymmetry during smiling. Identification of eyebrow elevation asymmetry rose gradually from 23% correct to 97% correct across the range of 1 to 6 mm of asymmetry. Greater degrees of asymmetry were rated as significantly more unnatural across all expressions (3 tests; X2(6, N = 145) = 405.52 to 656.27, all P < .001). […] facial weakness and may provide more objective goals for facial reanimation procedures. PMID:23900726
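
    To make the notion of a detection threshold concrete, the sketch below fits a logistic psychometric function to percent-correct data; the numbers are illustrative stand-ins shaped like the eyebrow results above, not the study's data.

        import numpy as np
        from scipy.optimize import curve_fit

        # Illustrative percent-correct values versus asymmetry in mm.
        mm = np.array([1, 2, 3, 4, 5, 6], dtype=float)
        p_correct = np.array([0.23, 0.45, 0.70, 0.88, 0.95, 0.97])

        def logistic(x, x0, k):
            # Psychometric function rising from 0 to 1; x0 is the 50% point.
            return 1.0 / (1.0 + np.exp(-k * (x - x0)))

        (x0, k), _ = curve_fit(logistic, mm, p_correct, p0=[3.0, 1.0])
        print(f"50% detection threshold ~ {x0:.2f} mm (slope {k:.2f})")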

  5. Measurement of distal EMG signals using a wearable device for reading facial expressions.

    Science.gov (United States)

    Gruebler, Anna; Suzuki, Kenji

    2010-01-01

    In this paper we present a quantitative analysis of electrode positions on the side of the face for facial expression recognition using facial bioelectrical signals. We show that distal electrode locations over areas of low facial mobility yield signals of substantial amplitude that are correlated with signals captured at the traditional positions over the facial muscles. We report on electrode position choice as well as successful facial expression identification using computational methods. We also propose a wearable interface device that can detect facial bioelectrical signals distally in a continuous manner while remaining unobtrusive to the user. The proposed device is worn on the side of the face and captures signals that are considered to be a mixture of facial electromyographic signals and other bioelectrical signals. Finally, we show the design of an interface that can be worn comfortably by the user and makes facial expression recognition possible.
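
    A rough sketch of the kind of check reported above: bandpass-filter a distally recorded signal and one recorded over the muscle, then correlate them. The sampling rate, filter band, and stand-in signals are all assumptions.

        import numpy as np
        from scipy.signal import butter, filtfilt

        FS = 1000.0  # assumed sampling rate in Hz

        def bandpass(sig, lo=20.0, hi=450.0, fs=FS, order=4):
            # Zero-phase Butterworth bandpass over the usual surface-EMG band.
            b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, sig)

        # Stand-in signals: the distal electrode sees an attenuated copy of the
        # muscle-site signal plus unrelated bioelectrical noise.
        rng = np.random.default_rng(2)
        muscle = rng.normal(size=5000)
        distal = 0.3 * muscle + 0.5 * rng.normal(size=5000)

        r = np.corrcoef(bandpass(muscle), bandpass(distal))[0, 1]
        print(f"correlation between distal and muscle-site EMG: r = {r:.2f}")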

  6. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan

    2015-01-01

    […] experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). […] analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50–130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320–450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre […]

  7. The Effect of Sad Facial Expressions on Weight Judgment

    Directory of Open Access Journals (Sweden)

    Trent D Weston

    2015-04-01

    Full Text Available Although the evaluation of others' body weight (e.g., normal or overweight) relies on perceptual impressions, it can also be influenced by other psychosocial factors. In this study, we explored the effect of task-irrelevant emotional facial expressions on judgments of body weight and the relationship between emotion-induced weight judgment bias and other psychosocial variables, including attitudes towards obese persons. Forty-four participants were asked to quickly make binary body weight decisions for 960 randomized sad and neutral faces of varying weight levels presented on a computer screen. The results showed that sad facial expressions systematically decreased the decision threshold for judging male faces as overweight. This perceptual decision bias was positively correlated with the belief that being overweight is not under the control of obese persons. Our results provide experimental evidence that task-irrelevant emotional expressions can systematically change the decision threshold for weight judgments, demonstrating that sad expressions can make faces appear more overweight than they would otherwise be judged.

  8. Hierarchical Recognition Scheme for Human Facial Expression Recognition Systems

    Science.gov (United States)

    Siddiqi, Muhammad Hameed; Lee, Sungyoung; Lee, Young-Koo; Khan, Adil Mehmood; Truc, Phan Tran Ho

    2013-01-01

    Over the last decade, human facial expression recognition (FER) has emerged as an important research area. Several factors make FER a challenging research problem: varying light conditions in training and test images; the need for automatic and accurate face detection before feature extraction; and high similarity among different expressions, which makes it difficult to distinguish them with high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expression recognition (HL-FER) system to tackle these problems. Unlike previous systems, the HL-FER uses a pre-processing step to eliminate lighting effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and uses a hierarchical scheme to overcome the problem of high similarity among different expressions. Unlike most previous works, which were evaluated using a single dataset, the performance of the HL-FER is assessed using three publicly available datasets under three different experimental settings: n-fold cross validation based on subjects for each dataset separately; an n-fold cross validation rule based on datasets; and, finally, a last set of experiments to assess the effectiveness of each module of the HL-FER separately. A weighted average recognition accuracy of 98.7% across the three datasets, using three classifiers, indicates the success of employing the HL-FER for human FER. PMID:24316568
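
    The sketch below shows only the LDA-plus-n-fold-cross-validation core that such a system builds on, with synthetic features standing in for the extracted ones; it is not the HL-FER pipeline itself.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in features: 300 samples, 50 dimensions, 7 classes.
        rng = np.random.default_rng(3)
        X = rng.normal(size=(300, 50))
        y = rng.integers(0, 7, size=300)
        X += y[:, None] * 0.5  # inject class-dependent structure

        # n-fold cross validation of an LDA classifier (here n = 10).
        scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=10)
        print(f"mean accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")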

  9. Facial expression training optimises viewing strategy in children and adults.

    Directory of Open Access Journals (Sweden)

    Petra M J Pollux

    Full Text Available This study investigated whether training-related improvements in facial expression categorization are facilitated by spontaneous changes in gaze behaviour in adults and nine-year-old children. Four sessions of a self-paced, free-viewing training task required participants to categorize happy, sad, and fearful expressions of varying intensity. No instructions about eye movements were given. Eye movements were recorded in the first and fourth training sessions, and new faces were introduced in session four to establish transfer effects of learning. Adults focused most on the eyes in all sessions, and increased expression categorization accuracy after training coincided with a strengthening of this eye bias in gaze allocation. In children, training-related behavioural improvements coincided with an overall shift in gaze focus towards the eyes (resulting in more adult-like gaze distributions) and towards the mouth for happy faces in the second fixation. Gaze distributions were not influenced by expression intensity or by the introduction of new faces. It was proposed that training enhanced the use of a uniform, predominantly eye-biased gaze strategy in children in order to optimise the extraction of relevant cues for discrimination between subtle facial expressions.

  10. Impaired facial emotion recognition and preserved reactivity to facial expressions in people with severe dementia.

    Science.gov (United States)

    Guaita, A; Malnati, M; Vaccaro, R; Pezzati, R; Marcionetti, J; Vitali, S F; Colombo, M

    2009-01-01

    The ability to decode emotional facial expressions may be damaged early in frontotemporal dementia but is relatively well preserved in Alzheimer's disease (AD). Nevertheless, data on the relationship between dementia severity and the ability to recognize facial emotions are conflicting and insufficient, particularly for the moderate-to-severe stage of the disease. The present study extends the existing literature by: (1) assessing people in the moderate and severe stages of dementia, compared with people without cognitive impairment; and (2) assessing not only recognition of, but also reactivity to, facial expressions of emotion. The ability to understand facial emotions was evaluated in 79 patients with dementia and 64 healthy elderly people. The test consisted of showing 14 photographic representations of 7 emotions, from both male and female faces, representing happiness, sadness, fear, disgust, boredom, anger and surprise. Patients were asked to observe each face and to identify the emotion, either by naming or by describing it. Spontaneous reactivity to the facial expressions was then videotaped and classified as congruous or incongruous by two independent observers, who showed good inter-rater reliability. Of the patients with dementia, 53% recognized up to 5 of the 14 emotions, whereas healthy controls recognized a mean of 8.4, a value also reached by patients who scored 16 on the MMSE. Happiness was the emotion most often identified by both patients and controls, and, in general, positive emotions were better recognized than negative ones, confirming data in the literature. Regarding reactions to facial emotion stimuli, there was no significant difference between the control group and the people with dementia for any of the emotions. These data show that patients with dementia can recognize and react to facial emotions even in the severe stage of the disease, suggesting the

  11. MicroRNA-122 modulates the rhythmic expression profile of the circadian deadenylase Nocturnin in mouse liver.

    Science.gov (United States)

    Kojima, Shihoko; Gatfield, David; Esau, Christine C; Green, Carla B

    2010-06-22

    Nocturnin is a circadian clock-regulated deadenylase thought to control mRNA expression post-transcriptionally through poly(A) tail removal. The expression of Nocturnin is robustly rhythmic in liver at both the mRNA and protein levels, and mice lacking Nocturnin are resistant to diet-induced obesity and hepatic steatosis. Here we report that Nocturnin expression is regulated by microRNA-122 (miR-122), a liver-specific miRNA. We found that the 3'-untranslated region (3'-UTR) of Nocturnin mRNA harbors one putative recognition site for miR-122, and this site is conserved among mammals. Using a luciferase reporter construct with wild-type or mutant Nocturnin 3'-UTR sequence, we demonstrated that overexpression of miR-122 can down-regulate luciferase activity levels and that this effect is dependent on the presence of the putative miR-122 recognition site. Additionally, the use of an antisense oligonucleotide to knock down miR-122 in vivo resulted in significant up-regulation of both Nocturnin mRNA and protein expression in mouse liver during the night, resulting in Nocturnin rhythms with increased amplitude. Together, these data demonstrate that the normal rhythmic profile of Nocturnin expression in liver is shaped in part by miR-122. Previous studies have implicated Nocturnin and miR-122 as important post-transcriptional regulators of both lipid metabolism and circadian clock-controlled gene expression in the liver. Therefore, the demonstration that miR-122 plays a role in regulating Nocturnin expression suggests that this may be an important intersection between hepatic metabolic and circadian control.
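
    The "putative recognition site" is a seed match: a stretch of the 3'-UTR complementary to nucleotides 2-8 of the miRNA. The sketch below scans a UTR string for the canonical miR-122 7mer site; the UTR fragment shown is a made-up placeholder, not the actual Nocturnin UTR.

```python
# Sketch: scanning a 3'-UTR for the canonical miR-122 seed match.
# The UTR string is a hypothetical fragment for illustration only.
MIR122 = "UGGAGUGUGACAAUGGUGUUUG"  # mature miR-122-5p, 5'->3'

def seed_match(mirna, utr_dna):
    """Return positions of 7mer seed matches (miRNA nt 2-8) in a UTR."""
    comp = {"A": "U", "U": "A", "G": "C", "C": "G"}
    seed = mirna[1:8]                                # nucleotides 2-8
    site = "".join(comp[b] for b in reversed(seed))  # reverse complement
    site_dna = site.replace("U", "T")                # DNA coordinates
    return [i for i in range(len(utr_dna) - 6)
            if utr_dna[i:i + 7] == site_dna]

utr = "GGTACACTCCAAGTT"  # hypothetical fragment containing one site
print(seed_match(MIR122, utr))  # -> [3]
```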

  12. MicroRNA-122 modulates the rhythmic expression profile of the circadian deadenylase Nocturnin in mouse liver.

    Directory of Open Access Journals (Sweden)

    Shihoko Kojima

    Full Text Available Nocturnin is a circadian clock-regulated deadenylase thought to control mRNA expression post-transcriptionally through poly(A) tail removal. The expression of Nocturnin is robustly rhythmic in liver at both the mRNA and protein levels, and mice lacking Nocturnin are resistant to diet-induced obesity and hepatic steatosis. Here we report that Nocturnin expression is regulated by microRNA-122 (miR-122), a liver-specific miRNA. We found that the 3'-untranslated region (3'-UTR) of Nocturnin mRNA harbors one putative recognition site for miR-122, and this site is conserved among mammals. Using a luciferase reporter construct with wild-type or mutant Nocturnin 3'-UTR sequence, we demonstrated that overexpression of miR-122 can down-regulate luciferase activity levels and that this effect is dependent on the presence of the putative miR-122 recognition site. Additionally, the use of an antisense oligonucleotide to knock down miR-122 in vivo resulted in significant up-regulation of both Nocturnin mRNA and protein expression in mouse liver during the night, resulting in Nocturnin rhythms with increased amplitude. Together, these data demonstrate that the normal rhythmic profile of Nocturnin expression in liver is shaped in part by miR-122. Previous studies have implicated Nocturnin and miR-122 as important post-transcriptional regulators of both lipid metabolism and circadian clock-controlled gene expression in the liver. Therefore, the demonstration that miR-122 plays a role in regulating Nocturnin expression suggests that this may be an important intersection between hepatic metabolic and circadian control.

  13. Intracellular high cholesterol content disorders the clock genes, apoptosis-related genes and fibrinolytic-related genes rhythmic expressions in human plaque-derived vascular smooth muscle cells.

    Science.gov (United States)

    Lin, Changpo; Tang, Xiao; Xu, Lirong; Qian, Ruizhe; Shi, Zhenyu; Wang, Lixin; Cai, Tingting; Yan, Dong; Fu, Weiguo; Guo, Daqiao

    2017-07-10

    The clock genes are involved in regulating cardiovascular functions, and disordered clock gene expression leads to circadian disruption of clock-controlled genes (CCGs), contributing to atherosclerotic plaque formation and rupture. Our previous study revealed that the rhythmic expression of clock genes was attenuated in human plaque-derived vascular smooth muscle cells (PVSMCs), but did not examine the expression of downstream CCGs or the underlying molecular mechanism. In this study, we examined differences in rhythmic CCG expression between human normal carotid VSMCs (NVSMCs) and PVSMCs. Furthermore, we compared cholesterol and triglyceride levels between the two groups and their link to clock gene and CCG expression. Normal carotids from seven healthy donors and 19 carotid plaques yielded viable cultured NVSMCs and PVSMCs. The expression levels of target genes were measured by quantitative real-time PCR and Western blot, and intracellular cholesterol and triglyceride levels were measured with assay kits. The circadian expression of apoptosis-related and fibrinolytic-related genes was disordered in PVSMCs, and cholesterol levels were significantly higher. After treatment with cholesterol or oxidized low-density lipoprotein (ox-LDL), clock gene expression was inhibited, and the rhythmic expression of clock genes, apoptosis-related genes and fibrinolytic-related genes was disturbed in NVSMCs, resembling the pattern seen in PVSMCs. These results suggest that the high intracellular cholesterol content of PVSMCs disorders the rhythmic expression of clock genes and CCGs; further studies should be conducted to clarify the specific molecular mechanisms involved.

  14. Fear modulates visual awareness similarly for facial and bodily expressions

    Directory of Open Access Journals (Sweden)

    Bernard M.C. Stienen

    2011-11-01

    Full Text Available Background: Social interaction depends on a multitude of signals carrying information about the emotional state of others. Past research has focused on the perception of facial expressions, while perception of whole-body signals has only been studied recently. The relative importance of facial and bodily signals is still poorly understood. To better understand the relative contribution of affective signals from the face alone or from the rest of the body, we used a binocular rivalry experiment, which is well suited to contrasting two classes of stimuli to test processing sensitivity to either one and to ask how emotion modulates this sensitivity. We report two behavioral experiments addressing these questions. Method: In the first experiment we directly contrasted fearful, angry and neutral bodies and faces. We always presented bodies to one eye and faces to the other, simultaneously, for 60 seconds, and asked participants to report what they perceived. In the second experiment we focused specifically on the role of fearful expressions of faces and bodies. Results: Taken together, the two experiments show that there is no clear bias towards either the face or the body when the expressions of the body and face are neutral or angry. However, perceptual dominance in favor of either the face or the body is a function of which stimulus class expresses fear.

  15. Errors in identifying and expressing emotion in facial expressions, voices, and postures unique to social anxiety.

    Science.gov (United States)

    Walker, Amy S; Nowicki, Stephen; Jones, Jeffrey; Heimann, Lisa

    2011-01-01

    The purpose of the present study was to see if 7-10-year-old socially anxious children (n = 26) made systematic errors in identifying and sending emotions in facial expressions, paralanguage, and postures as compared with the more random errors of children who were inattentive-hyperactive (n = 21). It was found that socially anxious children made more errors in identifying anger and fear in children's facial expressions and anger in adults' postures and in expressing anger in their own facial expressions than did their inattentive-hyperactive peers. Results suggest that there may be systematic difficulties specifically in visual nonverbal emotion communication that contribute to the personal and social difficulties socially anxious children experience.

  16. Misinterpretation of Facial Expressions of Emotion in Verbal Adults with Autism Spectrum Disorder

    Science.gov (United States)

    Eack, Shaun M.; Mazefsky, Carla A.; Minshew, Nancy J.

    2015-01-01

    Facial emotion perception is significantly affected in autism spectrum disorder, yet little is known about how individuals with autism spectrum disorder misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with autism spectrum…

  17. Deficits in the Mimicry of Facial Expressions in Parkinson's Disease.

    Science.gov (United States)

    Livingstone, Steven R; Vezer, Esztella; McGarry, Lucy M; Lang, Anthony E; Russo, Frank A

    2016-01-01

    Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behavior may be impaired in individuals with Parkinson's disease, for whom the loss of facial movements is a clinical feature. To assess the presence of facial mimicry in patients with Parkinson's disease, twenty-seven non-depressed patients with idiopathic Parkinson's disease and 28 age-matched controls had their facial muscles recorded with electromyography while they observed presentations of calm, happy, sad, angry, and fearful emotions. Patients exhibited reduced amplitude and delayed onset in the zygomaticus major muscle region (smiling response) following happy presentations (patients M = 0.02, 95% confidence interval [CI] -0.15 to 0.18, controls M = 0.26, CI 0.14 to 0.37, ANOVA, effect size [ES] = 0.18, p < 0.001). Although patients exhibited activation of the corrugator supercilii and medial frontalis (frowning response) following sad and fearful presentations, the frontalis response to sad presentations was attenuated relative to controls (patients M = 0.05, CI -0.08 to 0.18, controls M = 0.21, CI 0.09 to 0.34, ANOVA, ES = 0.07, p = 0.017). The amplitude of patients' zygomaticus activity in response to positive emotions was negatively correlated with response times for ratings of emotional identification, suggesting a motor-behavioral link (r = -0.45, p = 0.02, two-tailed). Patients showed decreased mimicry overall, mimicking other people's frowns to some extent but presenting with profoundly weakened and delayed smiles. These findings open a new avenue of inquiry into the "masked face" syndrome of PD.
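
    The two EMG measures reported above, response amplitude and onset latency, can be obtained in several ways; the sketch below shows one conventional approach, assuming a rectified-and-smoothed signal, a baseline-corrected mean amplitude, and onset defined as the first sustained crossing of a baseline-derived threshold. The signal, window lengths, and threshold factor are illustrative, not the authors' exact pipeline.

```python
# Sketch: EMG response amplitude and onset latency from a single trial.
# The signal is simulated; parameters are illustrative assumptions.
import numpy as np

def emg_amplitude_and_onset(emg, fs, baseline_s=1.0, k=3.0):
    """Baseline-corrected response amplitude and onset latency (ms)."""
    win = int(0.05 * fs)                          # 50-ms smoothing window
    smooth = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")
    n0 = int(baseline_s * fs)
    base = smooth[:n0]
    amplitude = smooth[n0:].mean() - base.mean()
    threshold = base.mean() + k * base.std()
    above = (smooth[n0:] > threshold).astype(int)
    run = np.convolve(above, np.ones(win), mode="valid")
    idx = np.flatnonzero(run == win)              # sustained for 50 ms
    onset_ms = 1000 * idx[0] / fs if idx.size else None
    return amplitude, onset_ms

fs = 1000
rng = np.random.default_rng(2)
emg = rng.normal(0.0, 1.0, 3 * fs)                # 1 s baseline + 2 s trial
emg[int(1.5 * fs):] += 3.0 * np.abs(rng.normal(0.0, 1.0, int(1.5 * fs)))
print(emg_amplitude_and_onset(emg, fs))           # onset near 500 ms
```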

  18. Genome-wide profiling of 24 hr diel rhythmicity in the water flea, Daphnia pulex: network analysis reveals rhythmic gene expression and enhances functional gene annotation

    OpenAIRE

    Rund, Samuel S. C.; Yoo, Boyoung; Alam, Camille; Green, Taryn; Stephens, Melissa T.; Zeng, Erliang; George, Gary F.; Sheppard, Aaron D.; Duffield, Giles E.; Milenković, Tijana; Pfrender, Michael E.

    2016-01-01

    Background: Marine and freshwater zooplankton exhibit daily rhythmic patterns of behavior and physiology which may be regulated directly by the light:dark (LD) cycle and/or a molecular circadian clock. One of the best-studied zooplankton taxa, the freshwater crustacean Daphnia, has a 24 h diel vertical migration (DVM) behavior whereby the organism travels up and down through the water column daily. DVM plays a critical role in resource tracking and the behavioral avoidance of predators and dam...

  19. Deficits in the mimicry of facial expressions in Parkinson’s disease

    OpenAIRE

    Livingstone, Steven R.; Vezer, Esztella; McGarry, Lucy M.; Lang, Anthony E.; Russo, Frank A.

    2016-01-01

    Background: Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behaviour may be impaired in individuals with Parkinson’s disease, for whom the loss of facial movements is a clinical feature. Objective: To assess the presence of facial mimicry in patients with Parkinson’s disease. Method: Twenty-seven non-depressed patients with idiopathic Parkinson’s disease and twenty-eight age-matched controls had their facial muscles recorded with ...

  20. Behavioural responses to facial and postural expressions of emotion: An interpersonal circumplex approach.

    Science.gov (United States)

    Aan Het Rot, Marije; Enea, Violeta; Dafinoiu, Ion; Iancu, Sorina; Taftă, Steluţa A; Bărbuşelu, Mariana

    2017-11-01

    While the recognition of emotional expressions has been extensively studied, the behavioural response to these expressions has not. In the interpersonal circumplex, behaviour is defined in terms of communion and agency. In this study, we examined behavioural responses to both facial and postural expressions of emotion. We presented 101 Romanian students with facial and postural stimuli involving individuals ('targets') expressing happiness, sadness, anger, or fear. Using an interpersonal grid, participants simultaneously indicated how communal (i.e., quarrelsome or agreeable) and agentic (i.e., dominant or submissive) they would be towards people displaying these expressions. Participants were agreeable-dominant towards targets showing happy facial expressions and primarily quarrelsome towards targets with angry or fearful facial expressions. Responses to targets showing sad facial expressions were neutral on both dimensions of interpersonal behaviour. Postural versus facial expressions of happiness and anger elicited similar behavioural responses. Participants responded in a quarrelsome-submissive way to fearful postural expressions and in an agreeable way to sad postural expressions. Behavioural responses to the various facial expressions were largely comparable to those previously observed in Dutch students. Observed differences may be explained from participants' cultural background. Responses to the postural expressions largely matched responses to the facial expressions. © 2017 The British Psychological Society.

  1. FaceTube: predicting personality from facial expressions of emotion in online conversational video

    OpenAIRE

    Biel, Joan-Isaac; Teijeiro-Mosquera, Lucia; Gatica-Perez, Daniel

    2012-01-01

    The advances in automatic facial expression recognition make it possible to mine and characterize large amounts of data, opening a wide research domain on behavioral understanding. In this paper, we leverage the use of a state-of-the-art facial expression recognition technology to characterize users of a popular type of online social video, conversational vlogs. First, we propose the use of several activity cues to characterize vloggers based on frame-by-frame estimates of facial expressions o...

  2. Facial expression recognition based on improved local ternary pattern and stacked auto-encoder

    Science.gov (United States)

    Wu, Yao; Qiu, Weigen

    2017-08-01

    In order to enhance the robustness of facial expression recognition, we propose a method based on an improved Local Ternary Pattern (LTP) combined with a Stacked Auto-Encoder (SAE). The method extracts features with the improved LTP operator and then uses an improved deep network as the detector and classifier of the LTP features, realizing the combination of LTP and a deep network for facial expression recognition. The recognition rate on the CK+ database improved significantly.
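
    For reference, the basic (non-improved) Local Ternary Pattern operator codes each neighbour of a pixel as +1, 0 or -1 according to whether it exceeds the centre by a threshold t, falls within ±t, or undercuts it, and then splits the ternary code into two binary patterns. A minimal sketch with an illustrative threshold and image follows; the paper's "improved" LTP adds further steps not reproduced here.

```python
# Sketch: the basic 3x3 Local Ternary Pattern operator. Ternary codes
# are split into "upper" and "lower" binary patterns, the standard way
# of making LTP histogram-friendly. Threshold and image are illustrative.
import numpy as np

def ltp(image, t=5):
    H, W = image.shape
    upper = np.zeros((H - 2, W - 2), dtype=np.uint8)
    lower = np.zeros((H - 2, W - 2), dtype=np.uint8)
    # 8 neighbours, clockwise from top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            c = int(image[i, j])
            for k, (di, dj) in enumerate(offsets):
                p = int(image[i + di, j + dj])
                if p >= c + t:          # ternary code +1
                    upper[i - 1, j - 1] |= 1 << k
                elif p <= c - t:        # ternary code -1
                    lower[i - 1, j - 1] |= 1 << k
    return upper, lower

img = np.random.default_rng(0).integers(0, 256, (6, 6), dtype=np.uint8)
u, l = ltp(img)
print(u.shape, l.shape)  # (4, 4) (4, 4)
```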

  3. Accurate identification of fear facial expressions predicts prosocial behavior.

    Science.gov (United States)

    Marsh, Abigail A; Kozak, Megan N; Ambady, Nalini

    2007-05-01

    The fear facial expression is a distress cue that is associated with the provision of help and prosocial behavior. Prior psychiatric studies have found deficits in the recognition of this expression by individuals with antisocial tendencies. However, no prior study has shown accuracy for recognition of fear to predict actual prosocial or antisocial behavior in an experimental setting. In 3 studies, the authors tested the prediction that individuals who recognize fear more accurately will behave more prosocially. In Study 1, participants who identified fear more accurately also donated more money and time to a victim in a classic altruism paradigm. In Studies 2 and 3, participants' ability to identify the fear expression predicted prosocial behavior in a novel task designed to control for confounding variables. In Study 3, accuracy for recognizing fear proved a better predictor of prosocial behavior than gender, mood, or scores on an empathy scale.

  4. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least-squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To evaluate the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbors (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
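
    A sketch of the classifier idea, assuming the usual sparse-representation-classification recipe: code each test sample over the matrix of training samples with non-negative least squares, then assign the class whose training atoms reconstruct the sample with the smallest residual. Data are synthetic stand-ins for LBP or raw-pixel features.

```python
# Sketch: an SRC-style classifier using non-negative least squares.
import numpy as np
from scipy.optimize import nnls

def nnls_src_predict(X_train, y_train, x_test):
    D = X_train.T                      # dictionary: features x samples
    coef, _ = nnls(D, x_test)          # non-negative code over the dictionary
    classes = np.unique(y_train)
    residuals = []
    for c in classes:
        coef_c = np.where(y_train == c, coef, 0.0)  # keep class-c atoms only
        residuals.append(np.linalg.norm(x_test - D @ coef_c))
    return classes[int(np.argmin(residuals))]

rng = np.random.default_rng(1)
X_train = np.abs(rng.normal(size=(60, 20)))   # 60 samples, 20 features
y_train = np.repeat(np.arange(6), 10)         # 6 expression classes
x_test = X_train[7] + 0.05 * rng.normal(size=20)
print(nnls_src_predict(X_train, y_train, x_test))  # likely class 0
```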

  5. Facial expression recognition and emotional regulation in narcolepsy with cataplexy.

    Science.gov (United States)

    Bayard, Sophie; Croisier Langenier, Muriel; Dauvilliers, Yves

    2013-04-01

    Cataplexy is pathognomonic of narcolepsy with cataplexy, and is defined as a transient loss of muscle tone triggered by strong emotions. Recent research suggests abnormal amygdala function in narcolepsy with cataplexy. Emotion processing and emotional regulation strategies are complex functions involving cortical and limbic structures, like the amygdala. As the amygdala has been shown to play a role in facial emotion recognition, we tested the hypothesis that patients with narcolepsy with cataplexy would have impaired recognition of facial emotional expressions compared with patients affected by central hypersomnia without cataplexy and with healthy controls. We also aimed to determine whether cataplexy modulates emotional regulation strategies. Emotional intensity, arousal and valence ratings of Ekman faces displaying happiness, surprise, fear, anger, disgust, sadness and neutral expressions from 21 drug-free patients with narcolepsy with cataplexy were compared with those of 23 drug-free sex-, age- and intellectual-level-matched adult patients with hypersomnia without cataplexy and 21 healthy controls. All participants underwent polysomnography recording and multiple sleep latency tests, and completed depression, anxiety and emotional regulation questionnaires. Performance of patients with narcolepsy with cataplexy did not differ from that of patients with hypersomnia without cataplexy or healthy controls, either in intensity ratings of each emotion on its prototypical label or in mean ratings for valence and arousal. Moreover, patients with narcolepsy with cataplexy did not use different emotional regulation strategies, and their levels of depressive and anxious symptoms did not differ from the other groups. Our results demonstrate that people with narcolepsy with cataplexy accurately perceive and discriminate facial emotions, and regulate their emotions normally. The absence of alteration of perceived affective valence remains a point of major clinical interest in narcolepsy with cataplexy.

  6. Shared Gaussian Process Latent Variable Model for Multi-view Facial Expression Recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    Facial-expression data often appear in multiple views due either to head movements or to the camera position. Existing methods for multi-view facial expression recognition perform classification of the target expressions either by using classifiers learned separately for each view or by using a single

  7. Developmental Changes in Infants' Categorization of Anger and Disgust Facial Expressions

    Science.gov (United States)

    Ruba, Ashley L.; Johnson, Kristin M.; Harris, Lasana T.; Wilbourn, Makeba Parramore

    2017-01-01

    For decades, scholars have examined how children first recognize emotional facial expressions. This research has found that infants younger than 10 months can discriminate negative, within-valence facial expressions in looking time tasks, and children older than 24 months struggle to categorize these expressions in labeling and free-sort tasks.…

  8. Discriminative shared Gaussian processes for multi-view and view-invariant facial expression recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    Images of facial expressions are often captured from various views as a result of either head movements or variable camera position. Existing methods for multiview and/or view-invariant facial expression recognition typically perform classification of the observed expression using either classifiers

  9. View-constrained latent variable model for multi-view facial expression classification

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2014-01-01

    We propose a view-constrained latent variable model for multi-view facial expression classification. In this model, we first learn a discriminative manifold shared by multiple views of facial expressions, followed by the expression classification in the shared manifold. For learning, we use the

  10. Predicting Emotions in Facial Expressions from the Annotations in Naturally Occurring First Encounters

    DEFF Research Database (Denmark)

    Navarretta, Costanza

    2014-01-01

    to identify emotions in facial expressions. In the classification experiments, we test to what extent emotions expressed in naturally-occurring conversations can be identified automatically by a classifier trained on the manual annotations of the shape of facial expressions and co-occurring speech tokens. We...

  11. Discrimination and recognition of facial expressions of emotion and their links with voluntary control of facial musculature in Parkinson's disease.

    Science.gov (United States)

    Marneweck, Michelle; Palermo, Romina; Hammond, Geoff

    2014-11-01

    To explore perception of facial expressions of emotion and its link with voluntary facial musculature control in Parkinson's disease (PD). We investigated in 2 sets of experiments in PD patients and healthy controls the perceptual ability to discriminate (a) graded intensities of emotional from neutral expressions, (b) graded intensities of the same emotional expressions, (c) full-blown discrepant emotional expressions from 2 similar expressions and the more complex recognition ability to label full-blown emotional expressions. We tested an embodied simulationist account of emotion perception in PD, which predicts a link between the ability to perceive emotional expressions and facial musculature control. We also explored the contribution of the ability to extract facial information (besides emotion) to emotion perception in PD. Those with PD were, as a group, impaired relative to controls (with large effect sizes) in all measures of discrimination and recognition of emotional expressions, although some patients performed as well as the best performing controls. In support of embodied simulation, discrimination and recognition of emotional expressions correlated positively with voluntary control of facial musculature (after partialing out disease severity and age). Patients were also impaired at extracting information other than emotion from faces, specifically discriminating and recognizing identity from faces (with large effect sizes); identity discrimination correlated positively with emotion discrimination and recognition but not with voluntary facial musculature control (after partialing out disease severity and age). The results indicate that impaired sensory and sensorimotor processes, which are a function of disease severity, affect emotion perception in PD. PsycINFO Database Record (c) 2014 APA, all rights reserved.
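
    The correlations "after partialing out disease severity and age" reported above are partial correlations. A minimal sketch of computing one by correlating the residuals left after regressing both variables on the covariates; all data are synthetic placeholders.

```python
# Sketch: partial correlation between two measures, controlling for
# disease severity and age. All variables are simulated placeholders.
import numpy as np

def partial_corr(x, y, covariates):
    """Pearson r between x and y with covariates regressed out."""
    Z = np.column_stack([np.ones(len(x))] + list(covariates))
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(3)
severity = rng.normal(size=40)
age = rng.normal(size=40)
motor_control = 0.5 * severity + rng.normal(size=40)     # facial control
emotion_score = 0.6 * motor_control + rng.normal(size=40)
print(partial_corr(motor_control, emotion_score, [severity, age]))
```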

  12. Face in profile view reduces perceived facial expression intensity: an eye-tracking study.

    Science.gov (United States)

    Guo, Kun; Shaw, Heather

    2015-02-01

    Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, having a mechanism which allows invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because diagnostic cues from local facial features for decoding expressions could vary with viewpoints. Here we manipulated orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although quantitatively viewpoint had expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that the viewpoint-invariant facial expression processing is categorical perception, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Symmetrical and Asymmetrical Interactions between Facial Expressions and Gender Information in Face Perception.

    Science.gov (United States)

    Liu, Chengwei; Liu, Ying; Iqbal, Zahida; Li, Wenhui; Lv, Bo; Jiang, Zhongqing

    2017-01-01

    To investigate the interaction between facial expressions and facial gender information during face perception, the present study matched the intensities of the two types of information in face images and then adopted the orthogonal condition of the Garner Paradigm to present the images to participants, who were required to judge the gender and expression of the faces; the gender and expression presentations were varied orthogonally. Gender and expression processing displayed a mutual interaction. On the one hand, the judgment of angry expressions occurred faster when presented with male facial images; on the other hand, the classification of the female gender occurred faster when presented with a happy facial expression than with an angry facial expression. According to the event-related potential results, expression classification was influenced by gender during the face structural processing stage (as indexed by N170), which indicates the promotion or interference of facial gender with the coding of facial expression features. However, gender processing was affected by facial expressions in more stages, including the early (P1) and late (LPC) stages of perceptual processing, reflecting that emotional expression influences gender processing mainly by directing attention.

  14. Contributions of feature shapes and surface cues to the recognition of facial expressions.

    Science.gov (United States)

    Sormaz, Mladen; Young, Andrew W; Andrews, Timothy J

    2016-10-01

    Theoretical accounts of face processing often emphasise feature shapes as the primary visual cue to the recognition of facial expressions. However, changes in facial expression also affect the surface properties of the face. In this study, we investigated whether this surface information can also be used in the recognition of facial expression. First, participants identified facial expressions (fear, anger, disgust, sadness, happiness) from images that were manipulated such that they varied mainly in shape or mainly in surface properties. We found that the categorization of facial expression is possible in either type of image, but that different expressions are relatively dependent on surface or shape properties. Next, we investigated the relative contributions of shape and surface information to the categorization of facial expressions. This employed a complementary method that involved combining the surface properties of one expression with the shape properties from a different expression. Our results showed that the categorization of facial expressions in these hybrid images was equally dependent on the surface and shape properties of the image. Together, these findings provide a direct demonstration that both feature shape and surface information make significant contributions to the recognition of facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Puckering and Blowing Facial Expressions in People With Facial Movement Disorders

    Science.gov (United States)

    Denlinger, Rachel L; VanSwearingen, Jessie M; Cohn, Jeffrey F; Schmidt, Karen L

    2008-01-01

    Background and Purpose: People with facial movement disorders are instructed to perform various facial movements as part of their physical therapy rehabilitation. A difference in the movement of the orbicularis oris muscle has been demonstrated among people without facial nerve impairments when instructed to “pucker your lips” and to “blow, as if blowing out a candle.” The objective of this study was to determine whether the within-subject difference between “pucker your lips” and “blow, as if blowing out a candle” found in people without facial nerve impairments is present in people with facial movement disorders. Subjects and Methods: People (N=68) with unilateral facial movement disorders were observed as they produced puckering and blowing movements. Automated facial image analysis of both puckering and blowing was used to determine the difference between facial actions for the following movement variables: maximum speed, amplitude, duration, and corresponding asymmetry. Results: There was a difference between the amplitudes of movement for puckering and blowing. “Blow, as if blowing out a candle” produced a larger amplitude of movement. Discussion and Conclusion: The findings demonstrate that puckering and blowing movements in people with facial movement disorders differ in a manner that is consistent with differences found in people who are healthy. This information may be useful in the assessment of and intervention for facial movement disorders affecting the lower face. PMID:18617578

  16. The facial expression of schizophrenic patients applied with infrared thermal facial image sequence.

    Science.gov (United States)

    Jian, Bo-Lin; Chen, Chieh-Li; Chu, Wen-Lin; Huang, Min-Wei

    2017-06-24

    Schizophrenia is a neurological disease characterized by alterations to patients' cognitive functions and emotional expressions. Relevant studies often use magnetic resonance imaging (MRI) of the brain to explore structural differences and responsiveness within brain regions. However, as this technique is expensive and commonly induces claustrophobia, it is frequently refused by patients. Thus, this study used non-contact infrared thermal facial images (ITFIs) to analyze facial temperature changes evoked by different emotions in moderately and markedly ill schizophrenia patients. Schizophrenia is an emotion-related disorder, and images eliciting different types of emotions were selected from the international affective picture system (IAPS) and presented to subjects during ITFI collection. ITFIs were aligned using affine registration, and the changes induced by small irregular head movements were corrected. The average temperatures from the forehead, nose, mouth, left cheek, and right cheek were calculated, and continuous temperature changes were used as features. After performing dimensionality reduction and noise removal using the component analysis method, multivariate analysis of variance and the Support Vector Machine (SVM) classification algorithm were used to identify moderately and markedly ill schizophrenia patients. Analysis of five facial areas indicated significant temperature changes in the forehead and nose upon exposure to various emotional stimuli and in the right cheek upon evocation of high valence low arousal (HVLA) stimuli. The most significant P-value (lower than 0.001) was obtained in the forehead area upon evocation of disgust. Finally, when the features of forehead temperature changes in response to low valence high arousal (LVHA) were reduced to 9 using dimensionality reduction and noise removal, the identification rate was as high as 94.3%. Our results show that features obtained in the forehead, nose, and right cheek significantly
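
    A sketch of the general analysis pattern described above: dimensionality reduction on facial temperature-change features followed by SVM classification of the two severity groups. A PCA plus linear-SVM pipeline stands in for the paper's exact component-analysis and classification steps; features and labels are synthetic.

```python
# Sketch: dimensionality reduction + SVM classification of severity
# groups from temperature-change time series. Data are simulated.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
# 60 subjects x 200 time points of forehead temperature change
X = rng.normal(size=(60, 200))
y = rng.integers(0, 2, size=60)   # 0 = moderately, 1 = markedly ill

clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=5).mean())
```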

  17. Rhythmic Pressure Waves Induce Mucin5AC Expression via an EGFR-Mediated Signaling Pathway in Human Airway Epithelial Cells

    Science.gov (United States)

    Liu, Chunyi; Li, Qi; Kolosov, Victor P.; Perelman, Juliy M.

    2013-01-01

    Rhythmic pressure waves (RPW), mimicking the mechanical forces generated during normal breathing, play a key role in airway surface liquid (ASL) homeostasis. Because mucin5AC (MUC5AC) is a major component of ASL, we speculated that MUC5AC expression must also be regulated by RPW; however, little research has addressed this question. Our aim was therefore to test the effect and mechanism of RPW on MUC5AC expression in cultured human bronchial epithelial cells. Compared with the relevant controls, the transcriptional level of MUC5AC and the protein levels of MUC5AC, phospho-epidermal growth factor receptor (p-EGFR), phospho-extracellular signal-regulated kinase (p-ERK), and phospho-Akt (p-Akt) were all significantly increased after mechanical stimulation. This effect was significantly attenuated by transfection with EGFR-siRNA. Similarly, pretreatment with inhibitors of ERK or phosphatidylinositol 3-kinase (PI3K)/Akt, separately or jointly, also significantly reduced MUC5AC expression. Collectively, these results indicate that RPW modulate MUC5AC expression via the EGFR-PI3K-Akt/ERK signaling pathway in human bronchial epithelial cells. PMID:23768102

  18. Capturing Physiology of Emotion along Facial Muscles: A Method of Distinguishing Feigned from Involuntary Expressions

    Science.gov (United States)

    Khan, Masood Mehmood; Ward, Robert D.; Ingleby, Michael

    The ability to distinguish feigned from involuntary expressions of emotions could help in the investigation and treatment of neuropsychiatric and affective disorders and in the detection of malingering. This work investigates differences in emotion-specific patterns of thermal variations along the major facial muscles. Using experimental data extracted from 156 images, we attempted to classify patterns of emotion-specific thermal variations into neutral, and voluntary and involuntary expressions of positive and negative emotive states. Initial results suggest (i) each facial muscle exhibits a unique thermal response to various emotive states; (ii) the pattern of thermal variances along the facial muscles may assist in classifying voluntary and involuntary facial expressions; and (iii) facial skin temperature measurements along the major facial muscles may be used in automated emotion assessment.

  19. Decoding facial expressions based on face-selective and motion-sensitive areas.

    Science.gov (United States)

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
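
    A minimal sketch of ROI-based MVPA of the kind described: cross-validated linear SVM decoding of six expression classes from voxel patterns extracted from a face-selective or motion-sensitive region. The voxel patterns and ROI names here are synthetic placeholders, not the study's data.

```python
# Sketch: per-ROI multi-voxel pattern decoding of six expressions.
# Patterns are random stand-ins for beta estimates per block.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
rois = {
    "face-selective ROI": rng.normal(size=(120, 300)),
    "motion-sensitive ROI": rng.normal(size=(120, 250)),
}
labels = np.tile(np.arange(6), 20)  # 6 expressions x 20 blocks

for name, patterns in rois.items():
    acc = cross_val_score(LinearSVC(), patterns, labels, cv=5).mean()
    print(f"{name}: decoding accuracy = {acc:.2f} (chance = {1/6:.2f})")
```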

  20. Predicting advertising effectiveness by facial expressions in response to amusing persuasive stimuli

    NARCIS (Netherlands)

    Lewinski, P.; Fransen, M.L.; Tan, E.S.H.

    2014-01-01

    We present a psychophysiological study of facial expressions of happiness (FEH) produced by advertisements using the FaceReader system (Noldus, 2013) for automatic analysis of facial expressions of basic emotions (FEBE; Ekman, 1972). FaceReader scores were associated with self-reports of the

  1. The Relationship between Processing Facial Identity and Emotional Expression in 8-Month-Old Infants

    Science.gov (United States)

    Schwarzer, Gudrun; Jovanovic, Bianca

    2010-01-01

    In Experiment 1, it was investigated whether infants process facial identity and emotional expression independently or in conjunction with one another. Eight-month-old infants were habituated to two upright or two inverted faces varying in facial identity and emotional expression. Infants were tested with a habituation face, a switch face, and a…

  2. Evaluating Posed and Evoked Facial Expressions of Emotion from Adults with Autism Spectrum Disorder

    Science.gov (United States)

    Faso, Daniel J.; Sasson, Noah J.; Pinkham, Amy E.

    2015-01-01

    Though many studies have examined facial affect perception by individuals with autism spectrum disorder (ASD), little research has investigated how facial expressivity in ASD is perceived by others. Here, naïve female observers (n = 38) judged the intensity, naturalness and emotional category of expressions produced by adults with ASD (n = 6) and…

  3. Web-based Visualisation of Head Pose and Facial Expressions Changes:

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Vidakis, Nikolaos; Triantafyllidis, Georgios

    2016-01-01

    and accurately estimate head pose changes in an unconstrained environment. In order to complete the secondary process of recognising four universal dominant facial expressions (happiness, anger, sadness and surprise), emotion recognition via facial expressions (ERFE) was adopted. After that, a lightweight data...

  4. Mu desynchronization during observation and execution of facial expressions in 30-month-old children

    Directory of Open Access Journals (Sweden)

    Holly Rayson

    2016-06-01

    Full Text Available Simulation theories propose that observing another’s facial expression activates sensorimotor representations involved in the execution of that expression, facilitating recognition processes. The mirror neuron system (MNS) is a potential mechanism underlying simulation of facial expressions, with like neural processes activated both during observation and performance. Research with monkeys and adult humans supports this proposal, but so far there have been no investigations of facial MNS activity early in human development. The current study used electroencephalography (EEG) to explore mu rhythm desynchronization, an index of MNS activity, in 30-month-old children as they observed videos of dynamic emotional and non-emotional facial expressions, as well as scrambled versions of the same videos. We found significant mu desynchronization in central regions during observation and execution of both emotional and non-emotional facial expressions, which was right-lateralized for emotional and bilateral for non-emotional expressions during observation. These findings support previous research suggesting movement simulation during observation of facial expressions, and are the first to provide evidence for sensorimotor activation during observation of facial expressions, consistent with a functioning facial MNS at an early stage of human development.
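
    Mu desynchronization is conventionally quantified as a drop in mu-band EEG power during observation or execution relative to a baseline window. A minimal sketch, assuming a 6-9 Hz band (a common choice for young children) and synthetic signals in place of real EEG.

```python
# Sketch: mu desynchronization as a log ratio of band power in an
# event window versus baseline. Signals and band are illustrative.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    f, pxx = welch(signal, fs=fs, nperseg=fs)  # 1-s windows
    return pxx[(f >= lo) & (f <= hi)].mean()

fs = 500
rng = np.random.default_rng(5)
baseline = rng.normal(size=2 * fs)             # 2 s of baseline EEG
event = 0.7 * rng.normal(size=2 * fs)          # attenuated mu power

desync = np.log(band_power(event, fs, 6, 9) /
                band_power(baseline, fs, 6, 9))
print("mu desynchronization (log ratio):", round(desync, 2))
# Negative values indicate event-related desynchronization.
```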

  5. The BOLD signal in the amygdala does not differentiate between dynamic facial expressions

    NARCIS (Netherlands)

    van der Gaag, Christiaan; Minderaa, Ruud B.; Keysers, Christian

    The amygdala has been considered to be essential for recognizing fear in other people's facial expressions. Recent studies shed doubt on this interpretation. Here we used movies of facial expressions instead of static photographs to investigate the putative fear selectivity of the amygdala using

  6. Recognition of Facial Expressions and Prosodic Cues with Graded Emotional Intensities in Adults with Asperger Syndrome

    Science.gov (United States)

    Doi, Hirokazu; Fujisawa, Takashi X.; Kanai, Chieko; Ohta, Haruhisa; Yokoi, Hideki; Iwanami, Akira; Kato, Nobumasa; Shinohara, Kazuyuki

    2013-01-01

    This study investigated the ability of adults with Asperger syndrome to recognize emotional categories of facial expressions and emotional prosodies with graded emotional intensities. The individuals with Asperger syndrome showed poorer recognition performance for angry and sad expressions from both facial and vocal information. The group…

  7. Do Dynamic Facial Expressions Convey Emotions to Children Better than Do Static Ones?

    Science.gov (United States)

    Widen, Sherri C.; Russell, James A.

    2015-01-01

    Past research has shown that children recognize emotions from facial expressions poorly and improve only gradually with age, but the stimuli in such studies have been static faces. Because dynamic faces include more information, it may well be that children more readily recognize emotions from dynamic facial expressions. The current study of…

  8. Preschooler's Faces in Spontaneous Emotional Contexts--How Well Do They Match Adult Facial Expression Prototypes?

    Science.gov (United States)

    Gaspar, Augusta; Esteves, Francisco G.

    2012-01-01

    Prototypical facial expressions of emotion, also known as universal facial expressions, are the underpinnings of most research concerning recognition of emotions in both adults and children. Data on natural occurrences of these prototypes in natural emotional contexts are rare and difficult to obtain in adults. By recording naturalistic…

  9. Effectiveness of Teaching Naming Facial Expression to Children with Autism via Video Modeling

    Science.gov (United States)

    Akmanoglu, Nurgul

    2015-01-01

    This study aims to examine the effectiveness of teaching naming emotional facial expression via video modeling to children with autism. Teaching the naming of emotions (happy, sad, scared, disgusted, surprised, feeling physical pain, and bored) was made by creating situations that lead to the emergence of facial expressions to children…

  10. Does Gaze Direction Modulate Facial Expression Processing in Children with Autism Spectrum Disorder?

    Science.gov (United States)

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…

  11. Unobtrusive multimodal emotion detection in adaptive interfaces: speech and facial expressions

    NARCIS (Netherlands)

    Truong, K.P.; Leeuwen, D.A. van; Neerincx, M.A.

    2007-01-01

    Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, an overview is given of emotion recognition studies based on a combination of speech and facial expressions. We will identify difficulties concerning data collection, data fusion, system

  12. Spontaneous Facial Expressions in Congenitally Blind and Sighted Children Aged 8-11.

    Science.gov (United States)

    Galati, Dario; Sini, Barbara; Schmidt, Susanne; Tinti, Carla

    2003-01-01

    This study found that the emotional facial expressions of 10 congenitally blind and 10 sighted children, ages 8-11, were similar. However, the frequency of certain facial movements was higher in the blind children than in the sighted children, and social influences were evident only in the expressions of the sighted children, who often masked…

  13. Brief Report: Representational Momentum for Dynamic Facial Expressions in Pervasive Developmental Disorder

    Science.gov (United States)

    Uono, Shota; Sato, Wataru; Toichi, Motomi

    2010-01-01

    Individuals with pervasive developmental disorder (PDD) have difficulty with social communication via emotional facial expressions, but behavioral studies involving static images have reported inconsistent findings about emotion recognition. We investigated whether dynamic presentation of facial expression would enhance subjective perception of…

  14. Reduced capacity in automatic processing of facial expression in restrictive anorexia nervosa and obesity

    NARCIS (Netherlands)

    Cserjesi, Renata; Vermeulen, Nicolas; Lenard, Laszlo; Luminet, Olivier

    2011-01-01

    There is growing evidence that disordered eating is associated with facial expression recognition and emotion processing problems. In this study, we investigated the question of whether anorexia and obesity occur on a continuum of attention bias towards negative facial expressions in comparison with

  15. Can Neurotypical Individuals Read Autistic Facial Expressions? Atypical Production of Emotional Facial Expressions in Autism Spectrum Disorders.

    Science.gov (United States)

    Brewer, Rebecca; Biotti, Federica; Catmur, Caroline; Press, Clare; Happé, Francesca; Cook, Richard; Bird, Geoffrey

    2016-02-01

    The difficulties encountered by individuals with autism spectrum disorder (ASD) when interacting with neurotypical (NT, i.e. nonautistic) individuals are usually attributed to failure to recognize the emotions and mental states of their NT interaction partner. It is also possible, however, that at least some of the difficulty is due to a failure of NT individuals to read the mental and emotional states of ASD interaction partners. Previous research has frequently observed deficits of typical facial emotion recognition in individuals with ASD, suggesting atypical representations of emotional expressions. Relatively little research, however, has investigated the ability of individuals with ASD to produce recognizable emotional expressions, and thus, whether NT individuals can recognize autistic emotional expressions. The few studies which have investigated this have used only NT observers, making it impossible to determine whether atypical representations are shared among individuals with ASD, or idiosyncratic. This study investigated NT and ASD participants' ability to recognize emotional expressions produced by NT and ASD posers. Three posing conditions were included, to determine whether potential group differences are due to atypical cognitive representations of emotion, impaired understanding of the communicative value of expressions, or poor proprioceptive feedback. Results indicated that ASD expressions were recognized less well than NT expressions, and that this is likely due to a genuine deficit in the representation of typical emotional expressions in this population. Further, ASD expressions were equally poorly recognized by NT individuals and those with ASD, implicating idiosyncratic, rather than common, atypical representations of emotional expressions in ASD. © 2015 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research.

  16. Convolutional neural networks with balanced batches for facial expressions recognition

    Science.gov (United States)

    Battini Sönmez, Elena; Cangelosi, Angelo

    2017-03-01

    This paper considers the issue of fully automatic emotion classification from 2D face images. In spite of the great effort made in recent years, traditional machine learning approaches based on hand-crafted feature extraction followed by a classification stage have failed to deliver a real-time automatic facial expression recognition system. The proposed architecture uses Convolutional Neural Networks (CNN), which are built as a collection of interconnected processing elements loosely modeled on the human brain. The basic idea of CNNs is to learn a hierarchical representation of the input data, which results in better classification performance. In this work we present a block-based CNN algorithm that uses noise as a data augmentation technique and builds batches with a balanced number of samples per class. The proposed architecture is a simple yet powerful CNN that can yield state-of-the-art accuracy on the very competitive Extended Cohn-Kanade database benchmark.
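
    The two training ingredients named above, noise as data augmentation and batches with a balanced number of samples per class, can be combined in a simple generator. A framework-agnostic sketch follows, with arrays standing in for face images.

```python
# Sketch: a generator yielding class-balanced batches with additive
# Gaussian noise as augmentation. Data shapes are illustrative.
import numpy as np

def balanced_noisy_batches(X, y, per_class, noise_std, rng):
    classes = np.unique(y)
    idx_by_class = {c: np.flatnonzero(y == c) for c in classes}
    while True:
        picks = np.concatenate([rng.choice(idx_by_class[c], per_class)
                                for c in classes])
        rng.shuffle(picks)
        noise = rng.normal(0.0, noise_std, size=X[picks].shape)
        yield X[picks] + noise, y[picks]   # augmented balanced batch

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 48, 48))         # e.g. 48x48 face crops
y = rng.integers(0, 7, size=700)           # 7 expression classes
batches = balanced_noisy_batches(X, y, per_class=4, noise_std=0.1, rng=rng)
xb, yb = next(batches)
print(xb.shape, np.bincount(yb))           # (28, 48, 48) [4 4 4 4 4 4 4]
```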

  17. Paedomorphic facial expressions give dogs a selective advantage.

    Directory of Open Access Journals (Sweden)

    Bridget M Waller

    Full Text Available How wolves were first domesticated is unknown. One hypothesis suggests that wolves underwent a process of self-domestication by tolerating human presence and taking advantage of scavenging possibilities. The puppy-like physical and behavioural traits seen in dogs are thought to have evolved later, as a byproduct of selection against aggression. Using speed of selection from rehoming shelters as a proxy for artificial selection, we tested whether paedomorphic features give dogs a selective advantage in their current environment. Dogs who exhibited facial expressions that enhance their neonatal appearance were preferentially selected by humans. Thus, early domestication of wolves may have occurred not only as wolf populations became tamer, but also as they exploited human preferences for paedomorphic characteristics. These findings, therefore, add to our understanding of early dog domestication as a complex co-evolutionary process.

  18. Psychopathic traits affect the visual exploration of facial expressions.

    Science.gov (United States)

    Boll, Sabrina; Gamer, Matthias

    2016-05-01

    Deficits in emotional reactivity and recognition have been reported in psychopathy. Impaired attention to the eyes along with amygdala malfunctions may underlie these problems. Here, we investigated how different facets of psychopathy modulate the visual exploration of facial expressions by assessing personality traits in a sample of healthy young adults using an eye-tracking based face perception task. Fearless Dominance (the interpersonal-emotional facet of psychopathy) and Coldheartedness scores predicted reduced face exploration consistent with findings on lowered emotional reactivity in psychopathy. Moreover, participants high on the social deviance facet of psychopathy ('Self-Centered Impulsivity') showed a reduced bias to shift attention towards the eyes. Our data suggest that facets of psychopathy modulate face processing in healthy individuals and reveal possible attentional mechanisms which might be responsible for the severe impairments of social perception and behavior observed in psychopathy. Copyright © 2016 Elsevier B.V. All rights reserved.

  19. Emotional Verbalization and Identification of Facial Expressions in Teenagers’ Communication

    Directory of Open Access Journals (Sweden)

    I. S. Ivanova

    2013-01-01

    Full Text Available The paper emphasizes the need to study the subjective criteria of effective interpersonal communication and the importance of effective communication for personality development in adolescence. The problem of the underdeveloped representation of positive emotions in the communication process is discussed. Both the identification and the verbalization of emotions are regarded by the author as basic communication skills. Longitudinal and age-level experimental data are described, and gender differences in the identification and verbalization of emotions are considered. The outcomes of the experimental study demonstrate that the accuracy with which teenage boys and girls identify and verbalize facial emotional expressions changes at different rates. The prospects of defining age norms for the identification and verbalization of emotions are analyzed.

  20. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults.

    Science.gov (United States)

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development: the Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions (angry, fearful, sad, happy, surprised, and disgusted) and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  1. A spatiotemporal feature-based approach for facial expression recognition from depth video

    Science.gov (United States)

    Uddin, Md. Zia

    2015-07-01

    In this paper, a novel spatiotemporal feature-based method is proposed to recognize facial expressions from depth video. Independent Component Analysis (ICA) spatial features of the depth faces of facial expressions are first augmented with optical flow motion features. The augmented features are then enhanced by Fisher Linear Discriminant Analysis (FLDA) to make them robust. Finally, the features are fed to Hidden Markov Models (HMMs) to model the different facial expressions, which are later used to recognize the appropriate expression in a test depth video. The experimental results show superior performance of the proposed approach over conventional methods.
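
    A schematic sketch of this pipeline follows, assuming the scikit-learn and hmmlearn libraries; the array shapes and component counts are invented, and the optical-flow features are taken as precomputed, so none of this reflects the authors' actual parameters:

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from hmmlearn.hmm import GaussianHMM

        # Hypothetical training data: vectorized depth faces plus motion features
        # (e.g., from optical flow) for the same frames.
        depth_frames = np.random.rand(200, 1024)   # 32x32 depth faces, flattened
        flow_features = np.random.rand(200, 128)
        labels = np.repeat(np.arange(4), 50)       # 4 expression classes

        # 1) ICA spatial features of the depth faces, augmented with motion features.
        ica = FastICA(n_components=64, random_state=0)
        augmented = np.hstack([ica.fit_transform(depth_frames), flow_features])

        # 2) FLDA to sharpen class separability of the augmented features.
        flda = LinearDiscriminantAnalysis(n_components=3)
        enhanced = flda.fit_transform(augmented, labels)

        # 3) One HMM per expression; a test sequence is assigned to the expression
        #    whose HMM yields the highest log-likelihood.
        models = {}
        for c in np.unique(labels):
            m = GaussianHMM(n_components=3, covariance_type="diag", random_state=0)
            m.fit(enhanced[labels == c])
            models[c] = m

        test_seq = enhanced[:50]  # stand-in for an unseen expression sequence
        pred = max(models, key=lambda c: models[c].score(test_seq))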

  2. Faces and bodies: perception and mimicry of emotionally congruent and incongruent facial and bodily expressions

    Directory of Open Access Journals (Sweden)

    Mariska eKret

    2013-02-01

    Full Text Available Traditional emotion theories stress the importance of the face in the expression of emotions, but bodily expressions are becoming increasingly important. Here we tested the hypothesis that similar physiological responses can be evoked by observing emotional face and body signals, and that the reaction to angry signals is amplified in anxious individuals. We designed three experiments in which participants categorized emotional expressions from isolated facial and bodily expressions and from emotionally congruent and incongruent face-body compounds. Participants' fixations were measured and their pupil size recorded with eye-tracking equipment, and their facial reactions measured with electromyography (EMG). The behavioral results support our prediction that the recognition of a facial expression is improved in the context of a matching posture and, importantly, also vice versa. From their facial reactions, it appeared that observers responded with signs of negative emotionality (increased corrugator activity) to angry and fearful facial expressions and with positive emotionality (increased zygomaticus activity) to happy facial expressions. As we predicted and found, angry and fearful cues from the face or the body attracted more attention than happy cues. We further observed that responses evoked by angry cues were amplified in individuals with high anxiety scores. In sum, we show that people process bodily expressions of emotion in a similar fashion to facial expressions and that congruency between the emotional signals from the face and body improves recognition of the emotion.

  3. Genomic Analysis Reveals Contrasting PIFq Contribution to Diurnal Rhythmic Gene Expression in PIF-Induced and -Repressed Genes

    Science.gov (United States)

    Martin, Guiomar; Soy, Judit; Monte, Elena

    2016-01-01

    Members of the PIF quartet (PIFq; PIF1, PIF3, PIF4, and PIF5) collectively contribute to induce growth in Arabidopsis seedlings under short day (SD) conditions, specifically promoting elongation at dawn. Their action involves the direct regulation of growth-related and hormone-associated genes. However, a comprehensive definition of the PIFq-regulated transcriptome under SD is still lacking. We have recently shown that SD and free-running (LL) conditions correspond to “growth” and “no growth” conditions, respectively, correlating with greater abundance of PIF protein in SD. Here, we present a genomic analysis whereby we first define SD-regulated genes at dawn compared to LL in the wild type, followed by identification of those SD-regulated genes whose expression depends on the presence of PIFq. By using this sequential strategy, we have identified 349 PIF/SD-regulated genes, approximately 55% induced and 42% repressed by both SD and PIFq. Comparison with available databases indicates that PIF/SD-induced and PIF/SD-repressed sets are differently phased at dawn and mid-morning, respectively. In addition, we found that whereas rhythmicity of the PIF/SD-induced gene set is lost in LL, most PIF/SD-repressed genes keep their rhythmicity in LL, suggesting differential regulation of both gene sets by the circadian clock. Moreover, we also uncovered distinct overrepresented functions in the induced and repressed gene sets, in accord with previous studies in other examined PIF-regulated processes. Interestingly, promoter analyses showed that, whereas PIF/SD-induced genes are enriched in direct PIF targets, PIF/SD-repressed genes are mostly indirectly regulated by the PIFs and might be more enriched in ABA-regulated genes. PMID:27458465

  4. Sex differences in facial emotion recognition across varying expression intensity levels from videos.

    Science.gov (United States)

    Wingenbach, Tanja S H; Ashwin, Chris; Brosnan, Mark

    2018-01-01

    There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or 'extreme' examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1 s) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations.
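
    For readers unfamiliar with Hu: Wagner's (1993) unbiased hit rate combines how often an emotion is correctly identified with how often that response category is used at all, guarding against response bias. A minimal sketch of the computation, with invented confusion-matrix counts for illustration:

        import numpy as np

        # Rows = presented emotion, columns = response (hypothetical counts).
        confusion = np.array([[18,  2,  0],
                              [ 4, 14,  2],
                              [ 1,  3, 16]])

        hits = np.diag(confusion).astype(float)
        stimulus_totals = confusion.sum(axis=1)   # times each emotion was shown
        response_totals = confusion.sum(axis=0)   # times each label was used

        # Unbiased hit rate: squared hits divided by the product of stimulus
        # frequency and response frequency; bounded in [0, 1].
        hu = hits**2 / (stimulus_totals * response_totals)
        print(hu)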

  5. Sex differences in facial emotion recognition across varying expression intensity levels from videos

    Science.gov (United States)

    2018-01-01

    There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most of the research to date has used static images and/or ‘extreme’ examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short video stimuli (1 s) of 10 different facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) across three variations in the intensity of the emotional expression (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females showed more accurate facial emotion recognition compared to males and were faster in correctly recognising facial emotions. The female advantage in reading expressions from the faces of others was unaffected by expression intensity levels and emotion categories used in the study. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results showed a robust sex difference favouring females in facial emotion recognition using video stimuli of a wide range of emotions and expression intensity variations. PMID:29293674

  6. Can we distinguish emotions from faces? Investigation of implicit and explicit processes of peak facial expressions

    Directory of Open Access Journals (Sweden)

    Yanmei Wang

    2016-08-01

    Full Text Available Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used images of athletes' transient peak-intensity expressions at the winning or losing point of a competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds while their eye movements were recorded. The results revealed that isolated bodies and face-body congruent images were better recognized than isolated faces and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, the eye movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. The results of Experiment 2B showed that reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes

  7. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions.

    Science.gov (United States)

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on moderate emotions; to date, few studies have investigated the explicit and implicit processing of peak emotions. In the current study, we used images of athletes' transient peak-intensity expressions at the winning or losing point of a competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and face-body compounds while their eye movements were recorded. The results revealed that isolated bodies and face-body congruent images were better recognized than isolated faces and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous and that body cues influenced facial emotion recognition. Furthermore, the eye movement records showed that participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, a subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate unconscious emotion perception of peak facial expressions. The results showed that a winning face prime facilitated reactions to a winning body target, whereas a losing face prime inhibited reactions to a winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, a revised subliminal affective priming task and a strict awareness test were used to examine the validity of the unconscious perception of peak facial expressions found in Experiment 2A. The results of Experiment 2B showed that reaction times to both winning and losing body targets were influenced by the invisible peak facial expression primes, which indicated the

  8. Electromyographic Responses to Emotional Facial Expressions in 6-7 Year Olds with Autism Spectrum Disorders

    Science.gov (United States)

    Deschamps, P. K. H.; Coppes, L.; Kenemans, J. L.; Schutter, D. J. L. G.; Matthys, W.

    2015-01-01

    This study aimed to examine facial mimicry in 6-7 year old children with autism spectrum disorder (ASD) and to explore whether facial mimicry was related to the severity of impairment in social responsiveness. Facial electromyographic activity in response to angry, fearful, sad and happy facial expressions was recorded in twenty 6-7 year old…

  9. Empathy, but not mimicry restriction, influences the recognition of change in emotional facial expressions.

    Science.gov (United States)

    Kosonogov, Vladimir; Titova, Alisa; Vorobyeva, Elena

    2015-01-01

    The current study addressed the hypothesis that empathy and the restriction of observers' facial muscles can influence recognition of emotional facial expressions. A sample of 74 participants recognized the subjective onset of emotional facial expressions (anger, disgust, fear, happiness, sadness, surprise, and neutral) in a series of morphed face photographs showing a gradual change (frame by frame) from one expression to another. High-empathy participants (as measured by the Empathy Quotient) recognized emotional facial expressions at earlier photographs in the series than did low-empathy participants, but there was no difference in exploration time. Restriction of the observers' facial muscles (with plasters and a stick held in the mouth) did not influence the responses. We discuss these findings in the context of the embodied simulation theory and previous data on empathy.

  10. The Child Affective Facial Expression (CAFE) Set: Validity and Reliability from Untrained Adults

    Directory of Open Access Journals (Sweden)

    Vanessa eLoBue

    2015-01-01

    Full Text Available Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development: the Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions (angry, fearful, sad, happy, surprised, and disgusted) and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  11. Misinterpretation of Facial Expressions of Emotion in Verbal Adults with Autism Spectrum Disorder

    OpenAIRE

    Eack, Shaun M.; MAZEFSKY, CARLA A.; Minshew, Nancy J.

    2014-01-01

    Facial emotion perception is significantly affected in autism spectrum disorder (ASD), yet little is known about how individuals with ASD misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with ASD and 30 age- and gender-matched volunteers without ASD to identify patterns of emotion misinterpretation during face processing that contribute to emotion recognition impairment...

  12. The amygdalo-motor pathway and the control of facial expressions

    Directory of Open Access Journals (Sweden)

    Katalin M Gothard

    2014-03-01

    Full Text Available Facial expressions reflect decisions about the perceived meaning of social stimuli emitted by others and the expected socio-emotional outcome of the reciprocating expression. The decision to produce a facial expression emerges from the joint activity of a network of structures that include the amygdala and multiple, interconnected cortical and subcortical motor areas. Reciprocal transformations between sensory and motor signals give rise to distinct brain states that promote or impede the production of facial expressions. The muscles of the upper and lower face are controlled by anatomically distinct motor areas and thus require distinct patterns of motor commands. Concomitantly, multiple areas, including the amygdala, monitor the ongoing overt behavior (the expressions of self) and the covert, autonomic responses that accompany emotional expressions. Interoceptive signals and visceral states, therefore, should be incorporated into the formalisms of decision making in order to account for the decisions that govern the receiving-emitting cycle of facial expressions.

  13. Effect of facial expressions on student's comprehension recognition in virtual educational environments.

    Science.gov (United States)

    Sathik, Mohamed; Jonathan, Sofia G

    2013-01-01

    The scope of this research is to examine whether students' facial expressions are a tool for the lecturer to interpret the comprehension level of students in a virtual classroom, and to identify the impact of facial expressions during a lecture and the level of comprehension shown by these expressions. Our goal is to identify physical behaviours of the face that are linked to emotional states, and then to identify how these emotional states are linked to students' comprehension. In this work, the effectiveness of students' facial expressions in non-verbal communication in a virtual pedagogical environment was investigated first. Next, the specific elements of learner behaviour for the different emotional states and the relevant facial expressions signaled by the action units were interpreted. Finally, the work focused on finding the impact of the relevant facial expressions on students' comprehension. Experimentation was done through a survey involving quantitative observations by lecturers in the classroom, in which the behaviours of students were recorded and statistically analyzed. The results show that facial expression is the nonverbal communication mode most frequently used by students in the virtual classroom, and that students' facial expressions are significantly correlated with their emotions, which helps to recognize their comprehension of the lecture.

  14. Realistic Facial Expression of Virtual Human Based on Color, Sweat, and Tears Effects

    Directory of Open Access Journals (Sweden)

    Mohammed Hazim Alkawaz

    2014-01-01

    Full Text Available Generating extreme appearances such as sweating when scared, tears when crying, and blushing with anger or happiness is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and color are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles and emotions, and the fluid properties with sweating and tear initiators, are incorporated. The action units (AUs) of the Facial Action Coding System are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on the facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that the virtual human's facial expressions are enhanced by mimicking actual sweating and tears for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries, as well as computer graphics.

  15. Realistic facial expression of virtual human based on color, sweat, and tears effects.

    Science.gov (United States)

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances such as sweating when scared, tears when crying, and blushing with anger or happiness is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and color are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles and emotions, and the fluid properties with sweating and tear initiators, are incorporated. The action units (AUs) of the Facial Action Coding System are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on the facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that the virtual human's facial expressions are enhanced by mimicking actual sweating and tears for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries, as well as computer graphics.

  16. Realistic Facial Expression of Virtual Human Based on Color, Sweat, and Tears Effects

    Science.gov (United States)

    Alkawaz, Mohammed Hazim; Basori, Ahmad Hoirul; Mohamad, Dzulkifli; Mohamed, Farhan

    2014-01-01

    Generating extreme appearances such as sweating when scared, tears when crying, and blushing with anger or happiness is a key issue in achieving high-quality facial animation. The effects of sweat, tears, and color are integrated into a single animation model to create realistic facial expressions for a 3D avatar. The physical properties of muscles and emotions, and the fluid properties with sweating and tear initiators, are incorporated. The action units (AUs) of the Facial Action Coding System are merged with autonomous AUs to create expressions including sadness, anger with blushing, happiness with blushing, and fear. Fluid effects such as sweat and tears are simulated using the particle system and smoothed-particle hydrodynamics (SPH) methods, which are combined with a facial animation technique to produce complex facial expressions. The effects of oxygenation on the facial skin color appearance are measured using a pulse oximeter system and a 3D skin analyzer. The results show that the virtual human's facial expressions are enhanced by mimicking actual sweating and tears for all extreme expressions. The proposed method contributes to the development of the facial animation and game industries, as well as computer graphics. PMID:25136663

  17. Facial identity and emotional expression as predictors during economic decisions.

    Science.gov (United States)

    Alguacil, Sonia; Madrid, Eduardo; Espín, Antonio M; Ruz, María

    2017-04-01

    Two sources of information most relevant to guide social decision making are the cooperative tendencies associated with different people and their facial emotional displays. This electrophysiological experiment aimed to study how the use of personal identity and emotional expressions as cues impacts different stages of face processing and their potential isolated or interactive processing. Participants played a modified trust game with 8 different alleged partners, and in separate blocks either the identity or the emotions carried information regarding potential trial outcomes (win or loss). Behaviorally, participants were faster to make decisions based on identity compared to emotional expressions. Also, ignored (nonpredictive) emotions interfered with decisions based on identity in trials where these sources of information conflicted. Electrophysiological results showed that expectations based on emotions modulated processing earlier in time than those based on identity. Whereas emotion modulated the central N1 and VPP potentials, identity judgments heightened the amplitude of the N2 and P3b. In addition, the conflict that ignored emotions generated was reflected on the N170 and P3b potentials. Overall, our results indicate that using identity or emotional cues to predict cooperation tendencies recruits dissociable neural circuits from an early point in time, and that both sources of information generate early and late interactive patterns.

  18. [Emotional intelligence and oscillatory responses on the emotional facial expressions].

    Science.gov (United States)

    Kniazev, G G; Mitrofanova, L G; Bocharov, A V

    2013-01-01

    Emotional intelligence-related differences in oscillatory responses to emotional facial expressions were investigated in 48 subjects (26 men and 22 women) aged 18-30 years. Participants were instructed to evaluate the emotional expression (angry, happy, or neutral) of each presented face on an analog scale ranging from -100 (very hostile) to +100 (very friendly). High emotional intelligence (EI) participants were found to be more sensitive to the emotional content of the stimuli. This showed up both in their subjective evaluation of the stimuli and in a stronger EEG theta synchronization at an earlier (between 100 and 500 ms after face presentation) processing stage. Source localization using sLORETA showed that this effect was localized in the fusiform gyrus upon the presentation of angry faces and in the posterior cingulate gyrus upon the presentation of happy faces. At a later processing stage (500-870 ms), event-related theta synchronization in high emotional intelligence subjects was higher in the left prefrontal cortex upon the presentation of happy faces, but lower in the anterior cingulate cortex upon the presentation of angry faces. This suggests the existence of a mechanism that can selectively increase positive emotions and reduce negative emotions.

  19. Dissociable roles of internal feelings and face recognition ability in facial expression decoding.

    Science.gov (United States)

    Zhang, Lin; Song, Yiying; Liu, Ling; Liu, Jia

    2016-05-15

    The problem of emotion recognition has been tackled by researchers in both affective computing and cognitive neuroscience. While affective computing relies on analyzing visual features from facial expressions, it has been proposed that humans recognize emotions by internally simulating the emotional states conveyed by others' expressions, in addition to perceptual analysis of facial features. Here we investigated whether and how our internal feelings contributed to the ability to decode facial expressions. In two independent large samples of participants, we observed that individuals who generally experienced richer internal feelings exhibited a higher ability to decode facial expressions, and the contribution of internal feelings was independent of face recognition ability. Further, using voxel-based morphometry, we found that the gray matter volume (GMV) of bilateral superior temporal sulcus (STS) and the right inferior parietal lobule was associated with facial expression decoding through the mediating effect of internal feelings, while the GMV of bilateral STS, precuneus, and the right central opercular cortex contributed to facial expression decoding through the mediating effect of face recognition ability. In addition, the clusters in bilateral STS involved in the two components were neighboring yet separate. Our results may provide clues about the mechanism by which internal feelings, in addition to face recognition ability, serve as an important instrument for humans in facial expression decoding. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. [Association between intelligence development and facial expression recognition ability in children with autism spectrum disorder].

    Science.gov (United States)

    Pan, Ning; Wu, Gui-Hua; Zhang, Ling; Zhao, Ya-Fen; Guan, Han; Xu, Cai-Juan; Jing, Jin; Jin, Yu

    2017-03-01

    To investigate the features of intelligence development and facial expression recognition ability, and the association between them, in children with autism spectrum disorder (ASD). A total of 27 ASD children aged 6-16 years (ASD group, full intelligence quotient >70) and age- and gender-matched normally developed children (control group) were enrolled. The Wechsler Intelligence Scale for Children, Fourth Edition, and Chinese Static Facial Expression Photos were used for the intelligence evaluation and the facial expression recognition test. Compared with the control group, the ASD group had significantly lower scores for full intelligence quotient, verbal comprehension index, perceptual reasoning index (PRI), processing speed index (PSI), and working memory index (WMI), showing delayed intelligence development compared with normally developed children and impaired expression recognition ability. Perceptual reasoning and working memory abilities were positively correlated with expression recognition ability, which suggests that insufficient perceptual reasoning and working memory abilities may be important factors affecting facial expression recognition ability in ASD children.

  1. Adaptive Face Coding Contributes to Individual Differences in Facial Expression Recognition Independently of Affective Factors.

    Science.gov (United States)

    Palermo, Romina; Jeffery, Linda; Lewandowsky, Jessica; Fiorentini, Chiara; Irons, Jessica L; Dawel, Amy; Burton, Nichola; McKone, Elinor; Rhodes, Gillian

    2017-08-21

    There are large, reliable individual differences in the recognition of facial expressions of emotion across the general population. The sources of this variation are not yet known. We investigated the contribution of a key face perception mechanism, adaptive coding, which calibrates perception to optimize discrimination within the current perceptual "diet." We expected that a facial expression system that readily recalibrates might boost sensitivity to variation among facial expressions, thereby enhancing recognition ability. We measured adaptive coding strength with an established facial expression aftereffect task and measured facial expression recognition ability with 3 tasks optimized for the assessment of individual differences. As expected, expression recognition ability was positively associated with the strength of facial expression aftereffects. We also asked whether individual variation in affective factors might contribute to expression recognition ability, given that clinical levels of such traits have previously been linked to ability. Expression recognition ability was negatively associated with self-reported anxiety but not with depression, mood, or degree of autism-like or empathetic traits. Finally, we showed that the perceptual factor of adaptive coding contributes to variation in expression recognition ability independently of affective factors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Rules versus Prototype Matching: Strategies of Perception of Emotional Facial Expressions in the Autism Spectrum

    Science.gov (United States)

    Rutherford, M. D.; McIntosh, Daniel N.

    2007-01-01

    When perceiving emotional facial expressions, people with autistic spectrum disorders (ASD) appear to focus on individual facial features rather than configurations. This paper tests whether individuals with ASD use these features in a rule-based strategy of emotional perception, rather than a typical, template-based strategy by considering…

  3. Multi-output Laplacian Dynamic Ordinal Regression for Facial Expression Recognition and Intensity Estimation

    NARCIS (Netherlands)

    Rudovic, Ognjen; Pavlovic, Vladimir; Pantic, Maja

    2012-01-01

    Automated facial expression recognition has received increased attention over the past two decades. Existing works in the field usually do not encode either the temporal evolution or the intensity of the observed facial displays. They also fail to jointly model multidimensional (multi-class)

  4. Recognition of Facially Expressed Emotions and Visual Search Strategies in Adults with Asperger Syndrome

    Science.gov (United States)

    Falkmer, Marita; Bjallmark, Anna; Larsson, Matilda; Falkmer, Torbjorn

    2011-01-01

    Can the disadvantages persons with Asperger syndrome frequently experience with reading facially expressed emotions be attributed to a different visual perception, affecting their scanning patterns? Visual search strategies, particularly regarding the importance of information from the eye area, and the ability to recognise facially expressed…

  5. 3D Face Model Dataset: Automatic Detection of Facial Expressions and Emotions for Educational Environments

    Science.gov (United States)

    Chickerur, Satyadhyan; Joshi, Kartik

    2015-01-01

    Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using facial expressions of students in educational environment is useful in providing insight into the effectiveness of tutoring…

  6. Facial Pain Expression in Dementia: A Review of the Experimental and Clinical Evidence

    NARCIS (Netherlands)

    Lautenbacher, Stefan; Kunz, Miriam

    2017-01-01

    The analysis of the facial expression of pain promises to be one of the most sensitive tools for the detection of pain in patients with moderate to severe forms of dementia, who can no longer self-report pain. Fine-grain analysis using the Facial Action Coding System (FACS) is possible in research

  7. The relation of expression recognition and affective experience in facial expression processing: an event-related potential study

    Directory of Open Access Journals (Sweden)

    Guangheng Dong

    2010-04-01

    Full Text Available Guangheng Dong (1), Shenglan Lu (2); (1) Department of Psychology, (2) Department of International Education, Zhejiang Normal University, Jinhua, China. Abstract: The present study investigates the relationship between expression recognition and affective experience during facial expression processing using event-related potentials (ERPs). The facial expressions used in the present study can be divided into three categories: positive (happy), neutral (neutral), and negative (angry). Participants were asked to finish two kinds of facial recognition tasks: one easy, the other difficult. In the easy task, significant main effects were found for the different valence conditions, meaning that emotions were evoked effectively when participants recognized the expressions during facial expression processing. However, no difference was found in the difficult task, meaning that even when participants had identified the expressions correctly, no relevant emotion was evoked during the process. The findings suggest that emotional experience is not simultaneous with expression identification in facial expression processing, and that the affective experience process can be suppressed in challenging cognitive tasks. The results indicate that we should pay attention to the level of cognitive load when using facial expressions as emotion-eliciting materials in emotion studies; otherwise, the emotion may not be evoked effectively. Keywords: affective experience, expression recognition, cognitive load, event-related potential

  8. A wearable device for emotional recognition using facial expression and physiological response.

    Science.gov (United States)

    Jangho Kwon; Da-Hye Kim; Wanjoo Park; Laehyun Kim

    2016-08-01

    This paper introduces a glasses-type wearable system to detect the user's emotions using facial expression and physiological responses. The system is designed to acquire facial expression through a built-in camera, and physiological responses such as photoplethysmogram (PPG) and electrodermal activity (EDA), in an unobtrusive way. We used emotion-inducing video clips to test the system's suitability in the experiment. The results showed several meaningful properties that associate emotions with the facial expressions and physiological responses captured by the wearable device. We expect that this wearable system with a built-in camera and physiological sensors may be a good solution for monitoring the user's emotional state in daily life.

  9. Effect of monochromatic light on circadian rhythmic expression of clock genes and arylalkylamine N-acetyltransferase in chick retina.

    Science.gov (United States)

    Cao, Jing; Bian, Jiang; Wang, Zixu; Dong, Yulan; Chen, Yaoxing

    2017-01-01

    Birds have a highly developed visual system: they not only detect light and darkness but also have color vision. A previous study showed that monochromatic light influences avian physiological processes that are controlled by clock genes, making the bird's eye a good model for studying the impact of light color on circadian rhythms. The avian retina is one of the most important central oscillators. This study was designed to investigate the effect of light color on the expression of clock genes and arylalkylamine N-acetyltransferase (Aanat) mRNA in the chick retina. A total of 240 post-hatching day (P) 0 broiler chickens were exposed to blue (BL), green (GL), red (RL), or white light (WL) from an LED system under a 12L:12D light-dark cycle for 14 d. The results show that significant daily variations existed in the expression of cBmal1, cBmal2, cCry1, cCry2, cPer2, and cPer3, but not cClock, under all four light treatments. The genes cBmal1, cCry1, cPer2, and cPer3 showed circadian rhythmic expression under the various monochromatic lights. Compared with WL, GL elevated the expression of positive regulators of the cellular clock (cBmal1, cBmal2, and cClock) and the cAanat mRNA level, whereas RL increased the mRNA levels of negative regulators of the cellular clock (cCry1, cCry2, cPer2, and cPer3) and decreased cAanat mRNA expression in the retina. These results demonstrate that monochromatic light affects the periodic expression of biological clock mRNAs through positive and negative feedback loop interactions: GL activated the transcription of cAanat, whereas RL suppressed it. Light color thereby regulates ocular cAanat expression by affecting the expression of cellular clock regulators.

  10. Effects of cultural characteristics on building an emotion classifier through facial expression analysis

    Science.gov (United States)

    da Silva, Flávio Altinier Maximiano; Pedrini, Helio

    2015-03-01

    Facial expressions are an important demonstration of human moods and emotions. Algorithms capable of recognizing facial expressions and associating them with emotions were developed and employed to compare the expressions that different cultural groups use to show their emotions. Static pictures of predominantly occidental and oriental subjects from public datasets were used to train machine learning algorithms, with local binary patterns, histograms of oriented gradients (HOG), and Gabor filters employed to describe the facial expressions for six different basic emotions. The most consistent combination, the association of the HOG descriptor with support vector machines, was then used to classify the other cultural group: there was a strong drop in accuracy, meaning that the subtle differences between the facial expressions of each culture affected classifier performance. Finally, a classifier was trained with images from both occidental and oriental subjects, and its accuracy was higher on multicultural data, evidencing the need for a multicultural training set to build an efficient classifier.
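
    A minimal sketch of the HOG-plus-SVM combination reported as most consistent, assuming scikit-image and scikit-learn with invented images, labels, and parameter settings; training on one cultural group and testing on the other (where the accuracy drop appeared) would follow the same pattern:

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score

        # Hypothetical face crops from one cultural group, 64x64 grayscale.
        faces = np.random.rand(120, 64, 64)
        emotions = np.random.randint(0, 6, size=120)  # six basic emotions

        # HOG descriptor for each face.
        X = np.array([hog(f, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2)) for f in faces])

        clf = SVC(kernel="linear")
        print(cross_val_score(clf, X, emotions, cv=5).mean())
        # Cross-cultural testing (fit on group A, predict on group B) is
        # where the reported drop in accuracy would show up.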

  11. Processing of individual items during ensemble coding of facial expressions

    Directory of Open Access Journals (Sweden)

    Huiyun Li

    2016-09-01

    Full Text Available There is growing evidence that human observers are able to extract the mean emotion or other types of information from a set of faces. The most intriguing aspect of this phenomenon is that observers often fail to identify or form a representation of individual faces in a face set. However, most of these results were based on judgments made under limited processing resources. We examined a wider range of exposure times and observed how the relationship between the extraction of a mean and the representation of individual facial expressions changed. The results showed that with an exposure time of 50 milliseconds, observers were more sensitive to the mean representation than to individual representations, replicating the typical findings in the literature. With longer exposure times, however, observers were able to extract both individual and mean representations more accurately. Furthermore, diffusion model analysis revealed that the mean representation is more prone to suffer from the noise accumulated over redundant processing time and leads to a more conservative decision bias, whereas individual representations seem more resistant to this noise. The results suggest that the encoding of emotional information from multiple faces may take two forms: single-face processing and crowd-face processing.

  12. A Modified Sparse Representation Method for Facial Expression Recognition

    Directory of Open Access Journals (Sweden)

    Wei Wang

    2016-01-01

    Full Text Available In this paper, we investigate a facial expression recognition method based on a modified sparse representation recognition (MSRR) approach. In the first stage, Haar-like features plus Locality Preserving Projections (LPP) are used to extract features and reduce dimensionality. In the second stage, we adopt the LC-K-SVD (Label Consistent K-SVD) method to train the dictionary, instead of building the dictionary directly from the samples, and add block dictionary training to the training process. In the third stage, stOMP (stagewise orthogonal matching pursuit) is used to speed up the convergence of OMP (orthogonal matching pursuit). In addition, a dynamic regularization factor is added to the iteration process to suppress noise and enhance accuracy. We verify the proposed method with respect to training samples, dimensionality, feature extraction and dimension reduction methods, and noise, on a self-built database, Japan's JAFFE database, and CMU's CK database. Further, we compare this sparse method with classic SVM and RVM classifiers and analyze the recognition accuracy and time efficiency. Simulation experiments show that the coefficients produced by the MSRR method carry class information, improving computing speed and achieving satisfying recognition results.
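
    To illustrate the sparse-representation decision rule that MSRR builds on, here is a minimal sketch using plain orthogonal matching pursuit from scikit-learn in place of LC-K-SVD dictionary learning and stOMP (both omitted for brevity); the dictionary size and class layout are invented:

        import numpy as np
        from sklearn.linear_model import OrthogonalMatchingPursuit

        # Hypothetical dictionary: columns are training feature vectors, grouped by class.
        rng = np.random.default_rng(0)
        D = rng.standard_normal((100, 60))          # 60 atoms, 20 per class
        D /= np.linalg.norm(D, axis=0)              # unit-norm atoms
        atom_class = np.repeat([0, 1, 2], 20)

        def src_classify(x, n_nonzero=10):
            """Sparse-representation classification: code x over D, then pick the
            class whose atoms alone reconstruct x with the smallest residual."""
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                            fit_intercept=False)
            omp.fit(D, x)
            coef = omp.coef_
            residuals = []
            for c in np.unique(atom_class):
                part = np.where(atom_class == c, coef, 0.0)  # keep class-c coefficients
                residuals.append(np.linalg.norm(x - D @ part))
            return int(np.argmin(residuals))

        x = D[:, 5] + 0.01 * rng.standard_normal(100)  # noisy sample from class 0
        print(src_classify(x))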

  13. Altered saccadic targets when processing facial expressions under different attentional and stimulus conditions.

    Science.gov (United States)

    Boutsen, Frank A; Dvorak, Justin D; Pulusu, Vinay K; Ross, Elliott D

    2017-04-01

    Depending on a subject's attentional bias, robust changes in emotional perception occur when facial blends (different emotions expressed on upper/lower face) are presented tachistoscopically. If no instructions are given, subjects overwhelmingly identify the lower facial expression when blends are presented to either visual field. If asked to attend to the upper face, subjects overwhelmingly identify the upper facial expression in the left visual field but remain slightly biased to the lower facial expression in the right visual field. The current investigation sought to determine whether differences in initial saccadic targets could help explain the perceptual biases described above. Ten subjects were presented with full and blend facial expressions under different attentional conditions. No saccadic differences were found for left versus right visual field presentations or for full facial versus blend stimuli. When asked to identify the presented emotion, saccades were directed to the lower face. When asked to attend to the upper face, saccades were directed to the upper face. When asked to attend to the upper face and try to identify the emotion, saccades were directed to the upper face but to a lesser degree. Thus, saccadic behavior supports the concept that there are cognitive-attentional pre-attunements when subjects visually process facial expressions. However, these pre-attunements do not fully explain the perceptual superiority of the left visual field for identifying the upper facial expression when facial blends are presented tachistoscopically. Hence other perceptual factors must be in play, such as the phenomenon of virtual scanning. Published by Elsevier Ltd.

  14. Experimental results of affective valence and arousal to avatar's facial expressions.

    Science.gov (United States)

    Ku, Jeonghun; Jang, Hee Jeong; Kim, Kwang Uk; Kim, Jae Hun; Park, Sung Hyouk; Lee, Jang Han; Kim, Jae Jin; Kim, In Y; Kim, Sun I

    2005-10-01

    The objectives of this study were to propose a method of presenting dynamic facial expressions to experimental subjects in order to investigate human perception of an avatar's facial expressions at different levels of emotional intensity. The investigation concerned how perception varies according to the strength of the facial expression, as well as according to the avatar's gender. To accomplish these goals, we generated a male and a female virtual avatar with five levels of intensity of happiness and anger using a morphing technique. We then recruited 16 normal healthy subjects and measured each subject's emotional reaction by scoring affective arousal and valence after showing them the avatar's face. Through this study, we were able to investigate the human perceptual characteristics evoked by male and female avatars' graduated facial expressions of happiness and anger. In addition, we were able to identify that a virtual avatar's facial expression can affect human emotion in different ways according to the avatar's gender and the intensity of its facial expressions. However, we could also see that virtual faces have some limitations: because they are not real, subjects recognized the expressions well but were not influenced to the same extent. Although a virtual avatar has some limitations in conveying emotion through facial expressions, this study is significant in that it shows a new potential to manipulate emotional intensity by controlling a virtual avatar's facial expression linearly using a morphing technique. It is therefore predicted that this technique may be used for assessing the emotional characteristics of humans, and may be of particular benefit for work with people with emotional disorders through the presentation of dynamic expressions of various emotional intensities.
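
    The graded-intensity idea can be illustrated with a simple linear blend between a neutral and a full-intensity image. This is only a sketch of the intensity control: true morphing, as used in the study, also warps facial geometry, and the arrays here are stand-ins:

        import numpy as np

        def intensity_morph(neutral, full_expression, levels=5):
            """Linear blends between a neutral face and a full-intensity expression;
            real morphing would additionally warp the facial geometry."""
            alphas = np.linspace(0.2, 1.0, levels)  # five intensity levels
            return [(1 - a) * neutral + a * full_expression for a in alphas]

        neutral = np.random.rand(128, 128)   # stand-in grayscale images
        happy = np.random.rand(128, 128)
        frames = intensity_morph(neutral, happy)
        print(len(frames), frames[0].shape)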

  15. Discriminability effect on Garner interference: evidence from recognition of facial identity and expression

    Directory of Open Access Journals (Sweden)

    Yamin eWang

    2013-12-01

    Full Text Available Using Garner's speeded classification task, existing studies have demonstrated an asymmetric interference in the recognition of facial identity and facial expression: expression seems hard-pressed to interfere with identity recognition. However, the discriminability of identity and expression, a potential confounding variable, had not been carefully examined in existing studies. In the current work, we manipulated the discriminability of identity and expression by matching facial shape (long or round) for identity and matching the mouth (open or closed) for facial expression. Garner interference was found either from identity to expression (Experiment 1) or from expression to identity (Experiment 2). Interference was also found in both directions (Experiment 3) or in neither direction (Experiment 4). The results support the view that Garner interference tends to occur under conditions of low discriminability of the relevant dimension, regardless of the facial property. Our findings indicate that Garner interference is not necessarily related to interdependent processing in the recognition of facial identity and expression. They also suggest that discriminability, as a mediating factor, should be carefully controlled in future research.

  16. Anodal tDCS targeting the right orbitofrontal cortex enhances facial expression recognition.

    Science.gov (United States)

    Willis, Megan L; Murphy, Jillian M; Ridley, Nicole J; Vercammen, Ans

    2015-12-01

    The orbitofrontal cortex (OFC) has been implicated in the capacity to accurately recognise facial expressions. The aim of the current study was to determine if anodal transcranial direct current stimulation (tDCS) targeting the right OFC in healthy adults would enhance facial expression recognition, compared with a sham condition. Across two counterbalanced sessions of tDCS (i.e. anodal and sham), 20 undergraduate participants (18 female) completed a facial expression labelling task comprising angry, disgusted, fearful, happy, sad and neutral expressions, and a control (social judgement) task comprising the same expressions. Responses on the labelling task were scored for accuracy, median reaction time and overall efficiency (i.e. combined accuracy and reaction time). Anodal tDCS targeting the right OFC enhanced facial expression recognition, reflected in greater efficiency and speed of recognition across emotions, relative to the sham condition. In contrast, there was no effect of tDCS to responses on the control task. This is the first study to demonstrate that anodal tDCS targeting the right OFC boosts facial expression recognition. This finding provides a solid foundation for future research to examine the efficacy of this technique as a means to treat facial expression recognition deficits, particularly in individuals with OFC damage or dysfunction. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  17. Cultural similarities and differences in perceiving and recognizing facial expressions of basic emotions.

    Science.gov (United States)

    Yan, Xiaoqian; Andrews, Timothy J; Young, Andrew W

    2016-03-01

    The ability to recognize facial expressions of basic emotions is often considered a universal human ability. However, recent studies have suggested that this commonality has been overestimated and that people from different cultures use different facial signals to represent expressions (Jack, Blais, Scheepers, Schyns, & Caldara, 2009; Jack, Caldara, & Schyns, 2012). We investigated this possibility by examining similarities and differences in the perception and categorization of facial expressions between Chinese and white British participants using whole-face and partial-face images. Our results showed no cultural difference in the patterns of perceptual similarity of expressions from whole-face images. When categorizing the same expressions, however, both British and Chinese participants were slightly more accurate with whole-face images of their own ethnic group. To further investigate potential strategy differences, we repeated the perceptual similarity and categorization tasks with presentation of only the upper or lower half of each face. Again, the perceptual similarity of facial expressions was similar between Chinese and British participants for both the upper and lower face regions. However, participants were slightly better at categorizing facial expressions of their own ethnic group for the lower face regions, indicating that the way in which culture shapes the categorization of facial expressions is largely driven by differences in information decoding from this part of the face. (c) 2016 APA, all rights reserved.

  18. Towards a Unified Framework for Pose, Expression, and Occlusion Tolerant Automatic Facial Alignment.

    Science.gov (United States)

    Seshadri, Keshav; Savvides, Marios

    2016-10-01

    We propose a facial alignment algorithm that is able to jointly deal with the presence of facial pose variation, partial occlusion of the face, and varying illumination and expressions. Our approach proceeds from sparse to dense landmarking steps using a set of specific models trained to best account for the shape and texture variation manifested by facial landmarks and facial shapes across pose and various expressions. We also propose the use of a novel l1-regularized least squares approach that we incorporate into our shape model, which is an improvement over the shape model used by several prior Active Shape Model (ASM) based facial landmark localization algorithms. Our approach is compared against several state-of-the-art methods on many challenging test datasets and exhibits a higher fitting accuracy on all of them.
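
    As an illustration of the l1-regularized least squares component, the sketch below fits sparse shape-mode coefficients to observed landmarks, using scikit-learn's Lasso as a stand-in solver; the point-distribution model, shapes, and regularization weight are invented, not the authors' model:

        import numpy as np
        from sklearn.linear_model import Lasso

        # Hypothetical point-distribution model: mean shape plus a PCA basis.
        n_landmarks = 68
        mean_shape = np.zeros(2 * n_landmarks)          # flattened (x1, y1, x2, y2, ...)
        basis = np.random.randn(2 * n_landmarks, 20)    # 20 shape modes

        # Observed landmarks, e.g., from a sparse detection step.
        observed = mean_shape + basis @ (np.random.randn(20) * 0.5)

        # l1-regularized least squares: find sparse mode coefficients b so that
        # mean_shape + basis @ b approximates the observed landmarks.
        lasso = Lasso(alpha=0.01, fit_intercept=False)
        lasso.fit(basis, observed - mean_shape)
        b = lasso.coef_
        fitted = mean_shape + basis @ b

    The l1 penalty keeps most mode coefficients at zero, which is one plausible reading of why such a shape model is less easily thrown off by occluded or misdetected landmarks than an unregularized least-squares fit.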

  19. Describing Facial Expressions : much more than meets the eye

    OpenAIRE

    Vercauteren, Gert; Orero, Pilar

    2013-01-01

    The article analyses emotions and their representation as facial expressions, and the different ways in which they can be audio described. Facial expressions in film and other visual media pose a very complex problem, as explained in the first part of the article. Given the implicit nature of visual communication, it may not always be possible for the audio describer to determine unambiguously which emotions should be described. The facial expressions that...

  20. Processing facial expressions of emotion: upright vs. inverted images

    Directory of Open Access Journals (Sweden)

    David eBimler

    2013-02-01

    Full Text Available We studied discrimination of briefly presented upright vs. inverted emotional facial expressions (FEs), hypothesising that inversion would impair emotion decoding by disrupting holistic FE processing. Stimuli were photographs of seven emotion prototypes from a male and a female poser (Ekman and Friesen, 1976), and eight intermediate morphs in each set. Subjects made speeded Same/Different judgements of emotional content for all upright (U) or inverted (I) pairs of FEs, presented for 500 ms, 100 times per pair. Signal detection theory revealed the sensitivity measure d' to be slightly but significantly higher for the upright FEs. In further analysis using multidimensional scaling (MDS), percentages of Same judgements were taken as an index of pairwise perceptual similarity, separately for the U and I presentation modes. The outcome was a 4D 'emotion expression space', with FEs represented as points and the dimensions identified as Happy-Sad, Surprise/Fear, Disgust, and Anger. The solutions for U and I FEs were compared by means of cophenetic and canonical correlation, Procrustes analysis, and weighted-Euclidean analysis of individual differences. Differences in discrimination produced by inverting FE stimuli were found to be small and manifested as minor changes in the MDS structure or the weights of the dimensions. Solutions differed substantially more between the two posers, however. Notably, for stimuli containing elements of happiness (whether U or I), the MDS structure revealed some signs of categorical perception, indicating that mouth curvature, the dominant feature conveying happiness, is visually salient and receives early processing. The findings suggest that for briefly presented FEs, Same/Different decisions are dominated by low-level visual analysis of abstract patterns of lightness and edge filters, but also reflect emerging featural analysis. These analyses, insensitive to face orientation, enable initial positive/negative valence
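
    The two analyses named here, signal-detection sensitivity and MDS, can be sketched as follows; the hit and false-alarm rates and the similarity matrix are invented, and scipy plus scikit-learn are assumed:

        import numpy as np
        from scipy.stats import norm
        from sklearn.manifold import MDS

        # Signal detection: d' from hit and false-alarm rates in the
        # Same/Different task (rates are illustrative).
        hit_rate, fa_rate = 0.82, 0.24
        d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)

        # MDS: percentage of "Same" judgements as pairwise similarity,
        # converted to dissimilarity for a 4-D expression space.
        pct_same = np.random.rand(15, 15) * 100    # stand-in similarity matrix
        pct_same = (pct_same + pct_same.T) / 2     # symmetrise
        np.fill_diagonal(pct_same, 100)            # a stimulus is identical to itself
        dissimilarity = 100 - pct_same
        space = MDS(n_components=4, dissimilarity="precomputed", random_state=0)
        coords = space.fit_transform(dissimilarity)
        print(round(d_prime, 2), coords.shape)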

  1. Functional integration of the posterior superior temporal sulcus correlates with facial expression recognition.

    Science.gov (United States)

    Wang, Xu; Song, Yiying; Zhen, Zonglei; Liu, Jia

    2016-05-01

    Face perception is essential for daily and social activities. Neuroimaging studies have revealed a distributed face network (FN) consisting of multiple regions that exhibit preferential responses to invariant or changeable facial information. However, our understanding about how these regions work collaboratively to facilitate facial information processing is limited. Here, we focused on changeable facial information processing, and investigated how the functional integration of the FN is related to the performance of facial expression recognition. To do so, we first defined the FN as voxels that responded more strongly to faces than objects, and then used a voxel-based global brain connectivity method based on resting-state fMRI to characterize the within-network connectivity (WNC) of each voxel in the FN. By relating the WNC and performance in the "Reading the Mind in the Eyes" Test across participants, we found that individuals with stronger WNC in the right posterior superior temporal sulcus (rpSTS) were better at recognizing facial expressions. Further, the resting-state functional connectivity (FC) between the rpSTS and right occipital face area (rOFA), early visual cortex (EVC), and bilateral STS were positively correlated with the ability of facial expression recognition, and the FCs of EVC-pSTS and OFA-pSTS contributed independently to facial expression recognition. In short, our study highlights the behavioral significance of intrinsic functional integration of the FN in facial expression processing, and provides evidence for the hub-like role of the rpSTS for facial expression recognition. Hum Brain Mapp 37:1930-1940, 2016. © 2016 Wiley Periodicals, Inc.
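
    The within-network connectivity (WNC) measure described here is essentially each voxel's average correlation with all other voxels of the face network. A minimal sketch, assuming preprocessed resting-state time courses and omitting the paper's specific preprocessing pipeline:

    ```python
    import numpy as np

    def within_network_connectivity(ts):
        """ts: (T, V) resting-state time courses for V face-network voxels.
        Returns each voxel's mean correlation with every other network voxel."""
        z = (ts - ts.mean(axis=0)) / ts.std(axis=0)   # z-score each voxel
        corr = (z.T @ z) / ts.shape[0]                # V x V correlation matrix
        np.fill_diagonal(corr, np.nan)                # drop self-correlations
        return np.nanmean(corr, axis=1)               # WNC per voxel
    ```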

  2. Impaired Attribution of Emotion to Facial Expressions in Anxiety and Major Depression

    National Research Council Canada - National Science Library

    Demenescu, Liliana R; Kortekaas, Rudie; den Boer, Johan A; Aleman, Ane

    2010-01-01

    .... In major depression, a significant emotion recognition impairment has been reported. It remains unclear whether the ability to recognize emotion from facial expressions is also impaired in anxiety disorders...

  3. Enhanced perceptual, emotional, and motor processing in response to dynamic facial expressions of emotion1

    National Research Council Canada - National Science Library

    YOSHIKAWA, SAKIKO; SATO, WATARU

    2006-01-01

    .... The results revealed that the broad region of visual cortices, the amygdala, and the right inferior frontal gyrus were more activated in response to dynamic facial expressions than control stimuli...

  5. Face-selective regions differ in their ability to classify facial expressions

    Science.gov (United States)

    Zhang, Hui; Japee, Shruti; Nolan, Rachel; Chu, Carlton; Liu, Ning; Ungerleider, Leslie G

    2016-01-01

    Recognition of facial expressions is crucial for effective social interactions. Yet, the extent to which the various face-selective regions in the human brain classify different facial expressions remains unclear. We used functional magnetic resonance imaging (fMRI) and support vector machine pattern classification analysis to determine how well face-selective brain regions are able to decode different categories of facial expression. Subjects participated in a slow event-related fMRI experiment in which they were shown 32 face pictures, portraying four different expressions: neutral, fearful, angry, and happy and belonging to eight different identities. Our results showed that only the amygdala and the posterior superior temporal sulcus (STS) were able to accurately discriminate between these expressions, albeit in different ways: The amygdala discriminated fearful faces from non-fearful faces, whereas STS discriminated neutral from emotional (fearful, angry and happy) faces. In contrast to these findings on the classification of emotional expression, only the fusiform face area (FFA) and anterior inferior temporal cortex (aIT) could discriminate among the various facial identities. Further, the amygdala and STS were better than FFA and aIT at classifying expression, while FFA and aIT were better than the amygdala and STS at classifying identity. Taken together, our findings indicate that the decoding of facial emotion and facial identity occurs in different neural substrates: the amygdala and STS for the former and FFA and aIT for the latter. PMID:26826513
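
    A minimal sketch of the kind of ROI-based SVM decoding analysis described (linear SVM with stratified cross-validation); the placeholder data and parameter choices are assumptions for illustration, not the study's pipeline:

    ```python
    import numpy as np
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(64, 200))        # placeholder: 64 trials x 200 ROI voxels
    y = np.repeat([0, 1, 2, 3], 16)       # neutral / fearful / angry / happy

    clf = SVC(kernel="linear", C=1.0)
    acc = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=8)).mean()
    print(f"cross-validated decoding accuracy: {acc:.2f} (chance = 0.25)")
    ```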

  6. Recognition of facial emotional expressions and its correlation with cognitive abilities in children with Down syndrome

    OpenAIRE

    Santana, Carla C. V. P. de; Souza, Wânia C. de; Feitosa, M. Angela G.

    2014-01-01

    Down syndrome (DS) is one of the most common chromosomal abnormalities. Delays in cognitive development are found in the first years of life and, as the years pass, may develop into intellectual disabilities that affect several domains, including difficulty recognizing emotional facial expressions. The present study investigated the recognition of six universal facial emotional expressions in a population of children aged 6-11 years who were divided into two groups: a DS group and a typically develo...

  7. Memory for facial expression is influenced by the background music playing during study

    OpenAIRE

    Woloszyn, Michael R.; Ewert, Laura

    2012-01-01

    The effect of the emotional quality of study-phase background music on subsequent recall for happy and sad facial expressions was investigated. Undergraduates (N = 48) viewed a series of line drawings depicting a happy or sad child in a variety of environments that were each accompanied by happy or sad music. Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were rec...

  8. Are event-related potentials to dynamic facial expressions of emotion related to individual differences in the accuracy of processing facial expressions and identity?

    Science.gov (United States)

    Recio, Guillermo; Wilhelm, Oliver; Sommer, Werner; Hildebrandt, Andrea

    2017-04-01

    Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.
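
    One step such individual-differences analyses rely on is extracting per-subject component latencies. A common sketch of N170 peak-latency extraction follows; the search window and the negative-peak convention are standard assumptions, not taken from the paper.

    ```python
    import numpy as np

    def n170_peak_latency(erp, times_ms, window=(130, 200)):
        """Latency (ms) of the N170, taken as the most negative deflection in a
        post-stimulus window of an occipito-temporal electrode's averaged ERP."""
        mask = (times_ms >= window[0]) & (times_ms <= window[1])
        return float(times_ms[mask][np.argmin(erp[mask])])
    ```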

  9. Human Gut Bacteria Are Sensitive to Melatonin and Express Endogenous Circadian Rhythmicity.

    Science.gov (United States)

    Paulose, Jiffin K; Wright, John M; Patel, Akruti G; Cassone, Vincent M

    2016-01-01

    Circadian rhythms are fundamental properties of most eukaryotes, but evidence of biological clocks that drive these rhythms in prokaryotes has been restricted to Cyanobacteria. In vertebrates, the gastrointestinal system expresses circadian patterns of gene expression, motility and secretion in vivo and in vitro, and recent studies suggest that the enteric microbiome is regulated by the host's circadian clock. However, it is not clear how the host's clock regulates the microbiome. Here, we demonstrate that at least one species of commensal bacterium from the human gastrointestinal system, Enterobacter aerogenes, is sensitive to the neurohormone melatonin, which is secreted into the gastrointestinal lumen, and expresses circadian patterns of swarming and motility. Melatonin specifically increases the magnitude of swarming in cultures of E. aerogenes, but not in Escherichia coli or Klebsiella pneumoniae. The swarming appears to occur daily, and transformation of E. aerogenes with a flagellar motor-protein-driven lux plasmid confirms a temperature-compensated circadian rhythm of luciferase activity, which is synchronized in the presence of melatonin. Altogether, these data demonstrate a circadian clock in a non-cyanobacterial prokaryote and suggest that the human circadian system may regulate its microbiome through the entrainment of bacterial clocks.
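
    Rhythms like the luciferase reporter signal here are commonly quantified with a single-component cosinor fit. A minimal sketch on synthetic data follows; the analysis choice and all numbers are illustrative, not taken from the paper.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def cosinor(t, mesor, amplitude, acrophase, period=24.0):
        """Single-component cosinor: mean level plus a cosine of fixed period (h)."""
        return mesor + amplitude * np.cos(2 * np.pi * (t - acrophase) / period)

    t = np.arange(0, 72, 2.0)                          # 3 days, 2-h sampling
    lum = (100 + 20 * np.cos(2 * np.pi * (t - 6) / 24)
           + np.random.normal(0, 3, t.size))           # synthetic reporter signal

    (mesor, amp, phase), _ = curve_fit(cosinor, t, lum, p0=(lum.mean(), lum.std(), 0.0))
    print(f"amplitude {amp:.1f}, acrophase {phase % 24:.1f} h")
    ```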

  10. Recognition of Others' and Own Facial Expressions and Production of Facial Expression : Children and Adults with Autism

    OpenAIRE

    菊池, 哲平; 古賀, 精治

    2001-01-01

    To clarify the characteristics of emotion understanding in children and adults with autism, we experimentally examined the ability to recognize and to produce facial expressions using facial photographs. Children and adults with autism, their mothers, and university students with no prior contact with them formed groups of three, and each judged photographs of the others' faces expressing the emotions "happy", "sad" and "angry". Compared with a control group of typically developing young children, the main findings were as follows: 1) the accuracy of participants with autism in judging the facial photographs of others (the university students and mothers) was lower than that of typically developing children; 2) when the university students and mothers judged the expressions produced by participants with autism, accuracy was lower than for the expressions of typically developing children; 3) when participants with autism judged the expressions they themselves had produced, accuracy did not differ from that of typically developing children; and 4) unlike typically developing children, participants with autism showed no recognition advantage for "happy" expressions.

  11. Emotional conflict in facial expression processing during scene viewing: an ERP study.

    Science.gov (United States)

    Xu, Qiang; Yang, Yaping; Zhang, Entao; Qiao, Fuqiang; Lin, Wenyi; Liang, Ningjian

    2015-05-22

    Facial expressions are fundamental emotional stimuli, as they convey important information in social interaction. In everyday life a face always appears in a complex context, and the scenes in which faces are embedded provide a typical visual context. The aim of the present study was to investigate the processing of emotional conflict between facial expressions and emotional scenes by recording event-related potentials (ERPs). We found that when the scene was presented before the face-scene compound stimulus, the scene influenced facial expression processing. Specifically, emotionally incongruent (conflicting) face-scene compound stimuli elicited a larger fronto-central N2 amplitude than emotionally congruent compound stimuli. The effect occurred in the post-perceptual stage of facial expression processing and reflected emotional conflict monitoring between emotional scenes and facial expressions. The present findings emphasize the importance of emotional scenes as a context factor in the study of facial expression processing. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders

    Science.gov (United States)

    2012-01-01

    Background Impairment of social interaction via facial expressions represents a core clinical feature of autism spectrum disorders (ASD). However, the neural correlates of this dysfunction remain unidentified. Because this dysfunction is manifested in real-life situations, we hypothesized that the observation of dynamic, compared with static, facial expressions would reveal abnormal brain functioning in individuals with ASD. We presented dynamic and static facial expressions of fear and happiness to individuals with high-functioning ASD and to age- and sex-matched typically developing controls and recorded their brain activities using functional magnetic resonance imaging (fMRI). Result Regional analysis revealed reduced activation of several brain regions in the ASD group compared with controls in response to dynamic versus static facial expressions, including the middle temporal gyrus (MTG), fusiform gyrus, amygdala, medial prefrontal cortex, and inferior frontal gyrus (IFG). Dynamic causal modeling analyses revealed that bi-directional effective connectivity involving the primary visual cortex–MTG–IFG circuit was enhanced in response to dynamic as compared with static facial expressions in the control group. Group comparisons revealed that all these modulatory effects were weaker in the ASD group than in the control group. Conclusions These results suggest that weak activity and connectivity of the social brain network underlie the impairment in social interaction involving dynamic facial expressions in individuals with ASD. PMID:22889284

  13. Acute alcohol effects on facial expressions of emotions in social drinkers: a systematic review

    Science.gov (United States)

    Capito, Eva Susanne; Lautenbacher, Stefan; Horn-Hofmann, Claudia

    2017-01-01

    Background As known from everyday experience and experimental research, alcohol modulates emotions. Particularly regarding social interaction, the effects of alcohol on the facial expression of emotion might be of relevance. However, these effects have not been systematically studied. We performed a systematic review on acute alcohol effects on social drinkers’ facial expressions of induced positive and negative emotions. Materials and methods With a predefined algorithm, we searched three electronic databases (PubMed, PsycInfo, and Web of Science) for studies conducted on social drinkers that used acute alcohol administration, emotion induction, and standardized methods to record facial expressions. We excluded those studies that failed common quality standards, and finally selected 13 investigations for this review. Results Overall, alcohol exerted effects on facial expressions of emotions in social drinkers. These effects were not generally disinhibiting, but varied depending on the valence of emotion and on social interaction. Being consumed within social groups, alcohol mostly influenced facial expressions of emotions in a socially desirable way, thus underscoring the view of alcohol as social lubricant. However, methodical differences regarding alcohol administration between the studies complicated comparability. Conclusion Our review highlighted the relevance of emotional valence and social-context factors for acute alcohol effects on social drinkers’ facial expressions of emotions. Future research should investigate how these alcohol effects influence the development of problematic drinking behavior in social drinkers. PMID:29255375

  14. A Web-based Game for Teaching Facial Expressions to Schizophrenic Patients.

    Science.gov (United States)

    Gülkesen, Kemal Hakan; Isleyen, Filiz; Cinemre, Buket; Samur, Mehmet Kemal; Sen Kaya, Semiha; Zayim, Nese

    2017-07-12

    Recognizing facial expressions is an important social skill. In some psychological disorders, such as schizophrenia, loss of this skill may complicate the patient's daily life. Prior research has shown that information technology may help to develop facial expression recognition skills through educational software and games. We examined whether a computer game designed for teaching facial expressions would improve the facial expression recognition skills of patients with schizophrenia. We developed a website composed of eight serious games. Thirty-two patients were given a pre-test composed of 21 facial expression photographs; eighteen patients were in the study group and 14 in the control group. Patients in the study group were asked to play the games on the website. After a period of one month, we performed a post-test for all patients. In the pre-test, the median number of correct answers (out of 21) was 17.5 in the control group and 16.5 in the study group. The median post-test score was 18 in the control group (p=0.052), whereas it was 20 in the study group. These findings suggest that serious games may be used for the purpose of educating people who have difficulty in recognizing facial expressions.

  15. Discrimination of emotional facial expressions by tufted capuchin monkeys (Sapajus apella).

    Science.gov (United States)

    Calcutt, Sarah E; Rubin, Taylor L; Pokorny, Jennifer J; de Waal, Frans B M

    2017-02-01

    Tufted or brown capuchin monkeys (Sapajus apella) have been shown to recognize conspecific faces as well as categorize them according to group membership. Little is known, though, about their capacity to differentiate between emotionally charged facial expressions or whether facial expressions are processed as a collection of features or configurally (i.e., as a whole). In 3 experiments, we examined whether tufted capuchins (a) differentiate photographs of neutral faces from either affiliative or agonistic expressions, (b) use relevant facial features to make such choices or view the expression as a whole, and (c) demonstrate an inversion effect for facial expressions suggestive of configural processing. Using an oddity paradigm presented on a computer touchscreen, we collected data from 9 adult and subadult monkeys. Subjects discriminated between emotional and neutral expressions with an exceptionally high success rate, including differentiating open-mouth threats from neutral expressions even when the latter contained varying degrees of visible teeth and mouth opening. They also showed an inversion effect for facial expressions, results that may indicate that quickly recognizing expressions does not originate solely from feature-based processing but likely a combination of relational processes. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Dimensional Information-Theoretic Measurement of Facial Emotion Expressions in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Jihun Hamm

    2014-01-01

    Altered facial expressions of emotions are characteristic impairments in schizophrenia. Ratings of affect have traditionally been limited to clinical rating scales and facial muscle movement analysis, which require extensive training and have limitations based on methodology and ecological validity. To improve reliable assessment of dynamic facial expression changes, we have developed automated measurements of facial emotion expressions based on information-theoretic measures of expressivity: the ambiguity and distinctiveness of facial expressions. These measures were examined in matched groups of persons with schizophrenia (n=28) and healthy controls (n=26) who underwent video acquisition to assess the expressivity of basic emotions (happiness, sadness, anger, fear, and disgust) in evoked conditions. Persons with schizophrenia scored higher on ambiguity, the measure of conditional entropy within the expression of a single emotion, and they scored lower on distinctiveness, the measure of mutual information across expressions of different emotions. The automated measures compared favorably with observer-based ratings. This method can be applied for delineating dynamic emotional expressivity in healthy and clinical populations.
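
    In plain-frequency terms, the two measures can be read as a conditional entropy and a mutual information over a confusion-style count table. A minimal sketch under that reading follows; the paper's exact estimators may differ.

    ```python
    import numpy as np

    def entropy(p):
        """Shannon entropy (bits) of a probability vector."""
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def ambiguity_and_distinctiveness(counts):
        """counts[e, s]: frequency of displayed facial state s while emotion e
        was evoked. Ambiguity ~ H(S|E); distinctiveness ~ I(E;S) = H(S) - H(S|E)."""
        joint = counts / counts.sum()
        p_e = joint.sum(axis=1)                     # P(emotion)
        p_s = joint.sum(axis=0)                     # P(displayed state)
        h_s_given_e = sum(p_e[e] * entropy(joint[e] / p_e[e])
                          for e in range(len(p_e)) if p_e[e] > 0)
        return h_s_given_e, entropy(p_s) - h_s_given_e
    ```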

  17. Rhythmic expression of circadian clock genes in the preovulatory ovarian follicles of the laying hen.

    Directory of Open Access Journals (Sweden)

    Zhichao Zhang

    The circadian clock is reported to play a role in the ovaries of a variety of vertebrate species, including the domestic hen. However, the ovary is an organ that changes daily, and the laying hen maintains a strict follicular hierarchy. The aim of this study was to examine the spatial-temporal expression of several known canonical clock genes in the granulosa and theca layers of six hierarchy follicles. We demonstrated that the granulosa cells (GCs) of the F1-F3 follicles harbored intrinsic oscillatory mechanisms in vivo. In addition, cultured GCs from F1 follicles exposed to luteinizing hormone (LH) synchronization displayed Per2 mRNA oscillations, whereas the less mature GCs (F5 plus F6) displayed no circadian change in Per2 mRNA levels. Cultures containing follicle-stimulating hormone (FSH) combined with LH expressed levels of Per2 mRNA that were 2.5-fold higher than those in cultures with LH or FSH alone. These results show that there is spatial specificity in the localization of clock cells in hen preovulatory follicles. In addition, our results support the hypothesis that gonadotropins provide a cue for the development of the functional cellular clock in immature GCs.

  18. Differential expression of wound fibrotic factors between facial and trunk dermal fibroblasts.

    Science.gov (United States)

    Kurita, Masakazu; Okazaki, Mutsumi; Kaminishi-Tanikawa, Akiko; Niikura, Mamoru; Takushima, Akihiko; Harii, Kiyonori

    2012-01-01

    Clinically, wounds on the face tend to heal with less scarring than those on the trunk, but the causes of this difference have not been clarified. Fibroblasts obtained from different parts of the body are known to show different properties. To investigate whether the characteristic properties of facial and trunk wound healing are caused by differences in local fibroblasts, we comparatively analyzed the functional properties of superficial and deep dermal fibroblasts obtained from the facial and trunk skin of seven individuals, with an emphasis on tendency for fibrosis. Proliferation kinetics and mRNA and protein expression of 11 fibrosis-associated factors were investigated. The proliferation kinetics of facial and trunk fibroblasts were identical, but the expression and production levels of profibrotic factors, such as extracellular matrix, transforming growth factor-β1, and connective tissue growth factor mRNA, were lower in facial fibroblasts when compared with trunk fibroblasts, while the expression of antifibrotic factors, such as collagenase, basic fibroblast growth factor, and hepatocyte growth factor, showed no clear trends. The differences in functional properties of facial and trunk dermal fibroblasts were consistent with the clinical tendencies of healing of facial and trunk wounds. Thus, the differences between facial and trunk scarring are at least partly related to the intrinsic nature of the local dermal fibroblasts.
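
    The abstract does not name the quantification assay, but mRNA comparisons like these are typically reported as 2^-ΔΔCt fold changes relative to a reference gene. A sketch under that assumption, with made-up Ct values:

    ```python
    def relative_expression(ct_target_sample, ct_ref_sample, ct_target_calib, ct_ref_calib):
        """2^-deltadeltaCt relative quantification of a target gene in a sample
        (e.g., facial fibroblasts) vs. a calibrator (e.g., trunk fibroblasts),
        each normalized to a reference gene."""
        ddct = (ct_target_sample - ct_ref_sample) - (ct_target_calib - ct_ref_calib)
        return 2.0 ** (-ddct)

    # Hypothetical numbers: target Ct 24.1 (face) vs 22.8 (trunk), reference 18.0 in both:
    print(relative_expression(24.1, 18.0, 22.8, 18.0))   # ~0.41, i.e., lower in face
    ```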

  19. Development and validation of an Argentine set of facial expressions of emotion.

    Science.gov (United States)

    Vaiman, Marcelo; Wagner, Mónica Anna; Caicedo, Estefanía; Pereno, Germán Leandro

    2017-02-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion research is receiving in this region. Here we present the development and validation of the Universidad Nacional de Cordoba, Expresiones de Emociones Faciales (UNCEEF), a Facial Action Coding System (FACS)-verified set of pictures of Argentineans expressing the six basic emotions, plus neutral expressions. FACS scores, recognition rates, Hu scores, and discrimination indices are reported. Evidence of convergent validity was obtained using the Pictures of Facial Affect in an Argentine sample. However, recognition accuracy was greater for UNCEEF. The importance of local sets of emotion pictures is discussed.
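
    The Hu scores mentioned are Wagner's (1993) unbiased hit rates, computable directly from the raters' confusion matrix; a minimal sketch:

    ```python
    import numpy as np

    def unbiased_hit_rates(confusion):
        """Wagner's unbiased hit rate per emotion: Hu_i = C[i,i]^2 /
        (row_sum_i * col_sum_i), where rows are posed emotions and columns are
        the labels raters chose. Penalizes labels that raters overuse."""
        C = np.asarray(confusion, dtype=float)
        return np.diag(C) ** 2 / (C.sum(axis=1) * C.sum(axis=0))
    ```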

  20. Working memory and facial expression recognition in patients with Parkinson's disease.

    Science.gov (United States)

    Alonso-Recio, Laura; Martín-Plasencia, Pilar; Loeches-Alonso, Ángela; Serrano-Rodríguez, Juan M

    2014-05-01

    Facial expression recognition impairment has been reported in Parkinson's disease. While some authors have referred to specific emotional disabilities, others view them as secondary to executive deficits frequently described in the disease, such as working memory. The present study aims to analyze the relationship between working memory and facial expression recognition abilities in Parkinson's disease. We observed 50 patients with Parkinson's disease and 49 healthy controls by means of an n-back procedure with four types of stimuli: emotional facial expressions, gender, spatial locations, and non-sense syllables. Other executive and visuospatial neuropsychological tests were also administered. Results showed that Parkinson's disease patients with high levels of disability performed worse than healthy individuals on the emotional facial expression and spatial location tasks. Moreover, spatial location task performance was correlated with executive neuropsychological scores, but emotional facial expression was not. Thus, working memory seems to be altered in Parkinson's disease, particularly in tasks that involve the appreciation of spatial relationships in stimuli. Additionally, non-executive, facial emotional recognition difficulty seems to be present and related to disease progression.

  1. Emotional Interaction with a Robot Using Facial Expressions, Face Pose and Hand Gestures

    Directory of Open Access Journals (Sweden)

    Myung-Ho Ju

    2012-09-01

    Facial expression is one of the major cues for emotional communication between humans and robots. In this paper, we present emotional human-robot interaction techniques using facial expressions combined with other useful cues, such as face pose and hand gestures. For the efficient recognition of facial expressions, it is important to understand the positions of facial feature points. To do this, our technique estimates the 3D position of each feature point by constructing 3D face models fitted to the user. To construct the 3D face models, we first construct an Active Appearance Model (AAM) for variations of the facial expression. Next, we estimate depth information at each feature point from frontal- and side-view images. By combining the estimated depth information with the AAM, the 3D face model is fitted to the user according to the various 3D transformations of each feature point. Self-occlusions due to 3D pose variation are also processed by a region weighting function on the normalized face at each frame. The recognized facial expressions - such as happiness, sadness, fear and anger - are used to change the colours of foreground and background objects in the robot displays, as well as other robot responses. The proposed method displayed desirable results when viewing comics with entertainment robots in our experiments.

  2. A real-time automated system for the recognition of human facial expressions.

    Science.gov (United States)

    Anderson, Keith; McOwan, Peter W

    2006-02-01

    A fully automated, multistage system for real-time recognition of facial expression is presented. The system uses facial motion to characterize monochrome frontal views of facial expressions and is able to operate effectively in cluttered and dynamic scenes, recognizing the six emotions universally associated with unique facial expressions, namely happiness, sadness, disgust, surprise, fear, and anger. Faces are located using a spatial ratio template tracker algorithm. Optical flow of the face is subsequently determined using a real-time implementation of a robust gradient model. The expression recognition system then averages facial velocity information over identified regions of the face and cancels out rigid head motion by taking ratios of this averaged motion. The motion signatures produced are then classified using Support Vector Machines as either nonexpressive or as one of the six basic emotions. The completed system is demonstrated in two simple affective computing applications that respond in real-time to the facial expressions of the user, thereby providing the potential for improvements in the interaction between a computer user and technology.
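
    One reading of the ratio step described here, regional motion averages divided by whole-face motion so that rigid head movement roughly cancels, can be sketched as follows; the region masks and the componentwise ratio are illustrative assumptions, not the authors' exact formulation.

    ```python
    import numpy as np

    def regional_motion_signature(flow, regions):
        """flow: (H, W, 2) dense optical flow over a face crop.
        regions: dict of boolean masks, e.g., {'brows': ..., 'mouth': ...}.
        Rigid head motion shifts all regions roughly equally, so dividing each
        regional average by the whole-face average suppresses it."""
        global_motion = flow.reshape(-1, 2).mean(axis=0)
        return {name: flow[mask].mean(axis=0) / (global_motion + 1e-8)
                for name, mask in regions.items()}
    ```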

  3. Development and Standardization of Extended ChaeLee Korean Facial Expressions of Emotions

    Science.gov (United States)

    Lee, Kyoung-Uk; Kim, JiEun; Yeon, Bora; Kim, Seung-Hwan

    2013-01-01

    Objective In recent years there has been an enormous increase of neuroscience research using the facial expressions of emotion. This has led to a need for ethnically specific facial expressions data, due to differences of facial emotion processing among different ethnicities. Methods Fifty professional actors were asked to pose with each of the following facial expressions in turn: happiness, sadness, fear, anger, disgust, surprise, and neutral. A total of 283 facial pictures of 40 actors were selected to be included in the validation study. Facial expression emotion identification was performed in a validation study by 104 healthy raters who provided emotion labeling, valence ratings, and arousal ratings. Results A total of 259 images of 37 actors were selected for inclusion in the Extended ChaeLee Korean Facial Expressions of Emotions tool, based on the analysis of results. In these images, the actors' mean age was 38±11.1 years (range 26-60 years), with 16 (43.2%) males and 21 (56.8%) females. The consistency varied by emotion type, showing the highest for happiness (95.5%) and the lowest for fear (49.0%). The mean scores for the valence ratings ranged from 4.0 (happiness) to 1.9 (sadness, anger, and disgust). The mean scores for the arousal ratings ranged from 3.7 (anger and fear) to 2.5 (neutral). Conclusion We obtained facial expressions from individuals of Korean ethnicity and performed a study to validate them. Our results provide a tool for the affective neurosciences which could be used for the investigation of mechanisms of emotion processing in healthy individuals as well as in patients with various psychiatric disorders. PMID:23798964

  5. Emotion Index of Cover Song Music Video Clips based on Facial Expression Recognition

    DEFF Research Database (Denmark)

    Vidakis, Nikolaos; Kavallakis, George; Triantafyllidis, Georgios

    2017-01-01

    This paper presents a scheme of creating an emotion index of cover song music video clips by recognizing and classifying facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms which are employed for expression recognition, along with the use of ...... of a neural network system using the features extracted by the SIFT algorithm. Also we support the need of this fusion of different expression recognition algorithms, because of the way that emotions are linked to facial expressions in music video clips.......This paper presents a scheme of creating an emotion index of cover song music video clips by recognizing and classifying facial expressions of the artist in the video. More specifically, it fuses effective and robust algorithms which are employed for expression recognition, along with the use...

  6. Interpreting Text Messages with Graphic Facial Expression by Deaf and Hearing People

    Directory of Open Access Journals (Sweden)

    Chihiro eSaegusa

    2015-04-01

    In interpreting verbal messages, humans use not only verbal information but also non-verbal signals such as facial expression. For example, when a person says "yes" with a troubled face, what he or she really means appears ambiguous. In the present study, we examined how deaf and hearing people differ in perceiving the real meanings of texts accompanied by representations of facial expression. Deaf and hearing participants were asked to imagine that the face presented on the computer monitor was asked a question by another person (e.g., "Do you like her?"). They observed either a realistic or a schematic face with a different magnitude of positive or negative expression on a computer monitor. A balloon that contained either a positive or negative text response to the question appeared at the same time as the face. Then, participants rated how much the individual on the monitor really meant it (i.e., perceived earnestness), using a 7-point scale. Results showed that facial expression significantly modulated perceived earnestness. The influence of positive expression on negative text responses was relatively weaker than that of negative expression on positive responses (i.e., "no" tended to mean "no" irrespective of facial expression) for both participant groups. However, this asymmetrical effect was stronger in the hearing group. These results suggest that the contribution of facial expression to perceiving real meanings from text messages is qualitatively similar but quantitatively different between deaf and hearing people.

  7. Neurophysiology of spontaneous facial expressions: I. Motor control of the upper and lower face is behaviorally independent in adults.

    Science.gov (United States)

    Ross, Elliott D; Gupta, Smita S; Adnan, Asif M; Holden, Thomas L; Havlicek, Joseph; Radhakrishnan, Sridhar

    2016-03-01

    Facial expressions are described traditionally as monolithic entities. However, humans have the capacity to produce facial blends, in which the upper and lower face simultaneously display different emotional expressions. This, in turn, has led to the Component Theory of facial expressions. Recent neuroanatomical studies in monkeys have demonstrated that there are separate cortical motor areas for controlling the upper and lower face that, presumably, also occur in humans. The lower face is represented on the posterior ventrolateral surface of the frontal lobes in the primary motor and premotor cortices and the upper face is represented on the medial surface of the posterior frontal lobes in the supplementary motor and anterior cingulate cortices. Our laboratory has been engaged in a series of studies exploring the perception and production of facial blends. Using high-speed videography, we began measuring the temporal aspects of facial expressions to develop a more complete understanding of the neurophysiology underlying facial expressions and facial blends. The goal of the research presented here was to determine if spontaneous facial expressions in adults are predominantly monolithic or exhibit independent motor control of the upper and lower face. We found that spontaneous facial expressions are very complex and that the motor control of the upper and lower face is overwhelmingly independent, thus robustly supporting the Component Theory of facial expressions. Seemingly monolithic expressions, be they full facial or facial blends, are most likely the result of a timing coincident rather than a synchronous coordination between the ventrolateral and medial cortical motor areas responsible for controlling the lower and upper face, respectively. In addition, we found evidence that the right and left face may also exhibit independent motor control, thus supporting the concept that spontaneous facial expressions are organized predominantly across the horizontal facial

  8. An optimized ERP brain-computer interface based on facial expression changes

    Science.gov (United States)

    Jin, Jing; Daly, Ian; Zhang, Yu; Wang, Xingyu; Cichocki, Andrzej

    2014-06-01

    Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon was commonly seen in visual attention-based brain-computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presenting images) which were adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions could be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is big enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions, a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user supplied subjective measures. Main results. The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be

  9. Revisiting the Relationship between the Processing of Gaze Direction and the Processing of Facial Expression

    Science.gov (United States)

    Ganel, Tzvi

    2011-01-01

    There is mixed evidence on the nature of the relationship between the perception of gaze direction and the perception of facial expressions. Major support for shared processing of gaze and expression comes from behavioral studies that showed that observers cannot process expression or gaze and ignore irrelevant variations in the other dimension.…

  10. Does Facial Expressivity Count? How Typically Developing Children Respond Initially to Children with Autism

    Science.gov (United States)

    Stagg, Steven D.; Slavny, Rachel; Hand, Charlotte; Cardoso, Alice; Smith, Pamela

    2014-01-01

    Research investigating expressivity in children with autism spectrum disorder has reported flat affect or bizarre facial expressivity within this population; however, the impact expressivity may have on first impression formation has received little research input. We examined how videos of children with autism spectrum disorder were rated for…

  11. Facial Emotion Recognition and Expression in Parkinson's Disease: An Emotional Mirror Mechanism?

    Science.gov (United States)

    Ricciardi, Lucia; Visco-Comandini, Federica; Erro, Roberto; Morgante, Francesca; Bologna, Matteo; Fasano, Alfonso; Ricciardi, Diego; Edwards, Mark J; Kilner, James

    2017-01-01

    Parkinson's disease (PD) patients have impairment of facial expressivity (hypomimia) and difficulties in interpreting the emotional facial expressions produced by others, especially for aversive emotions. We aimed to evaluate the ability to produce facial emotional expressions and to recognize facial emotional expressions produced by others in a group of PD patients and a group of healthy participants, in order to explore the relationship between these two abilities and any differences between the two groups of participants. Twenty non-demented, non-depressed PD patients and twenty healthy participants (HC) matched for demographic characteristics were studied. The ability to recognize emotional facial expressions was assessed with the Ekman 60-faces test (emotion recognition task). Participants were video-recorded while posing facial expressions of 6 primary emotions (happiness, sadness, surprise, disgust, fear and anger). The most expressive pictures for each emotion were derived from the videos. Ten healthy raters were asked to look at the pictures displayed on a computer screen in pseudo-random fashion and to identify the emotional label in a six-forced-choice response format (emotion expressivity task). Reaction time (RT) and accuracy of responses were recorded. At the end of each trial the participant was asked to rate his/her confidence in his/her perceived accuracy of response. For emotion recognition, PD patients scored lower than HC on the Ekman total score and on the happiness, fear, anger and sadness sub-scores. In the emotion expressivity task, PD and HC significantly differed in the total score (p = 0.05) and in the sub-scores for happiness, sadness and anger. There was a significant positive correlation between emotion facial recognition and expressivity in both groups; the correlation was even stronger when ranking emotions from the best recognized to the worst (R = 0.75, p = 0.004). PD patients showed difficulties in recognizing emotional facial

  13. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions.

    Directory of Open Access Journals (Sweden)

    Miiamaaria V Kujala

    Facial expressions are important for humans in communicating emotions to conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people's perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects' personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans, but not dogs, higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expressions in a similar manner, and that the perception of both species is influenced by psychological factors of the evaluators. Empathy especially affects both the speed and intensity of rating dogs' emotional facial expressions.

  14. Human Empathy, Personality and Experience Affect the Emotion Ratings of Dog and Human Facial Expressions

    Science.gov (United States)

    Kujala, Miiamaaria V.; Somppi, Sanni; Jokela, Markus; Vainio, Outi; Parkkonen, Lauri

    2017-01-01

    Facial expressions are important for humans in communicating emotions to the conspecifics and enhancing interpersonal understanding. Many muscles producing facial expressions in humans are also found in domestic dogs, but little is known about how humans perceive dog facial expressions, and which psychological factors influence people’s perceptions. Here, we asked 34 observers to rate the valence, arousal, and the six basic emotions (happiness, sadness, surprise, disgust, fear, and anger/aggressiveness) from images of human and dog faces with Pleasant, Neutral and Threatening expressions. We investigated how the subjects’ personality (the Big Five Inventory), empathy (Interpersonal Reactivity Index) and experience of dog behavior affect the ratings of dog and human faces. Ratings of both species followed similar general patterns: human subjects classified dog facial expressions from pleasant to threatening very similarly to human facial expressions. Subjects with higher emotional empathy evaluated Threatening faces of both species as more negative in valence and higher in anger/aggressiveness. More empathetic subjects also rated the happiness of Pleasant humans but not dogs higher, and they were quicker in their valence judgments of Pleasant human, Threatening human and Threatening dog faces. Experience with dogs correlated positively with ratings of Pleasant and Neutral dog faces. Personality also had a minor effect on the ratings of Pleasant and Neutral faces in both species. The results imply that humans perceive human and dog facial expression in a similar manner, and the perception of both species is influenced by psychological factors of the evaluators. Especially empathy affects both the speed and intensity of rating dogs’ emotional facial expressions. PMID:28114335

  15. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than previous researches and other fusion methods.
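
    The fusion step described here, an SVM trained on the pair of shape- and appearance-matching scores, can be sketched as follows; the score distributions are placeholders, not the paper's data.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    same = rng.normal([0.8, 0.7], 0.1, (100, 2))   # (shape, appearance) score pairs
    diff = rng.normal([0.4, 0.3], 0.1, (100, 2))   # for same- vs different-class pairs
    X, y = np.vstack([same, diff]), np.array([1] * 100 + [0] * 100)

    fusion = SVC(kernel="rbf").fit(X, y)             # decision boundary in score space
    print(fusion.decision_function([[0.75, 0.65]]))  # fused matching score
    ```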

  18. Rhythmic expression of cytochrome P450 epoxygenases CYP4x1 and CYP2c11 in the rat brain and vasculature.

    Science.gov (United States)

    Carver, Koryn A; Lourim, David; Tryba, Andrew K; Harder, David R

    2014-12-01

    Mammals have circadian variation in blood pressure, heart rate, vascular tone, thrombotic tendency, and cerebral blood flow (CBF). These changes may be in part orchestrated by circadian variation in clock gene expression within cells comprising the vasculature that modulate blood flow (e.g., fibroblasts, cerebral vascular smooth muscle cells, astrocytes, and endothelial cells). However, the downstream mechanisms that underlie circadian changes in blood flow are unknown. Cytochrome P450 epoxygenases (Cyp4x1 and Cyp2c11) are expressed in the brain and vasculature and metabolize arachidonic acid (AA) to form epoxyeicosatrienoic acids (EETs). EETs are released from astrocytes, neurons, and vascular endothelial cells and act as potent vasodilators, increasing blood flow. EETs released in response to increases in neural activity evoke a corresponding increase in blood flow known as the functional hyperemic response. We examine the hypothesis that Cyp2c11 and Cyp4x1 expression and EETs production vary in a circadian manner in the rat brain and cerebral vasculature. RT-PCR revealed circadian/diurnal expression of clock and clock-controlled genes as well as Cyp4x1 and Cyp2c11, within the rat hippocampus, middle cerebral artery, inferior vena cava, hippocampal astrocytes and rat brain microvascular endothelial cells. Astrocyte and endothelial cell culture experiments revealed rhythmic variation in Cyp4x1 and Cyp2c11 gene and protein expression with a 12-h period and parallel rhythmic production of EETs. Our data suggest there is circadian regulation of Cyp4x1 and Cyp2c11 gene expression. Such rhythmic EETs production may contribute to circadian changes in blood flow and alter risk of adverse cardiovascular events throughout the day.
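
    For unevenly or sparsely sampled expression time courses like these, a Lomb-Scargle periodogram is one standard way to confirm a dominant (here roughly 12-h) period. A minimal sketch on synthetic data; the sampling scheme and signal are illustrative, not the study's measurements.

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    t = np.arange(0, 48, 4.0)                          # sampling times (h)
    expr = (1 + 0.5 * np.cos(2 * np.pi * t / 12)
            + np.random.normal(0, 0.1, t.size))        # synthetic Cyp expression

    periods = np.linspace(6, 30, 500)
    power = lombscargle(t, expr - expr.mean(), 2 * np.pi / periods, normalize=True)
    print(f"dominant period: {periods[power.argmax()]:.1f} h")  # ~12 h here
    ```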

  19. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions

    Directory of Open Access Journals (Sweden)

    Sabrina Boll

    2014-07-01

    Full Text Available Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflects individual differences in attending to diagnostic features of facial expressions. For this purpose, visual exploration of emotional facial expressions was compared between a low (n=39) and a high (n=40) 5-HTT expressing group of healthy human volunteers in an eye tracking paradigm. Emotional faces were presented while manipulating the initial fixation such that saccadic changes towards the eyes and towards the mouth could be identified. We found that the low versus the high 5-HTT group demonstrated greater accuracy with regard to emotion classifications, particularly when faces were presented for a longer duration. No group differences in gaze orientation towards diagnostic facial features could be observed. However, participants in the low 5-HTT group exhibited more and faster fixation changes for certain emotions when faces were presented for a longer duration, and overall face fixation times were reduced for this genotype group. These results suggest that the 5-HTT gene influences social perception by modulating the general vigilance to social cues rather than selectively affecting the pre-attentive detection of diagnostic facial features.

  20. [Processing facial identity and emotional expression in normal aging and neurodegenerative diseases].

    Science.gov (United States)

    Chaby, Laurence; Narme, Pauline

    2009-03-01

    The ability to recognize facial identity and emotional facial expression is central to social relationships. This paper reviews studies concerning face recognition and emotional facial expression during normal aging as well as in neurodegenerative diseases occurring in the elderly. It focuses on Alzheimer's disease, frontotemporal and semantic dementia, and also Parkinson's disease. The results of studies on healthy elderly individuals show subtle alterations in the recognition of facial identity and emotional facial expression from the age of 50 years, and increasing after 70. Studies in neurodegenerative diseases show that - during their initial stages - face recognition and facial expression can be specifically affected. Little has been done to assess these difficulties in clinical practice. They could constitute a useful marker for differential diagnosis, especially for the clinical differentiation of Alzheimer's disease (AD) from frontotemporal dementia (FTD). Social difficulties and some behavioural problems observed in these patients may, at least partly, result from these deficits in face processing. Thus, it is important to specify the possible underlying anatomofunctional substrates of these deficits as well as to plan suitable remediation programs.

  1. A facial expression of pax: Assessing children's "recognition" of emotion from faces.

    Science.gov (United States)

    Nelson, Nicole L; Russell, James A

    2016-01-01

    In a classic study, children were shown an array of facial expressions and asked to choose the person who expressed a specific emotion. Children were later asked to name the emotion in the face with any label they wanted. Subsequent research often relied on the same two tasks--choice from array and free labeling--to support the conclusion that children recognize basic emotions from facial expressions. Here, five studies (N=120, 2- to 10-year-olds) in which a novel nonsense facial expression was included in the array showed that these two tasks produce illusory recognition: children "recognized" a nonsense emotion (pax or tolen) and two familiar emotions (fear and jealousy) from the same nonsense face. Children likely used a process of elimination; they paired the unknown facial expression with a label given in the choice-from-array task and, after just two trials, freely labeled the new facial expression with the new label. These data indicate that past studies using this method may have overestimated children's expression knowledge. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. P2-35: The KU Facial Expression Database: A Validated Database of Emotional and Conversational Expressions

    Directory of Open Access Journals (Sweden)

    Haenah Lee

    2012-10-01

    Full Text Available Facial expressions are one of the most important means of nonverbal communication transporting both emotional and conversational content. For investigating this large space of expressions we recently developed a large database containing dynamic emotional and conversational expressions in Germany (the MPI facial expression database). As facial expressions crucially depend on the cultural context, however, a similar resource is needed for studies outside of Germany. Here, we introduce and validate a new, extensive Korean facial expression database containing dynamic emotional and conversational information. Ten individuals performed 62 expressions following a method-acting protocol, in which each person was asked to imagine themselves in one of 62 corresponding everyday scenarios and to react accordingly. To validate this database, we conducted two experiments: 20 participants were asked to name the appropriate expression for each of the 62 everyday scenarios shown as text. Ten additional participants were asked to name each of the 62 expression videos from 10 actors in addition to rating its naturalness. All naming answers were then rated as valid or invalid. Scenario validation yielded 89% valid answers showing that the scenarios are effective in eliciting appropriate expressions. Video sequences were judged as natural with an average of 66% valid answers. This is an excellent result considering that videos were seen without any conversational context and that 62 expressions were to be recognized. These results validate our Korean database and, as they also parallel the German validation results, will enable detailed cross-cultural comparisons of the complex space of emotional and conversational expressions.

  3. Behavioural dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia

    Directory of Open Access Journals (Sweden)

    Roberta Daini

    2014-12-01

    Full Text Available Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus process this facial dimension independently from features (which are impaired in CP) and basic emotional expressions. To this end, we carried out a behavioural study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces, presented in the recognition phase in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed, in which the participants had to detect whether or not the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs’ impairment in the configural processing of the invariant aspects of the face, but also showed a spared configural processing of non-emotional facial expressions (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition

  4. 3D facial expression recognition based on histograms of surface differential quantities

    KAUST Repository

    Li, Huibin

    2011-01-01

    3D face models accurately capture facial surfaces, making precise description of facial activities possible. In this paper, we present a novel mesh-based method for 3D facial expression recognition using two local shape descriptors. To characterize shape information of the local neighborhood of facial landmarks, we calculate the weighted statistical distributions of surface differential quantities, including histogram of mesh gradient (HoG) and histogram of shape index (HoS). A normal-cycle-theory-based curvature estimation method is employed on 3D face models, along with the common cubic fitting curvature estimation method for the purpose of comparison. Based on the basic fact that different expressions involve different local shape deformations, the SVM classifier with both linear and RBF kernels outperforms state-of-the-art results on the subset of the BU-3DFE database with the same experimental setting. © 2011 Springer-Verlag.
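
    The HoS descriptor builds on the Koenderink shape index. Below is a small sketch of that part only, assuming per-vertex principal curvatures k1 >= k2 have already been estimated (curvature estimation itself is the harder step the paper addresses).

        # Sketch of the shape-index half of the descriptor (HoS), under the
        # assumption that principal curvatures are given per vertex.
        import numpy as np

        def shape_index(k1, k2):
            """Koenderink shape index in [-1, 1]; expects k1 >= k2 elementwise."""
            return (2.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

        def hos_descriptor(k1, k2, bins=8):
            """Normalized histogram of shape index over a landmark neighborhood."""
            si = shape_index(np.asarray(k1, float), np.asarray(k2, float))
            hist, _ = np.histogram(si, bins=bins, range=(-1.0, 1.0))
            return hist / max(hist.sum(), 1)

        # Per-landmark histograms would then be concatenated and fed to an SVM.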

  5. Psychopathic traits in adolescents and recognition of emotion in facial expressions

    Directory of Open Access Journals (Sweden)

    Silvio José Lemos Vasconcellos

    2014-12-01

    Full Text Available Recent studies have investigated the ability of adult psychopaths and children with psychopathy traits to identify specific facial expressions of emotion. Conclusive results have not yet been found regarding whether psychopathic traits are associated with a specific deficit in the ability to identify negative emotions such as fear and sadness. This study compared 20 adolescents with psychopathic traits and 21 adolescents without these traits in terms of their ability to recognize facial expressions of emotion, using facial stimuli presented for 200 ms, 500 ms, and 1 s exposures. Analyses indicated significant differences between the two groups' performances only for fear, and only at the 200 ms exposure. This finding is consistent with findings from other studies in the field and suggests that controlling the duration of exposure to affective stimuli in future studies may help to clarify the mechanisms underlying the facial affect recognition deficits of individuals with psychopathic traits.

  6. Amplification of attentional blink by distress-related facial expressions: relationships with alexithymia and affectivity.

    Science.gov (United States)

    Grynberg, Delphine; Vermeulen, Nicolas; Luminet, Olivier

    2014-10-01

    The present studies aimed to analyse the modulatory effect of distressing facial expressions on attention processing. The attentional blink (AB) paradigm is one of the most widely used paradigms for studying temporal attention, and is increasingly applied to study the temporal dynamics of emotion processing. The aims of this study were to investigate how identifying fear and pain facial expressions (Study 1) and fear and anger facial expressions (Study 2) would influence the detection of subsequent stimuli presented within short time intervals, and to assess the moderating influence of alexithymia and affectivity on this effect. It has been suggested that high alexithymia scorers need more attentional resources to process distressing facial expressions and that negative affectivity increases the AB. We showed that fear, anger and pain produced an AB and that alexithymia moderated it such that difficulty in describing feelings (Study 1) and externally oriented thinking (Study 2) were associated with higher interference after the processing of fear and anger at short time presentations. These studies provide evidence that distressing facial expressions modulate the attentional processing at short time intervals and that alexithymia influences the early attentional processing of fear and anger expressions. Controlling for state affect did not change these conclusions. © 2013 International Union of Psychological Science.

  7. Understanding Discrete Facial Expressions in Video Using an Emotion Avatar Image.

    Science.gov (United States)

    Yang, Songfan; Bhanu, Bir

    2012-08-01

    Existing video-based facial expression recognition techniques analyze the geometry-based and appearance-based information in every frame as well as explore the temporal relation among frames. In contrast, we present a new image-based representation, the emotion avatar image (EAI), and an associated reference image, the avatar reference. This representation leverages the out-of-plane head rotation. It is not only robust to outliers but also provides a method to aggregate dynamic information from expressions with various lengths. The approach to facial expression analysis consists of the following steps: 1) face detection; 2) face registration of video frames with the avatar reference to form the EAI representation; 3) computation of features from EAIs using both local binary patterns and local phase quantization; and 4) classification of the features as one of the emotion types by using a linear support vector machine classifier. Our system is tested on the Facial Expression Recognition and Analysis Challenge (FERA2011) data, i.e., the Geneva Multimodal Emotion Portrayal-Facial Expression Recognition and Analysis Challenge (GEMEP-FERA) data set. The experimental results demonstrate that the information captured in an EAI for a facial expression is a very strong cue for emotion inference. Moreover, our method suppresses the person-specific information for emotion and performs well on unseen data.
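
    Steps 3 and 4 can be illustrated with standard tools. The sketch below uses uniform-LBP histograms and a linear SVM from scikit-image/scikit-learn and omits the LPQ half of the feature; `eai_images` and `labels` are hypothetical placeholders, not the paper's data.

        # Sketch of steps 3-4 under stated assumptions: LBP histograms as the
        # EAI feature (LPQ omitted), classified by a linear SVM.
        import numpy as np
        from skimage.feature import local_binary_pattern
        from sklearn.svm import LinearSVC

        def lbp_histogram(eai, points=8, radius=1):
            """Normalized uniform-LBP histogram of one emotion avatar image."""
            lbp = local_binary_pattern(eai, points, radius, method="uniform")
            n_bins = points + 2        # uniform codes plus one non-uniform bin
            hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins))
            return hist / hist.sum()

        # X = np.array([lbp_histogram(e) for e in eai_images])
        # clf = LinearSVC().fit(X, labels)   # one emotion label per EAI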

  8. Automated Facial Expression Recognition Using Gradient-Based Ternary Texture Patterns

    Directory of Open Access Journals (Sweden)

    Faisal Ahmed

    2013-01-01

    Full Text Available Recognition of human expression from facial images is an interesting research area, which has received increasing attention in recent years. A robust and effective facial feature descriptor is the key to designing a successful expression recognition system. Although much progress has been made, deriving a face feature descriptor that can perform consistently under changing environments is still a difficult and challenging task. In this paper, we present the gradient local ternary pattern (GLTP), a discriminative local texture feature for representing facial expression. The proposed GLTP operator encodes the local texture of an image by computing the gradient magnitudes of the local neighborhood and quantizing those values into three discrimination levels. The location and occurrence information of the resulting micropatterns is then used as the face feature descriptor. The performance of the proposed method has been evaluated for the person-independent face expression recognition task. Experiments with prototypic expression images from the Cohn-Kanade (CK) face expression database validate that the GLTP feature descriptor can effectively encode the facial texture and thus achieves better recognition performance than some well-known appearance-based facial features.
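
    A rough reading of the GLTP idea (my interpretation of the abstract, not the authors' exact operator): each neighbor's gradient magnitude is compared to the center's plus or minus a threshold t, and the resulting ternary code is split into two binary maps whose histograms form the descriptor.

        # Sketch of gradient-based ternary coding; the Sobel operator and the
        # default threshold are assumptions.
        import numpy as np
        from scipy import ndimage

        def gltp_maps(image, t=10.0):
            gx = ndimage.sobel(image.astype(float), axis=1)
            gy = ndimage.sobel(image.astype(float), axis=0)
            g = np.hypot(gx, gy)                      # gradient magnitude map
            h, w = g.shape
            offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                       (1, 1), (1, 0), (1, -1), (0, -1)]
            center = g[1:-1, 1:-1]
            upper = np.zeros((h - 2, w - 2), dtype=np.uint8)  # codes +1
            lower = np.zeros((h - 2, w - 2), dtype=np.uint8)  # codes -1
            for bit, (dy, dx) in enumerate(offsets):
                nbr = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                upper |= (nbr > center + t).astype(np.uint8) << bit
                lower |= (nbr < center - t).astype(np.uint8) << bit
            return upper, lower   # histogram both maps to form the descriptor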

  9. Long-term academic stress enhances early processing of facial expressions.

    Science.gov (United States)

    Zhang, Liang; Qin, Shaozheng; Yao, Zhuxi; Zhang, Kan; Wu, Jianhui

    2016-11-01

    Exposure to long-term stress can lead to a variety of emotional and behavioral problems. Although widely investigated, the neural basis of how long-term stress impacts emotional processing in humans remains largely elusive. Using event-related brain potentials (ERPs), we investigated the effects of long-term stress on the neural dynamics of emotional facial expression processing. Thirty-nine male college students undergoing preparation for a major examination and twenty-one matched controls performed a gender discrimination task for faces displaying angry, happy, and neutral expressions. The results of the Perceived Stress Scale showed that participants in the stress group perceived higher levels of long-term stress relative to the control group. ERP analyses revealed differential effects of long-term stress on two early stages of facial expression processing: 1) long-term stress generally augmented posterior P1 amplitudes to facial stimuli irrespective of expression valence, suggesting that stress can increase sensitization to visual inputs in general, and 2) long-term stress selectively augmented fronto-central P2 amplitudes for angry but not for neutral or positive facial expressions, suggesting that stress may lead to increased attentional prioritization to processing negative emotional stimuli. Together, our findings suggest that long-term stress has profound impacts on the early stages of facial expression processing, with an increase at the very early stage of general information inputs and a subsequent attentional bias toward processing emotionally negative stimuli. Copyright © 2016 Elsevier B.V. All rights reserved.
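
    The dependent measure here is a windowed ERP amplitude. Below is a minimal sketch of that computation with an assumed data layout, not the authors' pipeline.

        # Mean ERP amplitude in a post-stimulus window (e.g., a posterior P1).
        import numpy as np

        def mean_amplitude(epochs, times, window=(0.08, 0.13)):
            """epochs: (n_trials, n_samples) in microvolts at one electrode;
            times: (n_samples,) in seconds. Returns mean amplitude in window."""
            mask = (times >= window[0]) & (times <= window[1])
            erp = epochs.mean(axis=0)       # average across trials -> ERP
            return erp[mask].mean()         # mean amplitude in the window

        # A group comparison would then contrast, e.g.,
        # mean_amplitude(stress_epochs, times) vs. mean_amplitude(control_epochs, times).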

  10. Perceptual and affective mechanisms in facial expression recognition: An integrative review.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri

    2016-09-01

    Facial expressions of emotion involve a physical component of morphological changes in a face and an affective component conveying information about the expresser's internal feelings. It remains unresolved how much recognition and discrimination of expressions rely on the perception of morphological patterns or the processing of affective content. This review of research on the role of visual and emotional factors in expression recognition reached three major conclusions. First, behavioral, neurophysiological, and computational measures indicate that basic expressions are reliably recognized and discriminated from one another, albeit the effect may be inflated by the use of prototypical expression stimuli and forced-choice responses. Second, affective content along the dimensions of valence and arousal is extracted early from facial expressions, although this coarse affective representation contributes minimally to categorical recognition of specific expressions. Third, the physical configuration and visual saliency of facial features contribute significantly to expression recognition, with "emotionless" computational models being able to reproduce some of the basic phenomena demonstrated in human observers. We conclude that facial expression recognition, as it has been investigated in conventional laboratory tasks, depends to a greater extent on perceptual than affective information and mechanisms.

  11. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors

    Directory of Open Access Journals (Sweden)

    Martina Ardizzi

    2017-06-01

    Full Text Available One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, normally allowing the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus unbalancing victims’ recognition of negative emotions. Despite the plethora of studies on this topic, to date, it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55, SE 0.30; years of schooling 8.07 years, SE 0.30; 15 males) performed a forced-choice chimeric facial expressions recognition task. The chimeric facial expressions were obtained pairing upper and lower half faces of two different negative emotions (selected from anger, fear and sadness) for a total of six different combinations. Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger. In this case, differently from controls, Ebola virus disease survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims’ performance in the recognition of the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify

  12. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors.

    Science.gov (United States)

    Ardizzi, Martina; Evangelista, Valentina; Ferroni, Francesca; Umiltà, Maria A; Ravera, Roberto; Gallese, Vittorio

    2017-01-01

    One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, normally allowing the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus unbalancing victims' recognition of negative emotions. Despite the plethora of studies on this topic, to date, it is not clear whether this phenomenon reflects an overall response tendency toward anger recognition or a selective proneness to the salience of specific facial expressive cues of anger after trauma exposure. To address this issue, a group of underage Sierra Leonean Ebola virus disease survivors (mean age 15.40 years, SE 0.35; years of schooling 8.8 years, SE 0.46; 14 males) and a control group (mean age 14.55, SE 0.30; years of schooling 8.07 years, SE 0.30, 15 males) performed a forced-choice chimeric facial expressions recognition task. The chimeric facial expressions were obtained pairing upper and lower half faces of two different negative emotions (selected from anger, fear and sadness for a total of six different combinations). Overall, results showed that upper facial expressive cues were more salient than lower facial expressive cues. This priority was lost among Ebola virus disease survivors for the chimeric facial expressions of anger. In this case, differently from controls, Ebola virus disease survivors recognized anger regardless of the upper or lower position of the facial expressive cues of this emotion. The present results demonstrate that victims' performance in the recognition of the facial expression of anger does not reflect an overall response tendency toward anger recognition, but rather the specific greater salience of facial expressive cues of anger. Furthermore, the present results show that traumatic experiences deeply modify the perceptual

  13. Experience-based human perception of facial expressions in Barbary macaques (Macaca sylvanus).

    Science.gov (United States)

    Maréchal, Laëtitia; Levy, Xandria; Meints, Kerstin; Majolo, Bonaventura

    2017-01-01

    Facial expressions convey key cues of human emotions, and may also be important for interspecies interactions. The universality hypothesis suggests that six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) should be expressed by similar facial expressions in close phylogenetic species such as humans and nonhuman primates. However, some facial expressions have been shown to differ in meaning between humans and nonhuman primates like macaques. This ambiguity in signalling emotion can lead to an increased risk of aggression and injuries for both humans and animals. This raises serious concerns for activities such as wildlife tourism where humans closely interact with wild animals. Understanding what factors (i.e., experience and type of emotion) affect ability to recognise emotional state of nonhuman primates, based on their facial expressions, can enable us to test the validity of the universality hypothesis, as well as reduce the risk of aggression and potential injuries in wildlife tourism. The present study investigated whether different levels of experience of Barbary macaques, Macaca sylvanus, affect the ability to correctly assess different facial expressions related to aggressive, distressed, friendly or neutral states, using an online questionnaire. Participants' level of experience was defined as either: (1) naïve: never worked with nonhuman primates and never or rarely encountered live Barbary macaques; (2) exposed: shown pictures of the different Barbary macaques' facial expressions along with the description and the corresponding emotion prior to undertaking the questionnaire; (3) expert: worked with Barbary macaques for at least two months. Experience with Barbary macaques was associated with better performance in judging their emotional state. Simple exposure to pictures of macaques' facial expressions improved the ability of inexperienced participants to better discriminate neutral and distressed faces, and a trend was found for

  14. Experience-based human perception of facial expressions in Barbary macaques (Macaca sylvanus

    Directory of Open Access Journals (Sweden)

    Laëtitia Maréchal

    2017-06-01

    Full Text Available Background Facial expressions convey key cues of human emotions, and may also be important for interspecies interactions. The universality hypothesis suggests that six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) should be expressed by similar facial expressions in close phylogenetic species such as humans and nonhuman primates. However, some facial expressions have been shown to differ in meaning between humans and nonhuman primates like macaques. This ambiguity in signalling emotion can lead to an increased risk of aggression and injuries for both humans and animals. This raises serious concerns for activities such as wildlife tourism where humans closely interact with wild animals. Understanding what factors (i.e., experience and type of emotion) affect ability to recognise emotional state of nonhuman primates, based on their facial expressions, can enable us to test the validity of the universality hypothesis, as well as reduce the risk of aggression and potential injuries in wildlife tourism. Methods The present study investigated whether different levels of experience of Barbary macaques, Macaca sylvanus, affect the ability to correctly assess different facial expressions related to aggressive, distressed, friendly or neutral states, using an online questionnaire. Participants’ level of experience was defined as either: (1) naïve: never worked with nonhuman primates and never or rarely encountered live Barbary macaques; (2) exposed: shown pictures of the different Barbary macaques’ facial expressions along with the description and the corresponding emotion prior to undertaking the questionnaire; (3) expert: worked with Barbary macaques for at least two months. Results Experience with Barbary macaques was associated with better performance in judging their emotional state. Simple exposure to pictures of macaques’ facial expressions improved the ability of inexperienced participants to better discriminate neutral

  15. Recognition of Facial Expressions of Different Emotional Intensities in Patients with Frontotemporal Lobar Degeneration

    Directory of Open Access Journals (Sweden)

    Roy P. C. Kessels

    2007-01-01

    Full Text Available Behavioural problems are a key feature of frontotemporal lobar degeneration (FTLD). Also, FTLD patients show impairments in emotion processing. Specifically, the perception of negative emotional facial expressions is affected. Generally, however, negative emotional expressions are regarded as more difficult to recognize than positive ones, which thus may have been a confounding factor in previous studies. Also, ceiling effects are often present on emotion recognition tasks using full-blown emotional facial expressions. In the present study with FTLD patients, we examined the perception of sadness, anger, fear, happiness, surprise and disgust at different emotional intensities on morphed facial expressions to take task difficulty into account. Results showed that our FTLD patients were specifically impaired at the recognition of the emotion anger. Also, the patients performed worse than the controls on recognition of surprise, but performed at control levels on disgust, happiness, sadness and fear. These findings corroborate and extend previous results showing deficits in emotion perception in FTLD.

  16. Sex differences in the perception of affective facial expressions: do men really lack emotional sensitivity?

    Science.gov (United States)

    Montagne, Barbara; Kessels, Roy P C; Frigerio, Elisa; de Haan, Edward H F; Perrett, David I

    2005-06-01

    There is evidence that men and women display differences in both cognitive and affective functions. Recent studies have examined the processing of emotions in males and females. However, the findings are inconclusive, possibly the result of methodological differences. The aim of this study was to investigate the perception of emotional facial expressions in men and women. Video clips of neutral faces, gradually morphing into full-blown expressions, were used. By doing this, we were able to examine both the accuracy and the sensitivity in labelling emotional facial expressions. Furthermore, all participants completed an anxiety and a depression rating scale. Research participants were 40 female students and 28 male students. Results revealed that men were less accurate, as well as less sensitive, in labelling facial expressions. Thus, men showed an overall worse performance than women on a task measuring the processing of emotional faces. This result is discussed in relation to recent findings.

  17. Feature Extraction for Facial Expression Recognition based on Hybrid Face Regions

    Directory of Open Access Journals (Sweden)

    LAJEVARDI, S.M.

    2009-10-01

    Full Text Available Facial expression recognition has numerous applications, including psychological research, improved human computer interaction, and sign language translation. A novel facial expression recognition system based on hybrid face regions (HFR) is investigated. The expression recognition system is fully automatic, and consists of the following modules: face detection, face region detection, feature extraction, optimal feature selection, and classification. The features are extracted from both the whole face image and face regions (eyes and mouth) using log Gabor filters. Then, the most discriminative features are selected based on mutual information criteria. The system can automatically recognize six expressions: anger, disgust, fear, happiness, sadness and surprise. The selected features are classified using the Naive Bayesian (NB) classifier. The proposed method has been extensively assessed using the Cohn-Kanade and JAFFE databases. The experiments have highlighted the efficiency of the proposed HFR method in enhancing the classification rate.
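
    The three named building blocks are standard and can be sketched with common tools. The snippet below shows an illustrative radial log-Gabor filter plus mutual-information feature selection and Naive Bayes via scikit-learn; all parameter values are assumptions, and `X` (features per face) and `y` (expression labels) are hypothetical placeholders.

        # Illustrative versions of the named building blocks, not the paper's code.
        import numpy as np
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.naive_bayes import GaussianNB

        def log_gabor(size, f0=0.1, sigma_ratio=0.55):
            """Radial log-Gabor transfer function on a size x size frequency grid."""
            fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size),
                                 indexing="ij")
            radius = np.hypot(fx, fy)
            radius[0, 0] = 1.0    # placeholder at DC to avoid log(0)
            lg = np.exp(-np.log(radius / f0) ** 2 / (2 * np.log(sigma_ratio) ** 2))
            lg[0, 0] = 0.0        # log-Gabor filters have no DC response
            return lg

        def filter_magnitude(image, lg):
            """Magnitude response of an image under the frequency-domain filter."""
            return np.abs(np.fft.ifft2(np.fft.fft2(image) * lg))

        # selector = SelectKBest(mutual_info_classif, k=200).fit(X, y)
        # clf = GaussianNB().fit(selector.transform(X), y)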

  18. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition.

    Science.gov (United States)

    de Gelder, Beatrice; Huis In 't Veld, Elisabeth M J; Van den Stock, Jan

    2015-01-01

    There are many ways to assess face perception skills. In this study, we describe a novel task battery FEAST (Facial Expressive Action Stimulus Test) developed to test recognition of identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data of a healthy sample of controls in two age groups for future users of the FEAST.

  19. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition

    Science.gov (United States)

    de Gelder, Beatrice; Huis in ‘t Veld, Elisabeth M. J.; Van den Stock, Jan

    2015-01-01

    There are many ways to assess face perception skills. In this study, we describe a novel task battery FEAST (Facial Expressive Action Stimulus Test) developed to test recognition of identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data of a healthy sample of controls in two age groups for future users of the FEAST. PMID:26579004

  20. Personality Trait and Facial Expression Filter-Based Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Seongah Chin

    2013-02-01

    Full Text Available In this paper, we present technical approaches that bridge the gap in the research related to the use of brain-computer interfaces for entertainment and facial expressions. Such facial expressions that reflect an individual's personal traits can be used to better realize artificial facial expressions in a gaming environment based on a brain-computer interface. First, an emotion extraction filter is introduced in order to classify emotions on the basis of the users' brain signals in real time. Next, a personality trait filter is defined to classify extrovert and introvert types, which manifest at five levels: very extrovert, extrovert, medium, introvert, and very introvert. In addition, facial expressions derived from expression rates are obtained by an extrovert-introvert fuzzy model through its defuzzification process. Finally, we validate the approach via an analysis of variance of the personality trait filter, a k-fold cross-validation of the emotion extraction filter, an accuracy analysis, a user study of facial synthesis, and a test-case game.
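
    The abstract does not specify the fuzzy model, so the following is a generic sketch of how five trait levels could be encoded as triangular memberships and defuzzified by centroid; the membership shapes and the 0-1 scale are assumptions, not the paper's design.

        # Generic five-level fuzzy step with centroid defuzzification (assumed).
        import numpy as np

        def tri(x, a, b, c):
            """Triangular membership: rises at a, peaks at b, falls at c."""
            return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

        def expression_rate(trait_score):
            """Map a trait score in [0, 1] to an expression rate via centroid."""
            x = np.linspace(0.0, 1.0, 501)
            levels = [tri(x, -0.25, 0.0, 0.25),   # very introvert
                      tri(x, 0.0, 0.25, 0.5),     # introvert
                      tri(x, 0.25, 0.5, 0.75),    # medium
                      tri(x, 0.5, 0.75, 1.0),     # extrovert
                      tri(x, 0.75, 1.0, 1.25)]    # very extrovert
            agg = np.zeros_like(x)
            for mu in levels:                     # clip each set by its firing level
                firing = np.interp(trait_score, x, mu)
                agg = np.maximum(agg, np.minimum(mu, firing))
            return (x * agg).sum() / max(agg.sum(), 1e-9)   # centroid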

  1. Facial EMG responses to emotional expressions are related to emotion perception ability.

    Directory of Open Access Journals (Sweden)

    Janina Künecke

    Full Text Available Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a "reactivation" of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses - recorded with electromyogram (EMG) - in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.

  2. Attentional avoidance of fearful facial expressions following early life stress is associated with impaired social functioning.

    Science.gov (United States)

    Humphreys, Kathryn L; Kircanski, Katharina; Colich, Natalie L; Gotlib, Ian H

    2016-10-01

    Early life stress is associated with poorer social functioning. Attentional biases in response to threat-related cues, linked to both early experience and psychopathology, may explain this association. To date, however, no study has examined attentional biases to fearful facial expressions as a function of early life stress or examined these biases as a potential mediator of the relation between early life stress and social problems. In a sample of 154 children (ages 9-13 years) we examined the associations among interpersonal early life stressors (i.e., birth through age 6 years), attentional biases to emotional facial expressions using a dot-probe task, and social functioning on the Child Behavior Checklist. High levels of early life stress were associated with both greater levels of social problems and an attentional bias away from fearful facial expressions, even after accounting for stressors occurring in later childhood. No biases were found for happy or sad facial expressions as a function of early life stress. Finally, attentional biases to fearful faces mediated the association between early life stress and social problems. Attentional avoidance of fearful facial expressions, evidenced by a bias away from these stimuli, may be a developmental response to early adversity and link the experience of early life stress to poorer social functioning. © 2016 Association for Child and Adolescent Mental Health.
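
    The dot-probe bias score is conventionally computed as the reaction-time difference between incongruent and congruent trials; a minimal sketch follows (the study's exact scoring may differ).

        # Conventional dot-probe bias score. Positive values indicate vigilance
        # toward fearful faces, negative values indicate avoidance.
        import numpy as np

        def attentional_bias(rt_incongruent, rt_congruent):
            """Mean RT when the probe replaces the neutral face (incongruent)
            minus mean RT when it replaces the fearful face (congruent), in ms."""
            return np.mean(rt_incongruent) - np.mean(rt_congruent)

        print(attentional_bias([520, 540, 515], [495, 500, 505]))  # > 0: vigilance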

  3. Deficits in recognizing disgust facial expressions and Internet addiction: Perceived stress as a mediator.

    Science.gov (United States)

    Chen, Zhongting; Poon, Kai-Tak; Cheng, Cecilia

    2017-08-01

    Studies have examined social maladjustment among individuals with Internet addiction, but little is known about their deficits in specific social skills and the underlying psychological mechanisms. The present study filled these gaps by (a) establishing a relationship between deficits in facial expression recognition and Internet addiction, and (b) examining the mediating role of perceived stress that explains this hypothesized relationship. Ninety-seven participants completed validated questionnaires that assessed their levels of Internet addiction and perceived stress, and performed a computer-based task that measured their facial expression recognition. The results revealed a positive relationship between deficits in recognizing disgust facial expressions and Internet addiction, and this relationship was mediated by perceived stress. However, the same findings did not apply to other facial expressions. Ad hoc analyses showed that recognizing disgust was more difficult than recognizing other facial expressions, reflecting that the former task assesses a social skill that requires cognitive astuteness. The present findings contribute to the literature by identifying a specific social skill deficit related to Internet addiction and by unveiling a psychological mechanism that explains this relationship, thus providing more concrete guidelines for practitioners to strengthen specific social skills that mitigate both perceived stress and Internet addiction. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
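
    Mediation of this kind is often tested with a bootstrap of the indirect effect a*b. Below is a generic sketch under that assumption; the variable names are placeholders, not the study's data or exact method.

        # Percentile-bootstrap test of the indirect effect a*b (generic sketch).
        import numpy as np

        def indirect_effect(x, m, y):
            """a = slope of m on x; b = slope of y on m controlling for x."""
            a = np.polyfit(x, m, 1)[0]
            design = np.column_stack([np.ones_like(x), x, m])
            b = np.linalg.lstsq(design, y, rcond=None)[0][2]
            return a * b

        def bootstrap_ci(x, m, y, n_boot=5000, seed=0):
            """x, m, y: 1-D arrays (predictor, mediator, outcome)."""
            rng = np.random.default_rng(seed)
            n, effects = len(x), np.empty(n_boot)
            for i in range(n_boot):
                idx = rng.integers(0, n, n)      # resample cases with replacement
                effects[i] = indirect_effect(x[idx], m[idx], y[idx])
            return np.percentile(effects, [2.5, 97.5])  # CI excluding 0 -> mediation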

  4. Facial EMG responses to emotional expressions are related to emotion perception ability.

    Science.gov (United States)

    Künecke, Janina; Hildebrandt, Andrea; Recio, Guillermo; Sommer, Werner; Wilhelm, Oliver

    2014-01-01

    Although most people can identify facial expressions of emotions well, they still differ in this ability. According to embodied simulation theories, understanding emotions of others is fostered by involuntarily mimicking the perceived expressions, causing a "reactivation" of the corresponding mental state. Some studies suggest automatic facial mimicry during expression viewing; however, findings on the relationship between mimicry and emotion perception abilities are equivocal. The present study investigated individual differences in emotion perception and its relationship to facial muscle responses - recorded with electromyogram (EMG) - in response to emotional facial expressions. N = 269 participants completed multiple tasks measuring face and emotion perception. EMG recordings were taken from a subsample (N = 110) in an independent emotion classification task of short videos displaying six emotions. Confirmatory factor analyses of the m. corrugator supercilii in response to angry, happy, sad, and neutral expressions showed that individual differences in corrugator activity can be separated into a general response to all faces and an emotion-related response. Structural equation modeling revealed a substantial relationship between the emotion-related response and emotion perception ability, providing evidence for the role of facial muscle activation in emotion perception from an individual differences perspective.

  5. Interaction between trait anxiety and trait anger predict amygdala reactivity to angry facial expressions in men but not women

    National Research Council Canada - National Science Library

    Carré, Justin M; Fisher, Patrick M; Manuck, Stephen B; Hariri, Ahmad R

    2012-01-01

    .... Here, we report the novel finding that individual differences in trait anger are positively correlated with bilateral dorsal amygdala reactivity to angry facial expressions, but only among men...

  6. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors

    National Research Council Canada - National Science Library

    Martina Ardizzi; Valentina Evangelista; Francesca Ferroni; Maria A. Umiltà; Roberto Ravera; Vittorio Gallese

    2017-01-01

    .... Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, normally allowing the recruitment of adequate behavioral responses to environmental threats...

  7. The Development of Dynamic Facial Expression Recognition at Different Intensities in 4- to 18-Year-Olds

    Science.gov (United States)

    Montirosso, Rosario; Peverelli, Milena; Frigerio, Elisa; Crespi, Monica; Borgatti, Renato

    2010-01-01

    The primary purpose of this study was to examine the effect of the intensity of emotion expression on children's developing ability to label emotion during a dynamic presentation of five facial expressions (anger, disgust, fear, happiness, and sadness). A computerized task (AFFECT--animated full facial expression comprehension test) was used to…

  8. Recognition of Facial Expressions of Emotion in Adults with Down Syndrome

    Science.gov (United States)

    Virji-Babul, Naznin; Watt, Kimberley; Nathoo, Farouk; Johnson, Peter

    2012-01-01

    Research on facial expressions in individuals with Down syndrome (DS) has been conducted using photographs. Our goal was to examine the effect of motion on perception of emotional expressions. Adults with DS, adults with typical development matched for chronological age (CA), and children with typical development matched for developmental age (DA)…

  9. Discrimination of Facial Expression by 5-Month-Old Infants of Nondepressed and Clinically Depressed Mothers

    OpenAIRE

    Bornstein, Marc H.; Arterberry, Martha; Mash, Clay; Manian, Nanmathi

    2010-01-01

    Five-month-old infants of nondepressed and clinically depressed mothers were habituated to either a face with a neutral expression or the same face with a smile. Infants of nondepressed mothers subsequently discriminated between neutral and smiling facial expressions, whereas infants of clinically depressed mothers failed to make the same discrimination.

  10. The Role of Facial Expressions in Attention-Orienting in Adults and Infants

    Science.gov (United States)

    Rigato, Silvia; Menon, Enrica; Di Gangi, Valentina; George, Nathalie; Farroni, Teresa

    2013-01-01

    Faces convey many signals (i.e., gaze or expressions) essential for interpersonal interaction. We have previously shown that facial expressions of emotion and gaze direction are processed and integrated in specific combinations early in life. These findings open a number of developmental questions and specifically in this paper we address whether…

  11. Strategies for Perceiving Facial Expressions in Adults with Autism Spectrum Disorder

    Science.gov (United States)

    Walsh, Jennifer A.; Vida, Mark D.; Rutherford, M. D.

    2014-01-01

    Rutherford and McIntosh (J Autism Dev Disord 37:187-196, 2007) demonstrated that individuals with autism spectrum disorder (ASD) are more tolerant than controls of exaggerated schematic facial expressions, suggesting that they may use an alternative strategy when processing emotional expressions. The current study was designed to test this finding…

  12. Positive, but Not Negative, Facial Expressions Facilitate 3-Month-Olds' Recognition of an Individual Face

    Science.gov (United States)

    Brenna, Viola; Proietti, Valentina; Montirosso, Rosario; Turati, Chiara

    2013-01-01

    The current study examined whether and how the presence of a positive or a negative emotional expression may affect the face recognition process at 3 months of age. Using a familiarization procedure, Experiment 1 demonstrated that positive (i.e., happiness), but not negative (i.e., fear and anger) facial expressions facilitate infants' ability to…

  13. A novel dataset for real-life evaluation of facial expression recognition methodologies

    NARCIS (Netherlands)

    Siddiqi, Muhammad Hameed; Ali, Maqbool; Idris, Muhammad; Banos Legran, Oresti; Lee, Sungyoung; Choo, Hyunseung

    2016-01-01

    One limitation seen among most of the previous methods is that they were evaluated under settings that are far from real-life scenarios. The reason is that the existing facial expression recognition (FER) datasets are mostly pose-based and assume a predefined setup. The expressions in these datasets

  14. Dynamic and Static Facial Expressions Decoded from Motion-Sensitive Areas in the Macaque Monkey

    Science.gov (United States)

    Furl, Nicholas; Hadj-Bouziane, Fadila; Liu, Ning; Averbeck, Bruno B.; Ungerleider, Leslie G.

    2012-01-01

    Humans adeptly use visual motion to recognize socially-relevant facial information. The macaque provides a model visual system for studying neural coding of expression movements, as its superior temporal sulcus (STS) possesses brain areas selective for faces and areas sensitive to visual motion. We employed functional magnetic resonance imaging and facial stimuli to localize motion-sensitive areas (Mf areas), which responded more to dynamic faces compared to static faces, and face-selective areas, which responded selectively to faces compared to objects and places. Using multivariate analysis, we found that information about both dynamic and static facial expressions could be robustly decoded from Mf areas. By contrast, face-selective areas exhibited relatively less facial expression information. Classifiers trained with expressions from one motion type (dynamic or static) showed poor generalization to the other motion type, suggesting that Mf areas employ separate and non-confusable neural codes for dynamic and static presentations of the same expressions. We also show that some of the motion sensitivity elicited by facial stimuli was not specific to faces but could also be elicited by moving dots, particularly in FST and STPm/LST, confirming their already well-established low-level motion sensitivity. A different pattern was found in anterior STS, which responded more to dynamic than static faces but was not sensitive to dot motion. Overall, we show that emotional expressions are mostly represented outside of face-selective cortex, in areas sensitive to motion. These regions may play a fundamental role in enhancing recognition of facial expression despite the complex stimulus changes associated with motion. PMID:23136433
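
    The cross-decoding logic described (train on one motion type, test on the other) reduces to a few lines with any linear classifier. A sketch with assumed data shapes, not the authors' code:

        # Train an expression classifier on dynamic-face response patterns,
        # test it on static-face patterns, and vice versa.
        from sklearn.svm import LinearSVC

        def cross_decode(X_dyn, y_dyn, X_sta, y_sta):
            """X_*: (n_trials, n_voxels) response patterns; y_*: expression labels.
            Returns (dynamic->static, static->dynamic) accuracies."""
            acc_ds = LinearSVC().fit(X_dyn, y_dyn).score(X_sta, y_sta)
            acc_sd = LinearSVC().fit(X_sta, y_sta).score(X_dyn, y_dyn)
            return acc_ds, acc_sd

        # Near-chance accuracy in both directions would suggest separate,
        # non-confusable codes for dynamic and static expressions.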

  15. Unconscious Processing of Facial Expressions in Individuals with Internet Gaming Disorder

    Directory of Open Access Journals (Sweden)

    Xiaozhe Peng

    2017-06-01

    Full Text Available Internet Gaming Disorder (IGD) is characterized by impairments in social communication and the avoidance of social contact. Facial expression processing is the basis of social communication. However, few studies have investigated how individuals with IGD process facial expressions, and whether they have deficits in emotional facial processing remains unclear. The aim of the present study was to explore these two issues by investigating the time course of emotional facial processing in individuals with IGD. A backward masking task was used to investigate the differences between individuals with IGD and normal controls (NC) in the processing of subliminally presented facial expressions (sad, happy, and neutral) with event-related potentials (ERPs). The behavioral results showed that individuals with IGD are slower than NC in response to both sad and neutral expressions in the sad–neutral context. The ERP results showed that individuals with IGD exhibit decreased amplitudes in ERP component N170 (an index of early face processing) in response to neutral expressions compared to happy expressions in the happy–neutral expressions context, which might be due to their expectancies for positive emotional content. The NC, on the other hand, exhibited comparable N170 amplitudes in response to both happy and neutral expressions in the happy–neutral expressions context, as well as sad and neutral expressions in the sad–neutral expressions context. Both individuals with IGD and NC showed comparable ERP amplitudes during the processing of sad expressions and neutral expressions. The present study revealed that individuals with IGD have different unconscious neutral facial processing patterns compared with normal individuals and suggested that individuals with IGD may expect more positive emotion in the happy–neutral expressions context. Highlights: • The present study investigated whether the unconscious processing of facial expressions is influenced by

  16. Unconscious Processing of Facial Expressions in Individuals with Internet Gaming Disorder.

    Science.gov (United States)

    Peng, Xiaozhe; Cui, Fang; Wang, Ting; Jiao, Can

    2017-01-01

    Internet Gaming Disorder (IGD) is characterized by impairments in social communication and the avoidance of social contact. Facial expression processing is the basis of social communication. However, few studies have investigated how individuals with IGD process facial expressions, and whether they have deficits in emotional facial processing remains unclear. The aim of the present study was to explore these two issues by investigating the time course of emotional facial processing in individuals with IGD. A backward masking task was used to investigate the differences between individuals with IGD and normal controls (NC) in the processing of subliminally presented facial expressions (sad, happy, and neutral) with event-related potentials (ERPs). The behavioral results showed that individuals with IGD are slower than NC in response to both sad and neutral expressions in the sad-neutral context. The ERP results showed that individuals with IGD exhibit decreased amplitudes in ERP component N170 (an index of early face processing) in response to neutral expressions compared to happy expressions in the happy-neutral expressions context, which might be due to their expectancies for positive emotional content. The NC, on the other hand, exhibited comparable N170 amplitudes in response to both happy and neutral expressions in the happy-neutral expressions context, as well as sad and neutral expressions in the sad-neutral expressions context. Both individuals with IGD and NC showed comparable ERP amplitudes during the processing of sad expressions and neutral expressions. The present study revealed that individuals with IGD have different unconscious neutral facial processing patterns compared with normal individuals and suggested that individuals with IGD may expect more positive emotion in the happy-neutral expressions context. • The present study investigated whether the unconscious processing of facial expressions is influenced by excessive online gaming. A validated

  17. Putting the face in context: Body expressions impact facial emotion processing in human infants

    Directory of Open Access Journals (Sweden)

    Purva Rajhans

    2016-06-01

    Full Text Available Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs. We primed infants with body postures (fearful, happy that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception.

  18. Putting the face in context: Body expressions impact facial emotion processing in human infants.

    Science.gov (United States)

    Rajhans, Purva; Jessen, Sarah; Missana, Manuela; Grossmann, Tobias

    2016-06-01

    Body expressions exert strong contextual effects on facial emotion perception in adults. Specifically, conflicting body cues hamper the recognition of emotion from faces, as evident on both the behavioral and neural level. We examined the developmental origins of the neural processes involved in emotion perception across body and face in 8-month-old infants by measuring event-related brain potentials (ERPs). We primed infants with body postures (fearful, happy) that were followed by either congruent or incongruent facial expressions. Our results revealed that body expressions impact facial emotion processing and that incongruent body cues impair the neural discrimination of emotional facial expressions. Priming effects were associated with attentional and recognition memory processes, as reflected in a modulation of the Nc and Pc evoked at anterior electrodes. These findings demonstrate that 8-month-old infants possess neural mechanisms that allow for the integration of emotion across body and face, providing evidence for the early developmental emergence of context-sensitive facial emotion perception. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. The processing of facial identity and expression is interactive, but dependent on task and experience

    Directory of Open Access Journals (Sweden)

    Alla Yankouskaya

    2014-11-01

    Full Text Available Facial identity and emotional expressions are two important sources of information for daily social interaction. However, the link between these two aspects of face processing has been the focus of debate for the past two decades. Three polar views have been advocated: (1) there is separate and parallel processing of identity and emotional expression signals derived from faces; (2) there is asymmetric processing with the computation of emotion in faces depending on facial identity coding but not vice versa; and (3) there is integrated processing of facial identity and emotion. Here we present studies primarily using methods from mathematical psychology that formally provide a direct test of the relations between the processing of facial identity and emotion. We focus on the ‘Garner’ paradigm, the composite face effect, and the divided attention task. We further ask whether the architecture of face-related processes is fixed or flexible and whether it can be shaped by experience. We conclude that formal methods of testing the relations between processes show that the processing of facial identity and expressions interact, and hence are not fully independent. We further demonstrate that the architecture of the relations depends on experience; where experience leads to a higher degree of inter-dependence in the processing of identity and expressions.

  20. Singing emotionally: A study of pre-production, production, and post-production facial expressions

    Directory of Open Access Journals (Sweden)

    Lena Rachel Quinto

    2014-04-01

    Full Text Available Singing involves vocal production accompanied by a dynamic and meaningful use of facial expressions, which may serve as ancillary gestures that complement, disambiguate, or reinforce the acoustic signal. In this investigation, we examined the use of facial movements to communicate emotion, focusing on movements arising in three epochs: before vocalisation (pre-production), during vocalisation (production), and immediately after vocalisation (post-production). The stimuli were recordings of seven vocalists’ facial movements as they sang short (14-syllable) melodic phrases with the intention of communicating happiness, sadness, irritation, or no emotion. Facial movements were presented as point-light displays to 16 observers who judged the emotion conveyed. Experiment 1 revealed that the accuracy of emotional judgement varied with singer, emotion, and epoch. Accuracy was highest in the production epoch; however, happiness was well communicated in the pre-production epoch. In Experiment 2, observers judged point-light displays of exaggerated movements. The ratings suggested that the extent of facial and head movements is largely perceived as a gauge of emotional arousal. In Experiment 3, observers rated point-light displays of scrambled movements. Configural information was removed in these stimuli, but velocity and acceleration were retained. Exaggerated scrambled movements were likely to be associated with happiness or irritation, whereas unexaggerated scrambled movements were more likely to be identified as neutral. An analysis of the motions of singers revealed systematic changes in facial movement as a function of the emotional intentions of singers. The findings confirm the central role of facial expressions in vocal emotional communication, and highlight individual differences between singers in the amount and intelligibility of facial movements made before, during, and after vocalization.

  1. Intelligent Avatar on E-Learning Using Facial Expression and Haptic

    Directory of Open Access Journals (Sweden)

    Ahmad Hoirul Basori

    2011-04-01

    Full Text Available The process of introducing emotion can be improved through a three-dimensional (3D) tutoring system. The problem that remains unsolved is how to provide a realistic tutor (avatar) in a virtual environment. This paper proposes an approach to teach children to understand emotion sensation through facial expression and the sense of touch (haptics). The algorithm calculates a constant factor (f) based on the maximum RGB value and the magnitude force; each magnitude force range is then associated with a particular colour. The integration process starts by rendering the facial expression and then adjusts the vibration power to the emotion value. The experimental results show that around 71% of students agreed with the classification of magnitude force into emotion representations. Respondents commented that a high magnitude force created a sensation similar to feeling anger, while a low magnitude force was more relaxing. Respondents also said that the combination of haptics and facial expression was very interactive and realistic.
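
    The abstract sketches a mapping from haptic magnitude force to colour and emotion via a constant factor f derived from the maximum RGB value. The paper gives no code, so the sketch below is only a plausible reading of that idea; the maximum force, the colour channels used, and the emotion bins are all illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of mapping a haptic magnitude
# force onto an RGB colour and an emotion label. MAX_FORCE and the bins
# are assumed values for illustration.

MAX_RGB = 255.0
MAX_FORCE = 10.0            # assumed maximum magnitude force of the device
F = MAX_RGB / MAX_FORCE     # constant factor f from the abstract

def force_to_colour(force: float) -> tuple:
    """Stronger force -> redder (more agitated); weaker -> bluer (calmer)."""
    force = max(0.0, min(force, MAX_FORCE))
    red = int(F * force)
    return (red, 0, 255 - red)

def force_to_emotion(force: float) -> str:
    """Associate force ranges with emotion labels (illustrative bins)."""
    if force > 0.7 * MAX_FORCE:
        return "anger"
    if force > 0.4 * MAX_FORCE:
        return "happiness"
    return "relaxed"

for f_in in (1.0, 5.0, 9.0):
    print(f_in, force_to_colour(f_in), force_to_emotion(f_in))
```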

  2. Comparative Study on Facial Expression Recognition using Gabor and Dual-Tree Complex Wavelet Transforms

    Directory of Open Access Journals (Sweden)

    Alaa Eleyan

    2017-04-01

    Full Text Available The move from manual interaction with machines to automated systems has stressed the importance of facial expression recognition for human-computer interaction (HCI). In this article, an investigation and comparative study of the use of complex wavelet transforms for the Facial Expression Recognition (FER) problem was conducted. Two complex wavelets were used as feature extractors: the Gabor wavelet transform (GWT) and the dual-tree complex wavelet transform (DT-CWT). Extracted feature vectors were fed to principal component analysis (PCA) or local binary patterns (LBP). Extensive experiments were carried out using three different databases, namely the JAFFE, CK, and MUFE databases. To evaluate the performance of the system, k-nearest neighbor (kNN), neural network (NN), and support vector machine (SVM) classifiers were implemented. The obtained results show that complex wavelet transforms together with sophisticated classifiers can serve as a powerful tool for the facial expression recognition problem.
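
    As a rough illustration of the kind of pipeline compared above (wavelet features, dimensionality reduction, a conventional classifier), here is a minimal sketch using scikit-image's Gabor filter, PCA, and an SVM. The image data are mocked stand-ins; in practice the JAFFE, CK, or MUFE faces would be loaded instead, and the DT-CWT or LBP variants would be swapped in for the feature step.

```python
# A minimal FER pipeline sketch: Gabor magnitude statistics -> PCA -> SVM.
# Assumes scikit-image and scikit-learn; images and labels are mocked.

import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.random((60, 32, 32))        # stand-in for face images
labels = np.repeat(np.arange(6), 10)     # six basic expressions

def gabor_features(img, freqs=(0.1, 0.2, 0.3)):
    """Mean and variance of the Gabor magnitude response per frequency."""
    feats = []
    for f in freqs:
        real, imag = gabor(img, frequency=f)
        mag = np.hypot(real, imag)
        feats += [mag.mean(), mag.var()]
    return np.array(feats)

X = np.stack([gabor_features(im) for im in images])
clf = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
print("CV accuracy:", cross_val_score(clf, X, labels, cv=3).mean())
```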

  3. Facial Expression Recognition from Video Sequences Based on Spatial-Temporal Motion Local Binary Pattern and Gabor Multiorientation Fusion Histogram

    Directory of Open Access Journals (Sweden)

    Lei Zhao

    2017-01-01

    Full Text Available This paper proposes a novel framework for facial expression analysis using dynamic and static information in video sequences. First, based on an incremental formulation, a discriminative deformable face alignment method is adapted to locate facial points, correct in-plane head rotation, and separate the facial region from the background. Then, a spatial-temporal motion local binary pattern (LBP) feature is extracted and integrated with a Gabor multiorientation fusion histogram to give descriptors that reflect the static and dynamic texture information of facial expressions. Finally, a multiclass support vector machine (SVM) classifier based on a one-versus-one strategy is applied to classify facial expressions. Experiments on the Cohn-Kanade (CK+) facial expression dataset illustrate that the integrated framework outperforms methods using single descriptors. Compared with other state-of-the-art methods on the CK+, MMI, and Oulu-CASIA VIS datasets, our proposed framework performs better.
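
    The static half of the descriptor above is an LBP-style histogram, and the classifier is a one-versus-one multiclass SVM. The sketch below shows only those two generic ingredients (scikit-image's uniform LBP as a stand-in for the spatial-temporal motion LBP, and scikit-learn's SVC, which implements one-versus-one multiclass natively); it is not the paper's implementation.

```python
# Sketch of two generic ingredients: a uniform-LBP histogram per frame
# and a one-versus-one multiclass SVM. Data are mocked stand-ins.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

rng = np.random.default_rng(1)
frames = rng.random((40, 48, 48))         # stand-in for aligned face regions
labels = np.repeat(np.arange(7), 6)[:40]  # seven expression classes (as on CK+)

def lbp_histogram(img, P=8, R=1):
    """Pool uniform LBP codes into a normalised histogram descriptor."""
    img_u8 = (img * 255).astype(np.uint8)  # LBP expects integer images
    codes = local_binary_pattern(img_u8, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

X = np.stack([lbp_histogram(f) for f in frames])
clf = SVC(kernel="linear", decision_function_shape="ovo").fit(X, labels)
print(clf.predict(X[:5]))
```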

  4. Social alienation in schizophrenia patients: association with insula responsiveness to facial expressions of disgust.

    Science.gov (United States)

    Lindner, Christian; Dannlowski, Udo; Walhöfer, Kirsten; Rödiger, Maike; Maisch, Birgit; Bauer, Jochen; Ohrmann, Patricia; Lencer, Rebekka; Zwitserlood, Pienie; Kersting, Anette; Heindel, Walter; Arolt, Volker; Kugel, Harald; Suslow, Thomas

    2014-01-01

    Among the functional neuroimaging studies on emotional face processing in schizophrenia, few have used paradigms with facial expressions of disgust. In this study, we investigated whether schizophrenia patients show less insula activation to macro-expressions (overt, clearly visible expressions) and micro-expressions (covert, very brief expressions) of disgust than healthy controls. Furthermore, departing from the assumption that disgust faces signal social rejection, we examined whether perceptual sensitivity to disgust is related to social alienation in patients and controls. We hypothesized that high insula responsiveness to facial disgust predicts social alienation. We used functional magnetic resonance imaging to measure insula activation in 36 schizophrenia patients and 40 healthy controls. During scanning, subjects passively viewed covert and overt presentations of disgust and neutral faces. To measure social alienation, a social loneliness scale and an agreeableness scale were administered. Schizophrenia patients exhibited reduced insula activation in response to covert facial expressions of disgust. With respect to macro-expressions of disgust, no between-group differences emerged. In patients, insula responsiveness to covert faces of disgust was positively correlated with social loneliness. Furthermore, patients' insula responsiveness to covert and overt faces of disgust was negatively correlated with agreeableness. In controls, insula responsiveness to covert expressions of disgust correlated negatively with agreeableness. Schizophrenia patients show reduced insula responsiveness to micro-expressions but not macro-expressions of disgust compared to healthy controls. In patients, low agreeableness was associated with stronger insula response to micro- and macro-expressions of disgust. Patients with a strong tendency to feel uncomfortable with social interactions appear to be characterized by a high sensitivity for facial expressions signaling social rejection.

  5. Sex differences in neural activation to facial expressions denoting contempt and disgust.

    Directory of Open Access Journals (Sweden)

    André Aleman

    Full Text Available The facial expression of contempt has been regarded to communicate feelings of moral superiority. Contempt is an emotion that is closely related to disgust, but in contrast to disgust, contempt is inherently interpersonal and hierarchical. The aim of this study was twofold. First, to investigate the hypothesis of preferential amygdala responses to contempt expressions versus disgust. Second, to investigate whether, at a neural level, men would respond more strongly to biological signals of interpersonal superiority (e.g., contempt) than women. We performed an experiment using functional magnetic resonance imaging (fMRI), in which participants watched facial expressions of contempt and disgust in addition to neutral expressions. The faces were presented as distractors in an oddball task in which participants had to react to one target face. Facial expressions of contempt and disgust activated a network of brain regions, including prefrontal areas (superior, middle and medial prefrontal gyrus), anterior cingulate, insula, amygdala, parietal cortex, fusiform gyrus, occipital cortex, putamen and thalamus. Contemptuous faces did not elicit stronger amygdala activation than did disgusted expressions. To limit the number of statistical comparisons, we confined our analyses of sex differences to the frontal and temporal lobes. Men displayed stronger brain activation than women to facial expressions of contempt in the medial frontal gyrus, inferior frontal gyrus, and superior temporal gyrus. Conversely, women showed stronger neural responses than men to facial expressions of disgust. In addition, the effect of stimulus sex differed for men versus women. Specifically, women showed stronger responses to male contemptuous faces (as compared to female expressions) in the insula and middle frontal gyrus. Contempt has been conceptualized as signaling perceived moral violations of social hierarchy, whereas disgust would signal violations of physical purity. Thus, our

  6. A Multi-Layer Fusion-Based Facial Expression Recognition Approach with Optimal Weighted AUs

    Directory of Open Access Journals (Sweden)

    Xibin Jia

    2017-01-01

    Full Text Available Affective computing is an increasingly important outgrowth of Artificial Intelligence, which is intended to deal with rich and subjective human communication. In view of the complexity of affective expression, discriminative feature extraction and corresponding high-performance classifier selection are still a big challenge. Specific features/classifiers display different performance in different datasets. There is currently no consensus in the literature that any expression feature or classifier is always good in all cases. Although recent deep learning algorithms, which learn deep features instead of relying on manual construction, have appeared in expression recognition research, the limited availability of training samples is still an obstacle to practical application. In this paper, we aim to find an effective solution based on a fusion and association learning strategy with typical manual features and classifiers. Taking these typical features and classifiers in the facial expression area as a basis, we fully analyse their fusion performance. Meanwhile, to emphasize the major attributes of affective computing, we select facial expression-related Action Units (AUs) as basic components. In addition, we employ association rules to mine the relationships between AUs and facial expressions. Based on a comprehensive analysis from different perspectives, we propose a novel facial expression recognition approach that uses multiple features and multiple classifiers embedded into a stacking framework based on AUs. Extensive experiments on two public datasets show that our proposed multi-layer fusion system based on optimal AU weighting has gained dramatic improvements on facial expression recognition in comparison to individual features/classifiers and some state-of-the-art methods, including recent deep learning based expression recognition ones.
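
    The fusion strategy described above embeds multiple classifiers in a stacking framework. A generic stacking ensemble can be sketched with scikit-learn's StackingClassifier; the base learners, meta-learner, and mocked AU-based features below are illustrative choices, not the paper's configuration.

```python
# A generic stacking-fusion sketch with scikit-learn's StackingClassifier.
# Base learners, meta-learner, and AU-style features are illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.random((120, 17))          # stand-in for AU-based feature vectors
y = np.repeat(np.arange(6), 20)    # six expression classes

stack = StackingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=500),
)
stack.fit(X, y)
print("training accuracy:", stack.score(X, y))
```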

  7. The Effect of Gender and Age Differences on the Recognition of Emotions from Facial Expressions

    DEFF Research Database (Denmark)

    Schneevogt, Daniela; Paggio, Patrizia

    2016-01-01

    Recent studies have demonstrated gender and cultural differences in the recognition of emotions in facial expressions. However, most studies were conducted on American subjects. In this paper, we explore the generalizability of several findings to a non-American culture in the form of Danish...... subjects. We conduct an emotion recognition task followed by two stereotype questionnaires with different genders and age groups. While recent findings (Krems et al., 2015) suggest that women are biased to see anger in neutral facial expressions posed by females, in our sample both genders assign higher

  8. Binary pattern flavored feature extractors for Facial Expression Recognition: An overview

    DEFF Research Database (Denmark)

    Kristensen, Rasmus Lyngby; Tan, Zheng-Hua; Ma, Zhanyu

    2015-01-01

    This paper conducts a survey of modern binary pattern flavored feature extractors applied to the Facial Expression Recognition (FER) problem. In total, 26 different feature extractors are included, of which six are selected for in-depth description. In addition, the paper unifies important FER...... terminology, describes open challenges, and provides recommendations for the scientific evaluation of FER systems. Lastly, it studies the facial expression recognition accuracy and blur invariance of the Local Frequency Descriptor. The paper seeks to bring together disjointed studies, and the main contribution

  9. Dynamics of autonomic nervous system responses and facial expressions to odors

    Directory of Open Access Journals (Sweden)

    Wei eHe

    2014-02-01

    Full Text Available Why we like or dislike certain products may be better captured by physiological and behavioral measures of the autonomic nervous system than by conscious or classical sensory tests. Responses to pleasant and unpleasant food odors presented in varying concentrations were assessed continuously using facial expressions and responses of the autonomic nervous system (ANS). Results of 26 young and healthy female participants showed that the unpleasant fish odor triggered higher heart rates and skin conductance responses, lower skin temperature, fewer neutral facial expressions, and more disgusted and angry expressions (p < .05). Neutral facial expressions differentiated between odors within 100 ms after the start of the odor presentation, followed by expressions of disgust (180 ms), anger (500 ms), surprise (580 ms), sadness (820 ms), fear (1020 ms), and happiness (1780 ms) (all p values < .05). Heart rate differentiated between odors after 400 ms, whereas skin conductance responses differentiated between odors after 3920 ms. At shorter intervals (between 520 and 1000 ms and between 2690 and 3880 ms), skin temperature for fish was higher than that for orange, but became considerably lower after 5440 ms. This temporal unfolding of emotions in reactions to odors, as seen in facial expressions and physiological measurements, supports sequential appraisal theories.

  10. Coordination of gaze, facial expressions and vocalizations of early infant communication with mother and father.

    Science.gov (United States)

    Colonnesi, Cristina; Zijlstra, Bonne J H; van der Zande, Annesophie; Bögels, Susan M

    2012-06-01

    Gaze direction, expressive behaviors and vocalizations are infants' first form of emotional communication. The present study examined the emotional configurations of these three behaviors during face-to-face situations and the effect of infants' and parents' gender. We observed 34 boys and 32 girls (mean age of 18 weeks) during normal face-to-face interaction with their mother and with their father. Three main behaviors and their temporal co-occurrence were observed: gaze direction at the partner as an indication of infants' attention, positive and negative facial expressions as emotional communication, and vocalizations as first forms of utterances. Pairwise, infants' production of vocalizations, positive facial expressions and gaze were strongly coordinated with each other. In addition, the majority of vocalizations produced during positive facial expressions coincided with gaze at the parent. Results on the effect of gender showed that infants (both boys and girls) produced coordinated patterns of positive facial expressions and gaze more often during the interaction with the mother as compared to the interaction with the father. Results contribute to the research on infants' early expression of emotions and gender differences. Copyright © 2012 Elsevier Inc. All rights reserved.

  11. Misinterpretation of facial expressions of emotion in verbal adults with autism spectrum disorder.

    Science.gov (United States)

    Eack, Shaun M; Mazefsky, Carla A; Minshew, Nancy J

    2015-04-01

    Facial emotion perception is significantly affected in autism spectrum disorder, yet little is known about how individuals with autism spectrum disorder misinterpret facial expressions that result in their difficulty in accurately recognizing emotion in faces. This study examined facial emotion perception in 45 verbal adults with autism spectrum disorder and 30 age- and gender-matched volunteers without autism spectrum disorder to identify patterns of emotion misinterpretation during face processing that contribute to emotion recognition impairments in autism. Results revealed that difficulty distinguishing emotional from neutral facial expressions characterized much of the emotion perception impairments exhibited by participants with autism spectrum disorder. In particular, adults with autism spectrum disorder uniquely misinterpreted happy faces as neutral, and were significantly more likely than typical volunteers to attribute negative valence to nonemotional faces. The over-attribution of emotions to neutral faces was significantly related to greater communication and emotional intelligence impairments in individuals with autism spectrum disorder. These findings suggest a potential negative bias toward the interpretation of facial expressions and may have implications for interventions designed to remediate emotion perception in autism spectrum disorder. © The Author(s) 2014.

  12. The face-specific N170 component is modulated by emotional facial expression

    Directory of Open Access Journals (Sweden)

    Tottenham Nim

    2007-01-01

    Full Text Available Abstract Background According to the traditional two-stage model of face processing, the face-specific N170 event-related potential (ERP) is linked to structural encoding of face stimuli, whereas later ERP components are thought to reflect processing of facial affect. This view has recently been challenged by reports of N170 modulations by emotional facial expression. This study examines the time-course and topography of the influence of emotional expression on the N170 response to faces. Methods Dense-array ERPs were recorded in response to a set (n = 16) of fear and neutral faces. Stimuli were normalized on dimensions of shape, size and luminance contrast distribution. To minimize task effects related to facial or emotional processing, facial stimuli were irrelevant to a primary task of learning associative pairings between a subsequently presented visual character and a spoken word. Results The N170 to faces showed a strong modulation by emotional facial expression. A split-half analysis demonstrated that this effect was significant both early and late in the experiment and was therefore not associated with only the initial exposures of these stimuli, demonstrating a form of robustness against habituation. The effect of emotional modulation of the N170 to faces did not show a significant interaction with the gender of the face stimulus or the hemisphere of recording sites. Subtracting the fear versus neutral topography provided a topography that itself was highly similar to the face N170. Conclusion The face N170 response can be influenced by emotional expressions contained within facial stimuli. The topography of this effect is consistent with the notion that fear stimuli exaggerate the N170 response itself. This finding stands in contrast to previous models suggesting that N170 processes linked to structural analysis of faces precede analysis of emotional expression, and instead may reflect early top-down modulation from neural systems involved in

  13. Spontaneous facial expression in unscripted social interactions can be measured automatically.

    Science.gov (United States)

    Girard, Jeffrey M; Cohn, Jeffrey F; Jeni, Laszlo A; Sayette, Michael A; De la Torre, Fernando

    2015-12-01

    Methods to assess individual facial actions have potential to shed light on important behavioral phenomena ranging from emotion and social interaction to psychological disorders and health. However, manual coding of such actions is labor-intensive and requires extensive training. To date, establishing reliable automated coding of unscripted facial actions has been a daunting challenge impeding development of psychological theories and applications requiring facial expression assessment. It is therefore essential that automated coding systems be developed with enough precision and robustness to ease the burden of manual coding in challenging data involving variation in participant gender, ethnicity, head pose, speech, and occlusion. We report a major advance in automated coding of spontaneous facial actions during an unscripted social interaction involving three strangers. For each participant (n = 80, 47 % women, 15 % Nonwhite), 25 facial action units (AUs) were manually coded from video using the Facial Action Coding System. Twelve AUs occurred more than 3 % of the time and were processed using automated FACS coding. Automated coding showed very strong reliability for the proportion of time that each AU occurred (mean intraclass correlation = 0.89), and the more stringent criterion of frame-by-frame reliability was moderate to strong (mean Matthews correlation = 0.61). With few exceptions, differences in AU detection related to gender, ethnicity, pose, and average pixel intensity were small. Fewer than 6 % of frames could be coded manually but not automatically. These findings suggest automated FACS coding has progressed sufficiently to be applied to observational research in emotion and related areas of study.
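
    The frame-by-frame reliability criterion mentioned above is the Matthews correlation between manual and automated AU codes. A minimal sketch of that computation, on mocked binary codes with an assumed 10% disagreement rate, might look as follows.

```python
# Frame-by-frame reliability as Matthews correlation between manual and
# automated binary AU codes (mocked data, assumed 10% disagreement).

import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(3)
manual = rng.integers(0, 2, size=1000)   # manual FACS code per frame
automated = manual.copy()
flip = rng.random(1000) < 0.10           # simulate coder disagreement
automated[flip] = 1 - automated[flip]

print("MCC:", round(matthews_corrcoef(manual, automated), 3))
```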

  14. Interaction between musical emotion and facial expression as measured by event-related potentials.

    Science.gov (United States)

    Kamiyama, Keiko S; Abla, Dilshat; Iwanaga, Koichi; Okanoya, Kazuo

    2013-02-01

    We examined the integrative process between emotional facial expressions and musical excerpts by using an affective priming paradigm. Happy or sad musical stimuli were presented after happy or sad facial images during electroencephalography (EEG) recordings. We asked participants to judge the affective congruency of the presented face-music pairs. The congruency of emotionally congruent pairs was judged more rapidly than that of incongruent pairs. In addition, the EEG data showed that incongruent musical targets elicited a larger N400 component than congruent pairs. Furthermore, these effects occurred in nonmusicians as well as musicians. In sum, emotional integrative processing of face-music pairs was facilitated in congruent music targets and inhibited in incongruent music targets; this process was not significantly modulated by individual musical experience. This is the first study on musical stimuli primed by facial expressions to demonstrate that the N400 component reflects the affective priming effect. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Seeing a haptically explored face: visual facial-expression aftereffect from haptic adaptation to a face.

    Science.gov (United States)

    Matsumiya, Kazumichi

    2013-10-01

    Current views on face perception assume that the visual system receives only visual facial signals. However, I show that the visual perception of faces is systematically biased by adaptation to a haptically explored face. Recently, face aftereffects (FAEs; the altered perception of faces after adaptation to a face) have been demonstrated not only in visual perception but also in haptic perception; therefore, I combined the two FAEs to examine whether the visual system receives face-related signals from the haptic modality. I found that adaptation to a haptically explored facial expression on a face mask produced a visual FAE for facial expression. This cross-modal FAE was not due to explicitly imaging a face, response bias, or adaptation to local features. Furthermore, FAEs transferred from vision to haptics. These results indicate that visual face processing depends on substrates adapted by haptic faces, which suggests that face processing relies on shared representation underlying cross-modal interactions.

  16. Face or body? Oxytocin improves perception of emotions from facial expressions in incongruent emotional body context.

    Science.gov (United States)

    Perry, Anat; Aviezer, Hillel; Goldstein, Pavel; Palgi, Sharon; Klein, Ehud; Shamay-Tsoory, Simone G

    2013-11-01

    The neuropeptide oxytocin (OT) has been repeatedly reported to play an essential role in the regulation of social cognition in humans in general, and specifically in enhancing the recognition of emotions from facial expressions. The latter was assessed in different paradigms that rely primarily on isolated and decontextualized emotional faces. However, recent evidence has indicated that the perception of basic facial expressions is not context invariant and can be categorically altered by context, especially body context, at early perceptual levels. Body context has a strong effect on our perception of emotional expressions, especially when the actual target face and the contextually expected face are perceptually similar. To examine whether and how OT affects emotion recognition, we investigated the role of OT in categorizing facial expressions in incongruent body contexts. Our results show that in the combined process of deciphering emotions from facial expressions and from context, OT gives an advantage to the face. This advantage is most evident when the target face and the contextually expected face are perceptually similar. Copyright © 2013 Elsevier Ltd. All rights reserved.

  17. The understanding of the emotional meaning of facial expressions in people with autism.

    Science.gov (United States)

    Celani, G; Battacchi, M W; Arcidiacono, L

    1999-02-01

    Ten autistic individuals (mean age: 12.7 years, SD 3.8, range 5.10-16.0), 10 individuals with Down syndrome (12.3 years, SD 3.0, range 7.1-16.0), and a control group of 10 children with normal development (mean age: 6.3 years, SD 1.6, range 4.0-9.4), matched for verbal mental age, were tested on a delayed-matching task and on a sorting-by-preference task. The first task required subjects to match faces on the basis of the emotion being expressed or on the basis of identity. Unlike the typical simultaneous matching procedure, the target picture was presented briefly (750 msec) and was not visible when the sample pictures were shown to the subject, thus reducing the possible use of perceptual, piecemeal processing strategies based on the typical features of the emotional facial expression. In the second task, subjects were required to rate the valence of an isolated stimulus, such as a facial expression of emotion or an emotional situation in which no people were represented. The aim of the second task was to compare the autistic and nonautistic children's tendency to judge the pleasantness of a face using facial expression of emotion as a meaningful index. Results showed a significantly worse performance in autistic individuals than in both normal and Down subjects on both facial expression of emotion subtasks, although on the identity and emotional situation subtasks there were no significant differences between groups.

  18. Facial expression identification using 3D geometric features from Microsoft Kinect device

    Science.gov (United States)

    Han, Dongxu; Al Jawad, Naseer; Du, Hongbo

    2016-05-01

    Facial expression identification is an important part of face recognition and is closely related to emotion detection from face images. Various solutions have been proposed in the past using different types of cameras and features. The Microsoft Kinect device has been widely used for multimedia interactions. More recently, the device has been increasingly deployed for supporting scientific investigations. This paper explores the effectiveness of using the device in identifying emotional facial expressions such as surprise, smile, and sadness, and evaluates the usefulness of 3D data points on a face mesh structure obtained from the Kinect device. We present a distance-based geometric feature component that is derived from the distances between points on the face mesh and selected reference points in a single frame. The feature components extracted across a sequence of frames starting and ending with a neutral expression represent a whole expression. The feature vector eliminates the need for complex face orientation correction, simplifying the feature extraction process and making it more efficient. We applied a kNN classifier that exploits a feature-component-based similarity measure following the principle of dynamic time warping to determine the closest neighbors. Preliminary tests on a small-scale database of different facial expressions show the promise of the newly developed features and the usefulness of the Kinect device in facial expression identification.
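
    The classification step above pairs a kNN rule with a dynamic-time-warping similarity over per-frame feature vectors. Below is a minimal, self-contained sketch of that combination (plain DTW plus 1-nearest-neighbour); the sequence lengths, feature dimensionality, and class labels are mocked assumptions rather than the paper's setup.

```python
# DTW distance between per-frame feature sequences, used inside a
# 1-nearest-neighbour classifier. Pure NumPy; sequences are mocked.

import numpy as np

def dtw(a, b):
    """Classic dynamic-time-warping cost between two (frames x dims) arrays."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_predict(query, train_seqs, train_labels):
    """Return the label of the DTW-closest training sequence (k = 1)."""
    dists = [dtw(query, s) for s in train_seqs]
    return train_labels[int(np.argmin(dists))]

rng = np.random.default_rng(4)
train = [rng.random((int(rng.integers(20, 30)), 6)) for _ in range(10)]
labels = list(rng.integers(0, 3, size=10))   # e.g. smile / sad / surprise
print(knn_predict(rng.random((25, 6)), train, labels))
```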

  19. Effects of exposure to facial expression variation in face learning and recognition.

    Science.gov (United States)

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  20. Validation of computer simulations for effects of eye gaze, sex, facial expression, and posture on perceived threat.

    Science.gov (United States)

    Stamps, Arthur

    2011-06-01

    Two experiments were done to ascertain how well computer images of people communicated threat through the nonverbal cues of eye gaze, sex, facial expression, and posture. Results indicated the computer images produced valid and generalizable results. The strongest effects on threat were found for facial expression and posture. Smaller effects were found for sex and direction of eye gaze.

  1. When a smile becomes a fist: the perception of facial and bodily expressions of emotion in violent offenders

    NARCIS (Netherlands)

    Kret, M.E.; de Gelder, B.

    2013-01-01

    Previous reports have suggested an enhancement of facial expression recognition in women as compared to men. It has also been suggested that men versus women have a greater attentional bias towards angry cues. Research has shown that facial expression recognition impairments and attentional biases

  2. Social Adjustment, Academic Adjustment, and the Ability to Identify Emotion in Facial Expressions of 7-Year-Old Children

    Science.gov (United States)

    Goodfellow, Stephanie; Nowicki, Stephen, Jr.

    2009-01-01

    The authors aimed to examine the possible association between (a) accurately reading emotion in facial expressions and (b) social and academic competence among elementary school-aged children. Participants were 840 7-year-old children who completed a test of the ability to read emotion in facial expressions. Teachers rated children's social and…

  3. Emotional Facial and Vocal Expressions during Story Retelling by Children and Adolescents with High-Functioning Autism

    Science.gov (United States)

    Grossman, Ruth B.; Edelson, Lisa R.; Tager-Flusberg, Helen

    2013-01-01

    Purpose: People with high-functioning autism (HFA) have qualitative differences in facial expression and prosody production, which are rarely systematically quantified. The authors' goals were to qualitatively and quantitatively analyze prosody and facial expression productions in children and adolescents with HFA. Method: Participants were 22…

  4. The Role of Prefrontal Inhibition in Regulating Facial Expressions of Pain : A Repetitive Transcranial Magnetic Stimulation Study

    NARCIS (Netherlands)

    Karmann, Anna Julia; Maihoefner, Christian; Lautenbacher, Stefan; Sperling, Wolfgang; Kornhuber, Johannes; Kunz, Miriam

    Although research on facial expressions of pain has a long history, little is known about the cerebral mechanisms regulating these expressions. It has been suggested that the medial prefrontal cortex (mPFC) might be involved in regulating/inhibiting the degree to which pain is facially displayed. To

  5. The Relative Power of an Emotion's Facial Expression, Label, and Behavioral Consequence to Evoke Preschoolers' Knowledge of Its Cause

    Science.gov (United States)

    Widen, Sherri C.; Russell, James A.

    2004-01-01

    Lay people and scientists alike assume that, especially for young children, facial expressions are a strong cue to another's emotion. We report a study in which children (N=120; 3-4 years) described events that would cause basic emotions (surprise, fear, anger, disgust, sadness) presented as its facial expression, as its label, or as its…

  6. Reduced expression of regeneration associated genes in chronically axotomized facial motoneurons.

    Science.gov (United States)

    Gordon, T; You, S; Cassar, S L; Tetzlaff, W

    2015-02-01

    Chronically axotomized motoneurons progressively fail to regenerate their axons. Since axonal regeneration is associated with the increased expression of tubulin, actin and GAP-43, we examined whether the regenerative failure is due to failure of chronically axotomized motoneurons to express and sustain the expression of these regeneration associated genes (RAGs). Chronically axotomized facial motoneurons were subjected to a second axotomy to mimic the clinical surgical procedure of refreshing the proximal nerve stump prior to nerve repair. Expression of α1-tubulin, actin and GAP-43 was analyzed in axotomized motoneurons using in situ hybridization followed by autoradiography and silver grain quantification. The expression of these RAGs by acutely axotomized motoneurons declined over several months. The chronically injured motoneurons responded to a refreshment axotomy with a re-increase in RAG expression. However, this response to a refreshment axotomy of chronically injured facial motoneurons was less than that seen in acutely axotomized facial motoneurons. These data demonstrate that the neuronal RAG expression can be induced by injury-related signals and does not require acute deprivation of target derived factors. The transient expression is consistent with a transient inflammatory response to the injury. We conclude that transient RAG expression in chronically axotomized motoneurons and the weak response of the chronically axotomized motoneurons to a refreshment axotomy provides a plausible explanation for the progressive decline in regenerative capacity of chronically axotomized motoneurons. Crown Copyright © 2014. Published by Elsevier Inc. All rights reserved.

  7. Beyond face value: does involuntary emotional anticipation shape the perception of dynamic facial expressions?

    Directory of Open Access Journals (Sweden)

    Letizia Palumbo

    Full Text Available Emotional facial expressions are immediate indicators of the affective dispositions of others. Recently it has been shown that early stages of social perception can already be influenced by (implicit) attributions made by the observer about the agent's mental state and intentions. In the current study, possible mechanisms underpinning distortions in the perception of dynamic, ecologically-valid facial expressions were explored. In four experiments we examined to what extent basic perceptual processes such as contrast/context effects, adaptation and representational momentum underpinned the perceptual distortions, and to what extent 'emotional anticipation', i.e. the involuntary anticipation of the other's emotional state of mind on the basis of the immediate perceptual history, might have played a role. Neutral facial expressions displayed at the end of short video-clips, in which an initial facial expression of joy or anger gradually morphed into a neutral expression, were misjudged as being slightly angry or slightly happy, respectively (Experiment 1). This response bias disappeared when the actor's identity changed in the final neutral expression (Experiment 2). Videos depicting neutral-to-joy-to-neutral and neutral-to-anger-to-neutral sequences again produced biases but in the opposite direction (Experiment 3). The bias survived insertion of a 400 ms blank (Experiment 4). These results suggested that the perceptual distortions were not caused by any of the low-level perceptual mechanisms (adaptation, representational momentum and contrast effects). We speculate that especially when presented with dynamic facial expressions, perceptual distortions occur that reflect 'emotional anticipation' (a low-level mindreading mechanism), which overrules low-level visual mechanisms. Underpinning neural mechanisms are discussed in relation to the current debate on action and emotion understanding.

  8. Neural evidence for cultural differences in the valuation of positive facial expressions.

    Science.gov (United States)

    Park, BoKyung; Tsai, Jeanne L; Chim, Louise; Blevins, Elizabeth; Knutson, Brian

    2016-02-01

    European Americans value excitement more and calm less than Chinese. Within cultures, European Americans value excited and calm states similarly, whereas Chinese value calm more than excited states. To examine how these cultural differences influence people's immediate responses to excited vs calm facial expressions, we combined a facial rating task with functional magnetic resonance imaging. During scanning, European American (n = 19) and Chinese (n = 19) females viewed and rated faces that varied by expression (excited, calm), ethnicity (White, Asian) and gender (male, female). As predicted, European Americans showed greater activity in circuits associated with affect and reward (bilateral ventral striatum, left caudate) while viewing excited vs calm expressions than did Chinese. Within cultures, European Americans responded to excited vs calm expressions similarly, whereas Chinese showed greater activity in these circuits in response to calm vs excited expressions regardless of targets' ethnicity or gender. Across cultural groups, greater ventral striatal activity while viewing excited vs. calm expressions predicted greater preference for excited vs calm expressions months later. These findings provide neural evidence that people find viewing the specific positive facial expressions valued by their cultures to be rewarding and relevant. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  9. Influence of Emotional Facial Expressions on 3-5-Year-Olds' Face Recognition

    Science.gov (United States)

    Freitag, Claudia; Schwarzer, Gudrun

    2011-01-01

    Three experiments examined 3- and 5-year-olds' recognition of faces in constant and varied emotional expressions. Children were asked to identify repeatedly presented target faces, distinguishing them from distractor faces, during an immediate recognition test and during delayed assessments after 10 min and one week. Emotional facial expression…

  10. The Effects of Facial Expression and Posture on Children's Reported Responses to Teacher Nonverbal Communication.

    Science.gov (United States)

    Neill, S. R. St J.

    1989-01-01

    Examines the effects of facial expression and posture of teachers on the reactions of schoolchildren. Finds that smiling and frowning have strong effects while the effects of posture and gesture are weaker. Reports that touch and explaining gestures are seen as positive by the students and controlling gestures are viewed as negative. (KO)

  11. Attention for emotional facial expressions in dysphoria: an eye-movement registration study.

    Science.gov (United States)

    Leyman, Lemke; De Raedt, Rudi; Vaeyens, Roel; Philippaerts, Renaat M

    2011-01-01

    Previous research has demonstrated that depression is associated with dysfunctional attentional processing of emotional information. Most studies examined this bias by registration of response latencies. The present study employed an ecologically valid measurement of attentive processing, using eye-movement registration. Dysphoric and non-dysphoric participants viewed slides presenting sad, angry, happy and neutral facial expressions. For each type of expression, three components of visual attention were analysed: the relative fixation frequency, fixation time and glance duration. Attentional biases were also investigated for inverted facial expressions to ensure that they were not related to eye-catching facial features. Results indicated that non-dysphoric individuals were characterised by longer fixating and dwelling on happy faces. Dysphoric individuals demonstrated a longer dwelling on sad and neutral faces. These results were not found for inverted facial expressions. The present findings are in line with the assumption that depression is associated with a prolonged attentional elaboration on negative information. © 2010 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business

  12. Abnormal Amygdala and Prefrontal Cortex Activation to Facial Expressions in Pediatric Bipolar Disorder

    Science.gov (United States)

    Garrett, Amy S.; Reiss, Allan L.; Howe, Meghan E.; Kelley, Ryan G.; Singh, Manpreet K.; Adleman, Nancy E.; Karchemskiy, Asya; Chang, Kiki D.

    2012-01-01

    Objective: Previous functional magnetic resonance imaging (fMRI) studies in pediatric bipolar disorder (BD) have reported greater amygdala and less dorsolateral prefrontal cortex (DLPFC) activation to facial expressions compared to healthy controls. The current study investigates whether these differences are associated with the early or late…

  13. Development and validation of an Argentine set of facial expressions of emotion

    NARCIS (Netherlands)

    Vaiman, M.; Wagner, M.A.; Caicedo, E.; Pereno, G.L.

    2017-01-01

    Pictures of facial expressions of emotion are used in a wide range of experiments. The last decade has seen an increase in the number of studies presenting local sets of emotion stimuli. However, only a few existing sets contain pictures of Latin Americans, despite the growing attention emotion

  14. Static and dynamic 3D facial expression recognition: A comprehensive survey

    NARCIS (Netherlands)

    Sandbach, G.; Zafeiriou, S.; Pantic, Maja; Yin, Lijun

    2012-01-01

    Automatic facial expression recognition constitutes an active research field due to the latest advances in computing technology that make the user's experience a clear priority. The majority of work conducted in this area involves 2D imagery, despite the problems this presents due to inherent pose

  15. Spatiotemporal neural network dynamics for the processing of dynamic facial expressions

    Science.gov (United States)

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota

    2015-01-01

    The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150–200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300–350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual–motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions. PMID:26206708

  16. Processing of Facial Expressions of Emotions by Adults with Down Syndrome and Moderate Intellectual Disability

    Science.gov (United States)

    Carvajal, Fernando; Fernandez-Alcaraz, Camino; Rueda, Maria; Sarrion, Louise

    2012-01-01

    The processing of facial expressions of emotions by 23 adults with Down syndrome and moderate intellectual disability was compared with that of adults with intellectual disability of other etiologies (24 matched in cognitive level and 26 with mild intellectual disability). Each participant performed 4 tasks of the Florida Affect Battery and an…

  17. Does Facial Expression Recognition Provide a Toehold for the Development of Emotion Understanding?

    Science.gov (United States)

    Strand, Paul S.; Downs, Andrew; Barbosa-Leiker, Celestina

    2016-01-01

    The authors explored predictions from basic emotion theory (BET) that facial emotion expression recognition skills are insular with respect to their own development, and yet foundational to the development of emotional perspective-taking skills. Participants included 417 preschool children for whom estimates of these 2 emotion understanding…

  18. The Effects of Early Institutionalization on the Discrimination of Facial Expressions of Emotion in Young Children

    Science.gov (United States)

    Jeon, Hana; Moulson, Margaret C.; Fox, Nathan; Zeanah, Charles; Nelson, Charles A., III

    2010-01-01

    The current study examined the effects of institutionalization on the discrimination of facial expressions of emotion in three groups of 42-month-old children. One group consisted of children abandoned at birth who were randomly assigned to Care-as-Usual (institutional care) following a baseline assessment. Another group consisted of children…

  19. Inferior Frontal Gyrus Activity Triggers Anterior Insula Response to Emotional Facial Expressions

    NARCIS (Netherlands)

    Jabbi, Mbemba; Keysers, Christian

    2008-01-01

    The observation of movies of facial expressions of others has been shown to recruit similar areas to those involved in experiencing one's own emotions: the inferior frontal gyrus (IFG), the anterior insula and adjacent frontal operculum (IFO). The causal link between activity in these 2 regions,

  20. Effects of Context and Facial Expression on Imitation Tasks in Preschool Children with Autism

    Science.gov (United States)

    Markodimitraki, Maria; Kypriotaki, Maria; Ampartzaki, Maria; Manolitsis, George

    2013-01-01

    The present study explored the effect of the context in which an imitation act occurs (elicited/spontaneous) and the experimenter's facial expression (neutral or smiling) during the imitation task with young children with autism and typically developing children. The participants were 10 typically developing children and 10 children with autism…

  1. Assessment of Learners' Attention to E-Learning by Monitoring Facial Expressions for Computer Network Courses

    Science.gov (United States)

    Chen, Hong-Ren

    2012-01-01

    Recognition of students' facial expressions can be used to understand their level of attention. In a traditional classroom setting, teachers guide the classes and continuously monitor and engage the students to evaluate their understanding and progress. Given the current popularity of e-learning environments, it has become important to assess the…

  2. Children's Scripts for Social Emotions: Causes and Consequences Are More Central than Are Facial Expressions

    Science.gov (United States)

    Widen, Sherri C.; Russell, James A.

    2010-01-01

    Understanding and recognition of emotions relies on emotion concepts, which are narrative structures (scripts) specifying facial expressions, causes, consequences, label, etc. organized in a temporal and causal order. Scripts and their development are revealed by examining which components better tap which concepts at which ages. This study…

  3. Facial Expression Recognition: Can Preschoolers with Cochlear Implants and Hearing Aids Catch It?

    Science.gov (United States)

    Wang, Yifang; Su, Yanjie; Fang, Ping; Zhou, Qingxia

    2011-01-01

    Tager-Flusberg and Sullivan (2000) presented a cognitive model of theory of mind (ToM), in which they thought ToM included two components--a social-perceptual component and a social-cognitive component. Facial expression recognition (FER) is an ability tapping the social-perceptual component. Previous findings suggested that normal hearing…

  4. Facial Expression of Affect in Children with Cornelia de Lange Syndrome

    Science.gov (United States)

    Collis, L.; Moss, J.; Jutley, J.; Cornish, K.; Oliver, C.

    2008-01-01

    Background: Individuals with Cornelia de Lange syndrome (CdLS) have been reported to show comparatively high levels of flat and negative affect but there have been no empirical evaluations. In this study, we use an objective measure of facial expression to compare affect in CdLS with that seen in Cri du Chat syndrome (CDC) and a group of…

  5. Generation of facial expressions from emotion using a fuzzy rule based system

    NARCIS (Netherlands)

    Bui, T.D.; Heylen, Dirk K.J.; Poel, Mannes; Nijholt, Antinus; Stumptner, Markus; Corbett, Dan; Brooks, Mike

    2001-01-01

    We propose a fuzzy rule-based system to map representations of the emotional state of an animated agent onto muscle contraction values for the appropriate facial expressions. Our implementation pays special attention to the way in which continuous changes in the intensity of emotions can be

  6. Understanding Emotions from Standardized Facial Expressions in Autism and Normal Development

    Science.gov (United States)

    Castelli, Fulvia

    2005-01-01

    The study investigated the recognition of standardized facial expressions of emotion (anger, fear, disgust, happiness, sadness, surprise) at a perceptual level (experiment 1) and at a semantic level (experiments 2 and 3) in children with autism (N= 20) and normally developing children (N= 20). Results revealed that children with autism were as…

  7. Recognition of Emotional and Nonemotional Facial Expressions: A Comparison between Williams Syndrome and Autism

    Science.gov (United States)

    Lacroix, Agnes; Guidetti, Michele; Roge, Bernadette; Reilly, Judy

    2009-01-01

    The aim of our study was to compare two neurodevelopmental disorders (Williams syndrome and autism) in terms of the ability to recognize emotional and nonemotional facial expressions. The comparison of these two disorders is particularly relevant to the investigation of face processing and should contribute to a better understanding of social…

  8. If looks could talk : A system for automatic analysis of facial expressions

    NARCIS (Netherlands)

    Pantic, M.; Rothkrantz, J.M.; De Boo, M.

    2001-01-01

    Researchers at Delft University of Technology have developed a computer system capable of recognising human facial expressions from images. The system can distinguish between 30 different units such as a smile, or raised eyebrows. It is intended for use by behavioural researchers and the medical

  9. Exogenous cortisol shifts a motivated bias from fear to anger in spatial working memory for facial expressions.

    NARCIS (Netherlands)

    Putman, P.L.J.; Hermans, E.J.; Honk, E.J. van

    2007-01-01

    Studies assessing processing of facial expressions have established that cortisol levels, emotional traits, and affective disorders predict selective responding to these motivationally relevant stimuli in expression specific manners. For instance, increased attentional processing of fearful faces

  10. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study.

    Science.gov (United States)

    Liu, Tongran; Xiao, Tong; Li, Xiaoyan; Shi, Jiannong

    2015-01-01

    The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50-130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320-450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre-attentive change detection on social-emotional information.
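
    The vMMN analysis above rests on a standard ERP computation: average the standard and deviant epochs, subtract to form a difference wave, and take the mean amplitude in a latency window (here 50-130 ms and 320-450 ms). A minimal sketch of that arithmetic, with mocked epoched data and an assumed 1000 Hz sampling rate, follows.

```python
# Difference-wave and window-mean arithmetic behind a vMMN analysis.
# Mocked epochs (trials x samples) at an assumed 1000 Hz sampling rate.

import numpy as np

FS = 1000                                   # assumed sampling rate (Hz)
rng = np.random.default_rng(5)
standard = rng.standard_normal((80, 600))   # epochs covering 0-600 ms
deviant = rng.standard_normal((80, 600))

def window_mean(wave, t0_ms, t1_ms):
    """Mean amplitude of an averaged waveform within a latency window."""
    return wave[int(t0_ms * FS / 1000):int(t1_ms * FS / 1000)].mean()

vmmn = deviant.mean(axis=0) - standard.mean(axis=0)   # difference wave
print("early vMMN (50-130 ms):", window_mean(vmmn, 50, 130))
print("late vMMN (320-450 ms):", window_mean(vmmn, 320, 450))
```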

  11. Fluid Intelligence and Automatic Neural Processes in Facial Expression Perception: An Event-Related Potential Study.

    Directory of Open Access Journals (Sweden)

    Tongran Liu

    Full Text Available The relationship between human fluid intelligence and social-emotional abilities has been a topic of considerable interest. The current study investigated whether adolescents with different intellectual levels had different automatic neural processing of facial expressions. Two groups of adolescent males were enrolled: a high IQ group and an average IQ group. Age and parental socioeconomic status were matched between the two groups. Participants counted the numbers of the central cross changes while paired facial expressions were presented bilaterally in an oddball paradigm. There were two experimental conditions: a happy condition, in which neutral expressions were standard stimuli (p = 0.8) and happy expressions were deviant stimuli (p = 0.2), and a fearful condition, in which neutral expressions were standard stimuli (p = 0.8) and fearful expressions were deviant stimuli (p = 0.2). Participants were required to concentrate on the primary task of counting the central cross changes and to ignore the expressions to ensure that facial expression processing was automatic. Event-related potentials (ERPs) were obtained during the tasks. The visual mismatch negativity (vMMN) components were analyzed to index the automatic neural processing of facial expressions. For the early vMMN (50-130 ms), the high IQ group showed more negative vMMN amplitudes than the average IQ group in the happy condition. For the late vMMN (320-450 ms), the high IQ group had greater vMMN responses than the average IQ group over frontal and occipito-temporal areas in the fearful condition, and the average IQ group evoked larger vMMN amplitudes than the high IQ group over occipito-temporal areas in the happy condition. The present study elucidated the close relationships between fluid intelligence and pre-attentive change detection on social-emotional information.

  12. A new approach to measuring individual differences in sensitivity to facial expressions: influence of temperamental shyness and sociability

    Science.gov (United States)

    Gao, Xiaoqing; Chiesa, Julia; Maurer, Daphne; Schmidt, Louis A.

    2014-01-01

    To examine individual differences in adults’ sensitivity to facial expressions, we used a novel method that has proved revealing in studies of developmental change. Using static faces morphed to show different intensities of facial expressions, we calculated two measures: (1) the threshold to detect that a low intensity facial expression is different from neutral, and (2) accuracy in recognizing the specific facial expression in faces above the detection threshold. We conducted two experiments with young adult females varying in reported temperamental shyness and sociability – the former trait is known to influence the recognition of facial expressions during childhood. In both experiments, the measures had good split half reliability. Because shyness was significantly negatively correlated with sociability, we used partial correlations to examine the relation of each to sensitivity to facial expressions. Sociability was negatively related to threshold to detect fear (Experiment 1) and to misidentify fear as another expression or happy expressions as fear (Experiment 2). Both patterns are consistent with hypervigilance by less sociable individuals. Shyness was positively related to misidentification of fear as another emotion (Experiment 2), a pattern consistent with a history of avoidance. We discuss the advantages and limitations of this new approach for studying individual differences in sensitivity to facial expressions. PMID:24550857
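
    A detection threshold of the kind used here is commonly estimated by fitting a psychometric function to the proportion of "different from neutral" responses across morph intensities. The sketch below is a minimal illustration under that assumption (logistic form, toy data), not the authors' procedure.

        # Sketch: estimating the intensity at which a low-intensity morphed
        # expression is detected as different from neutral, via a logistic fit.
        import numpy as np
        from scipy.optimize import curve_fit

        def psychometric(intensity, threshold, slope):
            # Proportion of "different" responses; 0.5 is the guessing floor.
            return 0.5 + 0.5 / (1.0 + np.exp(-slope * (intensity - threshold)))

        intensity = np.array([5, 10, 15, 20, 25, 30, 40])                 # % morph (toy)
        p_detect = np.array([0.52, 0.55, 0.66, 0.80, 0.90, 0.96, 0.99])   # toy data

        (threshold, slope), _ = curve_fit(psychometric, intensity, p_detect,
                                          p0=[20.0, 0.3])
        print(f"detection threshold = {threshold:.1f}% morph intensity")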

  13. A New Approach to Measuring Individual Differences in Sensitivity to Facial Expressions: Influence of Temperamental Shyness and Sociability

    Directory of Open Access Journals (Sweden)

    Xiaoqing eGao

    2014-02-01

    Full Text Available To examine individual differences in adults’ sensitivity to facial expressions, we used a novel method that has proved revealing in studies of developmental change. Using static faces morphed to show different intensities of facial expressions, we calculated two measures: (1) the threshold to detect that a low intensity facial expression is different from neutral, and (2) accuracy in recognizing the specific facial expression in faces above the detection threshold. We conducted two experiments with young adult females varying in reported temperamental shyness and sociability; the former trait is known to influence the recognition of facial expressions during childhood. In both experiments, the measures had good split half reliability. Because shyness was significantly negatively correlated with sociability, we used partial correlations to examine the relation of each to sensitivity to facial expressions. Sociability was negatively related to threshold to detect fear (Experiment 1) and to misidentify fear as another expression or happy expressions as fear (Experiment 2). Both patterns are consistent with hypervigilance by less sociable individuals. Shyness was positively related to misidentification of fear as another emotion (Experiment 2), a pattern consistent with a history of avoidance. We discuss the advantages and limitations of this new approach for studying individual differences in sensitivity to facial expressions.

  14. Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas.

    Science.gov (United States)

    Liu, Yanpeng; Li, Yibin; Ma, Xin; Song, Rui

    2017-03-29

    In the pattern recognition domain, deep architectures are currently widely used and have achieved fine results. However, these deep architectures make particular demands, especially in terms of their requirement for big datasets and GPUs. Aiming to gain better results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces. Furthermore, the proposed algorithm has achieved a better result than some deep architectures. For extracting more effective features, this paper first defines the salient areas on the faces. This paper normalizes the salient areas of the same location in the faces to the same size; therefore, it can extract more similar features from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fused features is reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions at once. This paper proposes a method for defining the salient areas that uses peak-expression frames compared with neutral faces. This paper also proposes and applies the idea of normalizing the salient areas to align the specific areas which express the different expressions. As a result, the salient areas found from different subjects are the same size. In addition, gamma correction is first applied to the LBP features in our algorithm framework, which improves our recognition rates significantly. By applying this algorithm framework, our research has gained state-of-the-art performance on the CK+ and JAFFE databases.
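
    The processing chain described above can be approximated with standard tools. The following sketch (toy data; the patch size, gamma value, LBP parameters, and linear-SVM choice are our assumptions, not the paper's settings) gamma-corrects each normalized salient patch, extracts LBP-histogram and HOG features, fuses them, reduces the dimensionality with PCA, and trains a classifier.

        # Sketch of an LBP + HOG fusion pipeline with PCA and an SVM.
        import numpy as np
        from skimage.feature import local_binary_pattern, hog
        from sklearn.decomposition import PCA
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        GAMMA = 0.5          # assumed gamma-correction exponent
        P, R = 8, 1          # LBP neighbors / radius (assumed)

        def patch_features(patch):
            patch = patch.astype(float) / 255.0
            patch = patch ** GAMMA                        # gamma correction
            lbp = local_binary_pattern(patch, P, R, method="uniform")
            lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2),
                                       density=True)
            hog_vec = hog(patch, orientations=9, pixels_per_cell=(8, 8),
                          cells_per_block=(2, 2))
            return np.concatenate([lbp_hist, hog_vec])    # feature-level fusion

        rng = np.random.default_rng(0)
        # Toy stand-in: 60 normalized 32x32 salient patches, 6 expression classes.
        X = np.stack([patch_features(rng.integers(0, 256, (32, 32)))
                      for _ in range(60)])
        y = rng.integers(0, 6, 60)

        clf = make_pipeline(PCA(n_components=20), SVC(kernel="linear"))
        clf.fit(X, y)
        print(clf.score(X, y))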

  15. Facial Expression Recognition with Fusion Features Extracted from Salient Facial Areas

    Science.gov (United States)

    Liu, Yanpeng; Li, Yibin; Ma, Xin; Song, Rui

    2017-01-01

    In the pattern recognition domain, deep architectures are currently widely used and have achieved fine results. However, these deep architectures make particular demands, especially in terms of their requirement for big datasets and GPUs. Aiming to gain better results without deep networks, we propose a simplified algorithm framework using fusion features extracted from the salient areas of faces. Furthermore, the proposed algorithm has achieved a better result than some deep architectures. For extracting more effective features, this paper first defines the salient areas on the faces. This paper normalizes the salient areas of the same location in the faces to the same size; therefore, it can extract more similar features from different subjects. LBP and HOG features are extracted from the salient areas, the dimensionality of the fused features is reduced by Principal Component Analysis (PCA), and several classifiers are applied to classify the six basic expressions at once. This paper proposes a method for defining the salient areas that uses peak-expression frames compared with neutral faces. This paper also proposes and applies the idea of normalizing the salient areas to align the specific areas which express the different expressions. As a result, the salient areas found from different subjects are the same size. In addition, gamma correction is first applied to the LBP features in our algorithm framework, which improves our recognition rates significantly. By applying this algorithm framework, our research has gained state-of-the-art performance on the CK+ and JAFFE databases. PMID:28353671

  16. Probing the psychophysiology of the airways: physical activity, experienced emotion, and facially expressed emotion.

    Science.gov (United States)

    Ritz, Thomas

    2004-11-01

    This article reviews research on airway reactivity in health and asthma within a psychophysiological context, including the effects of physical activity, emotion induction, and manipulation of facial expression of emotion. Skeletal muscle activation leads to airway dilation, with vagal withdrawal being the most likely mechanism. Emotional arousal, in particular negative affect, leads to airway constriction, with evidence for a vagal pathway in depressive states and ventilatory contributions in positive affect. Laboratory-induced airway responses covary with reports of emotion-induced asthma and with lung function decline during negative mood in the field. Like physical activity, facial expression of emotion leads to airway dilation. However, these effects are small and less consistent in posed emotional expressions. The mechanisms of emotion-induced airway responses and potential benefits of emotional expression in asthma deserve further study.

  17. Intranasal oxytocin increases facial expressivity, but not ratings of trustworthiness, in patients with schizophrenia and healthy controls.

    Science.gov (United States)

    Woolley, J D; Chuang, B; Fussell, C; Scherer, S; Biagianti, B; Fulford, D; Mathalon, D H; Vinogradov, S

    2017-05-01

    Blunted facial affect is a common negative symptom of schizophrenia. Additionally, assessing the trustworthiness of faces is a social cognitive ability that is impaired in schizophrenia. Currently available pharmacological agents are ineffective at improving either of these symptoms, despite their clinical significance. The hypothalamic neuropeptide oxytocin has multiple prosocial effects when administered intranasally to healthy individuals and shows promise in decreasing negative symptoms and enhancing social cognition in schizophrenia. Although two small studies have investigated oxytocin's effects on ratings of facial trustworthiness in schizophrenia, its effects on facial expressivity have not been investigated in any population. We investigated the effects of oxytocin on facial emotional expressivity while participants performed a facial trustworthiness rating task in 33 individuals with schizophrenia and 35 age-matched healthy controls using a double-blind, placebo-controlled, cross-over design. Participants rated the trustworthiness of presented faces interspersed with emotionally evocative photographs while being video-recorded. Participants' facial expressivity in these videos was quantified by blind raters using a well-validated manualized approach (i.e. the Facial Expression Coding System; FACES). While oxytocin administration did not affect ratings of facial trustworthiness, it significantly increased facial expressivity in individuals with schizophrenia (Z = -2.33, p = 0.02) and at trend level in healthy controls (Z = -1.87, p = 0.06). These results demonstrate that oxytocin administration can increase facial expressivity in response to emotional stimuli and suggest that oxytocin may have the potential to serve as a treatment for blunted facial affect in schizophrenia.
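
    The reported Z statistics are consistent with a within-subject nonparametric comparison such as the Wilcoxon signed-rank test; the sketch below shows that analysis form on invented FACES scores (an assumption for illustration, not the authors' exact pipeline).

        # Sketch: within-subject comparison of FACES expressivity scores
        # under oxytocin vs. placebo in a cross-over design (toy numbers).
        import numpy as np
        from scipy.stats import wilcoxon

        rng = np.random.default_rng(1)
        placebo = rng.normal(10.0, 2.0, size=33)             # scores, placebo day
        oxytocin = placebo + rng.normal(0.6, 1.5, size=33)   # same subjects, drug day

        stat, p = wilcoxon(oxytocin, placebo)
        print(f"W = {stat:.1f}, p = {p:.3f}")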

  18. Effects of Emotional Facial Expression on Time Perception in Patients with Parkinson's Disease.

    Science.gov (United States)

    Mioni, Giovanna; Meligrana, Lucia; Grondin, Simon; Perini, Francesco; Bartolomei, Luigi; Stablum, Franca

    2016-10-01

    Previous studies have demonstrated that emotional facial expressions alter temporal judgments. Moreover, while some studies conducted with Parkinson's disease (PD) patients suggest dysfunction in the recognition of emotional facial expressions, others have shown a dysfunction in time perception. In the present study, we investigated the magnitude of temporal distortions caused by the presentation of emotional facial expressions (anger, shame, and neutral) in PD patients and controls. Twenty-five older adults with PD and 17 healthy older adults took part in the present study. PD patients were divided into two sub-groups, with and without mild cognitive impairment (MCI), based on their neuropsychological performance. Participants were tested with a time bisection task with standard intervals lasting 400 ms and 1600 ms. The effect of facial emotional stimuli on time perception was evident in all participants, yet the effect was greater for PD-MCI patients. Furthermore, PD-MCI patients were more likely to underestimate long and overestimate short temporal intervals than PD-non-MCI patients and controls. The temporal impairment in PD-MCI patients seems to be mainly caused by a memory dysfunction. (JINS, 2016, 22, 890-899).
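
    In a time bisection task, performance is usually summarized by the bisection point, the duration classified as "long" on half the trials; the sketch below (assumed logistic model, toy response proportions) shows how such a point is estimated, and how over- or underestimation shifts it.

        # Sketch: estimating the bisection point from the proportion of
        # "long" responses in a 400-1600 ms time bisection task.
        import numpy as np
        from scipy.optimize import curve_fit

        def p_long(duration, bisection_point, slope):
            return 1.0 / (1.0 + np.exp(-slope * (duration - bisection_point)))

        durations = np.array([400, 600, 800, 1000, 1200, 1400, 1600])       # ms
        prop_long = np.array([0.02, 0.10, 0.30, 0.55, 0.80, 0.93, 0.98])    # toy data

        (bp, slope), _ = curve_fit(p_long, durations, prop_long, p0=[900.0, 0.01])
        print(f"bisection point = {bp:.0f} ms")
        # Overestimating short intervals shifts the curve left (smaller bp);
        # underestimating long intervals shifts it right.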

  19. Facial expressions, their communicatory functions and neuro-cognitive substrates.

    OpenAIRE

    Blair, R J R

    2003-01-01

    Human emotional expressions serve a crucial communicatory role allowing the rapid transmission of valence information from one individual to another. This paper will review the literature on the neural mechanisms necessary for this communication: both the mechanisms involved in the production of emotional expressions and those involved in the interpretation of the emotional expressions of others. Finally, reference to the neuro-psychiatric disorders of autism, psychopathy and acquired sociopa...

  20. [The role of experience in the neurology of facial expression of emotions].

    Science.gov (United States)

    Gordillo, Fernando; Pérez, Miguel A; Arana, José M; Mestas, Lilia; López, Rafael M

    2015-04-01

    Facial expression of emotion has an important social function that facilitates interaction between people. This process has a neurological basis, which is not isolated from the context, or the experience of the interaction between people in that context. Yet, to date, the impact that experience has on the perception of emotions is not completely understood. To discuss the role of experience in the recognition of facial expression of emotions and to analyze the biases towards emotional perception. The maturation of the structures that support the ability to recognize emotion goes through a sensitive period during adolescence, where experience may have greater impact on emotional recognition. Experiences of abuse, neglect, war, and stress generate a bias towards expressions of anger and sadness. Similarly, positive experiences generate a bias towards the expression of happiness. Only when people are able to use the facial expression of emotions as a channel for understanding an expression, will they be able to interact appropriately with their environment. This environment, in turn, will lead to experiences that modulate this capacity. Therefore, it is a self-regulatory process that can be directed through the implementation of intervention programs on emotional aspects.

  1. Gene expression of NMDA and AMPA receptors in different facial motor neurons.

    Science.gov (United States)

    Chen, Pei; Song, Jun; Luo, Linghui; Cheng, Qing; Xiao, Hongjun; Gong, Shusheng

    2016-01-01

    Facial motor neurons (FMNs) are involved in the remodeling of the facial nucleus in response to peripheral injury. This study aimed to examine the gene expression of the alpha-amino-3-hydroxy-5-methylisoxazole-4-propionic acid receptor (AMPAR) and the N-methyl-D-aspartate subtype of ionotropic glutamate receptor (NMDAR) in reinnervating and dormant FMNs after facial nerve axotomy. Animal study. Rat models of facial-facial anastomosis were set up and raised until the 90th day. By laser capture microdissection (LCM), the reinnervating neurons labeled by Fluoro-Ruby (FR) were first captured, and the remaining (dormant) neurons identified by Nissl staining were captured in the facial nucleus of the operated side. Total RNA from the two types of neurons was extracted, and the gene expression of AMPAR and NMDAR was studied by real-time quantitative reverse-transcriptase polymerase chain reaction (qRT-PCR). Messenger RNA (mRNA) of the AMPAR subunits (GluR1, GluR2, GluR3, and GluR4) and NMDAR subunits (NR1, NR2a, NR2b, NR2c, and NR2d) was detected in reinnervating and dormant neurons. The relative ratios showed that the expression of GluR1, GluR4, NR2a, NR2b, NR2c, and NR2d mRNA was lower, whereas the expression of GluR2, GluR3, and NR1 mRNA was higher, in dormant FMNs than in their reinnervating counterparts. LCM in combination with real-time qRT-PCR can be employed for the examination of gene expression of different FMNs in a heterogeneous nucleus. The adaptive changes in AMPAR and NMDAR subunit mRNA might dictate the regenerative fate of FMNs in response to the peripheral axotomy and thereby play a unique role in the pathogenesis of facial nerve injury and regeneration. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
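
    Relative ratios of the kind reported above are typically obtained with the standard 2^-ΔΔCt method for relative qRT-PCR quantification; the sketch below illustrates that generic computation (an assumption, since the abstract does not spell out the exact formula; the Ct values are invented).

        # Sketch: relative mRNA quantification with the 2^-ΔΔCt method.
        def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_ref_ctrl):
            """Fold change of a target gene vs. a reference gene, relative to control."""
            delta_ct = ct_target - ct_reference              # normalize to reference gene
            delta_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
            return 2.0 ** -(delta_ct - delta_ct_ctrl)        # 2^-ΔΔCt

        # e.g., a subunit in dormant vs. reinnervating FMNs (illustrative Ct values):
        print(relative_expression(ct_target=24.1, ct_reference=18.0,
                                  ct_target_ctrl=25.3, ct_ref_ctrl=18.1))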

  2. Perceptual, Categorical, and Affective Processing of Ambiguous Smiling Facial Expressions

    Science.gov (United States)

    Calvo, Manuel G.; Fernandez-Martin, Andres; Nummenmaa, Lauri

    2012-01-01

    Why is a face with a smile but non-happy eyes likely to be interpreted as happy? We used blended expressions in which a smiling mouth was incongruent with the eyes (e.g., angry eyes), as well as genuine expressions with congruent eyes and mouth (e.g., both happy or angry). Tasks involved detection of a smiling mouth (perceptual), categorization of…

  3. Comparison of facial expression in patients with obsessive-compulsive disorder and schizophrenia using the Facial Action Coding System: a preliminary study

    Directory of Open Access Journals (Sweden)

    Bersani G

    2012-12-01

    Full Text Available Giuseppe Bersani,1 Francesco Saverio Bersani,1,2 Giuseppe Valeriani,1 Maddalena Robiony,1 Annalisa Anastasia,1 Chiara Colletti,1,3 Damien Liberati,1 Enrico Capra,2 Adele Quartini,1 Elisa Polli1 (1Department of Medical-Surgical Sciences and Biotechnologies and 2Department of Neurology and Psychiatry, Sapienza University of Rome, Rome; 3Department of Neuroscience and Behaviour, Section of Psychiatry, Federico II University of Naples, Naples, Italy). Background: Research shows that impairment in the expression and recognition of emotion exists in multiple psychiatric disorders. The objective of the current study was to evaluate the way that patients with schizophrenia and those with obsessive-compulsive disorder experience and display emotions in relation to specific emotional stimuli, using the Facial Action Coding System (FACS). Methods: Thirty individuals participated in the study, comprising 10 patients with schizophrenia, 10 with obsessive-compulsive disorder, and 10 healthy controls. All participants underwent clinical sessions to evaluate their symptoms and watched emotion-eliciting video clips while facial activity was videotaped. Congruent/incongruent feeling of emotions and facial expression in reaction to emotions were evaluated. Results: Patients with schizophrenia and obsessive-compulsive disorder presented similarly incongruent emotive feelings and facial expressions (significantly worse than healthy participants). Correlations between the severity of the psychopathological condition (in particular, the severity of affective flattening) and impairment in recognition and expression of emotions were found. Discussion: Patients with obsessive-compulsive disorder and schizophrenia seem to present a similarly relevant impairment in both experiencing and displaying of emotions; this impairment may be seen as a chronic consequence of the same neurodevelopmental origin of the two diseases. Mimic expression could be seen as a behavioral indicator of affective…

  4. Event-related potentials to task-irrelevant changes in facial expressions

    Directory of Open Access Journals (Sweden)

    Astikainen Piia

    2009-07-01

    Full Text Available Abstract Background Numerous previous experiments have used the oddball paradigm to study change detection. This paradigm is applied here to study change detection of facial expressions in a context which demands abstraction of the emotional expression-related facial features among other changing facial features. Methods Event-related potentials (ERPs) were recorded in adult humans engaged in a demanding auditory task. In an oddball paradigm, repeated pictures of faces with a neutral expression ('standard', p = .9) were rarely replaced by pictures with a fearful ('fearful deviant', p = .05) or happy ('happy deviant', p = .05) expression. Importantly, facial identities changed from picture to picture. Thus, change detection required abstraction of facial expression from changes in several low-level visual features. Results ERPs to both types of deviants differed from those to standards. At occipital electrode sites, ERPs to deviants were more negative than ERPs to standards at 150–180 ms and 280–320 ms post-stimulus. A positive shift to deviants at fronto-central electrode sites in the analysis window of 130–170 ms post-stimulus was also found. Waveform analysis computed as point-wise comparisons between the amplitudes elicited by standards and deviants revealed that the occipital negativity emerged earlier to happy deviants than to fearful deviants (after 140 ms versus 160 ms post-stimulus, respectively). In turn, the anterior positivity was earlier to fearful deviants than to happy deviants (110 ms versus 120 ms post-stimulus, respectively). Conclusion ERP amplitude differences between emotional and neutral expressions indicated pre-attentive change detection of facial expressions among neutral faces. The posterior negative difference at 150–180 ms latency resembled visual mismatch negativity (vMMN), an index of pre-attentive change detection previously studied only for changes in low-level visual features. The positive anterior difference in…

  5. Coding and quantification of a facial expression for pain in lambs.

    Science.gov (United States)

    Guesgen, M J; Beausoleil, N J; Leach, M; Minot, E O; Stewart, M; Stafford, K J

    2016-11-01

    Facial expressions are routinely used to assess pain in humans, particularly those who are non-verbal. Recently, there has been an interest in developing coding systems for facial grimacing in non-human animals, such as rodents, rabbits, horses and sheep. The aims of this preliminary study were to: 1. Qualitatively identify facial feature changes in lambs experiencing pain as a result of tail-docking and compile these changes to create a Lamb Grimace Scale (LGS); 2. Determine whether human observers can use the LGS to differentiate tail-docked lambs from control lambs and differentiate lambs before and after docking; 3. Determine whether changes in facial action units of the LGS can be objectively quantified in lambs before and after docking; 4. Evaluate effects of restraint of lambs on observers' perceptions of pain using the LGS and on quantitative measures of facial action units. By comparing images of lambs before (no pain) and after (pain) tail-docking, the LGS was devised in consultation with scientists experienced in assessing facial expression in other species. The LGS consists of five facial action units: Orbital Tightening, Mouth Features, Nose Features, Cheek Flattening and Ear Posture. The aims of the study were addressed in two experiments. In Experiment I, still images of the faces of restrained lambs were taken from video footage before and after tail-docking (n=4) or sham tail-docking (n=3). These images were scored by a group of five naïve human observers using the LGS. Because lambs were restrained for the duration of the experiment, Ear Posture was not scored. The scores for the images were averaged to provide one value per feature per period, and then the scores for the four LGS action units were averaged to give one LGS score per lamb per period. In Experiment II, still images of the faces of nine lambs were taken before and after tail-docking. Stills were taken when lambs were restrained and unrestrained in each period. A different group of five…
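
    The scoring aggregation described above (observer scores averaged per action unit, then across the four scoreable units) can be made concrete with a small sketch; the values and column names below are ours, not data from the study.

        # Sketch: aggregating Lamb Grimace Scale (LGS) ratings to one score
        # per lamb per period (toy data).
        import pandas as pd

        ratings = pd.DataFrame({
            "lamb":   ["L1"] * 8,
            "period": ["pre"] * 4 + ["post"] * 4,
            "unit":   ["orbital", "mouth", "nose", "cheek"] * 2,
            # mean of the five observers' scores for each action unit:
            "score":  [0.2, 0.4, 0.2, 0.0, 1.2, 1.0, 0.8, 1.4],
        })

        lgs = (ratings.groupby(["lamb", "period"])["score"]
                      .mean()                 # average across the four action units
                      .rename("LGS"))
        print(lgs)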

  6. A facial expression image database and norm for Asian population: a preliminary report

    Science.gov (United States)

    Chen, Chien-Chung; Cho, Shu-ling; Horszowska, Katarzyna; Chen, Mei-Yen; Wu, Chia-Ching; Chen, Hsueh-Chih; Yeh, Yi-Yu; Cheng, Chao-Min

    2009-01-01

    We collected 6604 images of 30 models in eight types of facial expression: happiness, anger, sadness, disgust, fear, surprise, contempt and neutral. Among them, the 406 most representative images from 12 models were rated by more than 200 human raters for perceived emotion category and intensity. Such a large number of emotion categories, models and raters is sufficient for most serious expression recognition research both in psychology and in computer science. All the models and raters are of Asian background. Hence, this database can also be used when cultural background is a concern. In addition, 43 landmarks on each of the 291 rated frontal-view images were identified and recorded. This information should facilitate feature-based research on facial expression. Overall, the diversity in images and richness in information should make our database and norm useful for a wide range of research.

  7. Alexithymia, not autism, predicts poor recognition of emotional facial expressions.

    Science.gov (United States)

    Cook, Richard; Brewer, Rebecca; Shah, Punit; Bird, Geoffrey

    2013-05-01

    Despite considerable research into whether face perception is impaired in autistic individuals, clear answers have proved elusive. In the present study, we sought to determine whether co-occurring alexithymia (characterized by difficulties interpreting emotional states) may be responsible for face-perception deficits previously attributed to autism. Two experiments were conducted using psychophysical procedures to determine the relative contributions of alexithymia and autism to identity and expression recognition. Experiment 1 showed that alexithymia correlates strongly with the precision of expression attributions, whereas autism severity was unrelated to expression-recognition ability. Experiment 2 confirmed that alexithymia is not associated with impaired ability to detect expression variation; instead, results suggested that alexithymia is associated with difficulties interpreting intact sensory descriptions. Neither alexithymia nor autism was associated with biased or imprecise identity attributions. These findings accord with the hypothesis that the emotional symptoms of autism are in fact due to co-occurring alexithymia and that existing diagnostic criteria may need to be revised.

  8. Effects of task demands on the early neural processing of fearful and happy facial expressions.

    Science.gov (United States)

    Itier, Roxane J; Neath-Tavares, Karly N

    2017-05-15

    Task demands shape how we process environmental stimuli, but their impact on the early neural processing of facial expressions remains unclear. In a within-subject design, ERPs were recorded to the same fearful, happy and neutral facial expressions presented during a gender discrimination task, an explicit emotion discrimination task and an oddball detection task, the most studied tasks in the field. Using an eye tracker, fixation on the nose of the face was enforced using a gaze-contingent presentation. Task demands modulated amplitudes from 200 to 350 ms at occipito-temporal sites spanning the EPN component. Amplitudes were more negative for fearful than neutral expressions starting with the N170, from 150 to 350 ms, with a temporo-occipital distribution, whereas no clear effect of happy expressions was seen. Task and emotion effects never interacted in any time window or for the ERP components analyzed (P1, N170, EPN). Thus, whether emotion is explicitly discriminated or irrelevant for the task at hand, the neural correlates of fearful and happy facial expressions seem immune to these task demands during the first 350 ms of visual processing. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Seeing Mixed Emotions: The Specificity of Emotion Perception From Static and Dynamic Facial Expressions Across Cultures

    Science.gov (United States)

    Fang, Xia; Sauter, Disa A.; Van Kleef, Gerben A.

    2017-01-01

    Although perceivers often agree about the primary emotion that is conveyed by a particular expression, observers may concurrently perceive several additional emotions from a given facial expression. In the present research, we compared the perception of two types of nonintended emotions in Chinese and Dutch observers viewing facial expressions: emotions which were morphologically similar to the intended emotion and emotions which were morphologically dissimilar to the intended emotion. Findings were consistent across two studies and showed that (a) morphologically similar emotions were endorsed to a greater extent than dissimilar emotions and (b) Chinese observers endorsed nonintended emotions more than did Dutch observers. Furthermore, the difference between Chinese and Dutch observers was more pronounced for the endorsement of morphologically similar emotions than of dissimilar emotions. We also obtained consistent evidence that Dutch observers endorsed nonintended emotions that were congruent with the preceding expressions to a greater degree. These findings suggest that culture and morphological similarity both influence the extent to which perceivers see several emotions in a facial expression. PMID:29386689

  10. 3D facial expression recognition using maximum relevance minimum redundancy geometrical features

    Science.gov (United States)

    Rabiu, Habibu; Saripan, M. Iqbal; Mashohor, Syamsiah; Marhaban, Mohd Hamiruce

    2012-12-01

    In recent years, facial expression recognition (FER) has become an attractive research area which, besides the fundamental challenges it poses, finds application in areas such as human-computer interaction, clinical psychology, lie detection, pain assessment, and neurology. Generally the approaches to FER consist of three main steps: face detection, feature extraction and expression recognition. The recognition accuracy of FER hinges immensely on the relevance of the selected features in representing the target expressions. In this article, we present a person- and gender-independent 3D facial expression recognition method using maximum relevance minimum redundancy geometrical features. The aim is to detect a compact set of features that sufficiently represents the most discriminative features between the target classes. A multi-class one-against-one SVM classifier was employed to recognize the seven facial expressions: neutral, happy, sad, angry, fear, disgust, and surprise. An average recognition accuracy of 92.2% was recorded. Furthermore, inter-database homogeneity was investigated between two independent databases, BU-3DFE and UPM-3DFE; the results showed a strong homogeneity between the two databases.
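
    A minimal sketch of the maximum relevance minimum redundancy idea follows, assuming the common greedy mutual-information difference variant and scikit-learn estimators; the data are random stand-ins for the geometric features, and this is not the authors' implementation.

        # Sketch: greedy mRMR feature selection plus a one-against-one SVM.
        import numpy as np
        from sklearn.feature_selection import (mutual_info_classif,
                                               mutual_info_regression)
        from sklearn.svm import SVC

        def mrmr(X, y, k):
            n_features = X.shape[1]
            relevance = mutual_info_classif(X, y, random_state=0)
            selected = [int(np.argmax(relevance))]
            while len(selected) < k:
                best, best_score = None, -np.inf
                for j in range(n_features):
                    if j in selected:
                        continue
                    # redundancy: mean MI between candidate j and chosen features
                    red = np.mean([mutual_info_regression(
                        X[:, [j]], X[:, s], random_state=0)[0] for s in selected])
                    score = relevance[j] - red        # relevance minus redundancy
                    if score > best_score:
                        best, best_score = j, score
                selected.append(best)
            return selected

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 30))                # 30 geometric features (toy)
        y = rng.integers(0, 7, 120)                   # 7 expression classes (toy)
        keep = mrmr(X, y, k=8)
        clf = SVC(decision_function_shape="ovo").fit(X[:, keep], y)  # one-vs-one
        print(keep, clf.score(X[:, keep], y))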

  11. Seeing Mixed Emotions: The Specificity of Emotion Perception From Static and Dynamic Facial Expressions Across Cultures.

    Science.gov (United States)

    Fang, Xia; Sauter, Disa A; Van Kleef, Gerben A

    2018-01-01

    Although perceivers often agree about the primary emotion that is conveyed by a particular expression, observers may concurrently perceive several additional emotions from a given facial expression. In the present research, we compared the perception of two types of nonintended emotions in Chinese and Dutch observers viewing facial expressions: emotions which were morphologically similar to the intended emotion and emotions which were morphologically dissimilar to the intended emotion. Findings were consistent across two studies and showed that (a) morphologically similar emotions were endorsed to a greater extent than dissimilar emotions and (b) Chinese observers endorsed nonintended emotions more than did Dutch observers. Furthermore, the difference between Chinese and Dutch observers was more pronounced for the endorsement of morphologically similar emotions than of dissimilar emotions. We also obtained consistent evidence that Dutch observers endorsed nonintended emotions that were congruent with the preceding expressions to a greater degree. These findings suggest that culture and morphological similarity both influence the extent to which perceivers see several emotions in a facial expression.

  12. “Facing” leaders: facial expression and leadership perception.

    OpenAIRE

    Trichas, S.; Schyns, B.; Lord, R.G.; Hall, R.J.

    2017-01-01

    This experimental study investigated the effect of a leader's expression of happy versus nervous emotions on subsequent perceptions of leadership and ratings of traits associated with implicit leadership theories (ILTs). Being fast and universally understood, emotions are ideal stimuli for investigating the dynamic effects of ILTs, which were understood in this study in terms of the constraints that expressed emotions impose on the connectionist networks that activate ILTs. The experimental d...

  13. Identification and intensity of disgust: Distinguishing visual, linguistic and facial expressions processing in Parkinson disease.

    Science.gov (United States)

    Sedda, Anna; Petito, Sara; Guarino, Maria; Stracciari, Andrea

    2017-07-14

    Most studies to date show an impairment in the recognition of facial displays of disgust in Parkinson disease. A general impairment in disgust processing in patients with Parkinson disease might adversely affect their social interactions, given the relevance of this emotion for human relations. However, despite the importance of faces, disgust is also expressed through other formats of visual stimuli, such as sentences and visual images. The aim of our study was to explore disgust processing in a sample of patients affected by Parkinson disease, by means of various tests tackling not only facial recognition but also other formats of visual stimuli through which disgust can be recognized. Our results confirm that patients are impaired in recognizing facial displays of disgust. Further analyses show that patients are also impaired and slower for other facial expressions, with the sole exception of happiness. Notably, however, patients with Parkinson disease processed visual images and sentences as controls did. Our findings show a dissociation between different formats of visual stimuli of disgust, suggesting that Parkinson disease is not characterized by a general compromising of disgust processing, as often suggested. The involvement of the basal ganglia-frontal cortex system might spare some cognitive components of emotional processing, related to memory and culture, at least for disgust. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Impaired Facial Expression Recognition in Children with Temporal Lobe Epilepsy: Impact of Early Seizure Onset on Fear Recognition

    Science.gov (United States)

    Golouboff, Nathalie; Fiori, Nicole; Delalande, Olivier; Fohlen, Martine; Dellatolas, Georges; Jambaque, Isabelle

    2008-01-01

    The amygdala has been implicated in the recognition of facial emotions, especially fearful expressions, in adults with early-onset right temporal lobe epilepsy (TLE). The present study investigates the recognition of facial emotions in children and adolescents, 8-16 years old, with epilepsy. Twenty-nine subjects had TLE (13 right, 16 left) and…

  15. Emotional facial expressions and the attentional blink : Attenuated blink for angry and happy faces irrespective of social anxiety

    NARCIS (Netherlands)

    De Jong, P.J.; Koster, E.H.W.; van Wees-Cieraad, Rineke; Martens, Sander

    2009-01-01

    Although facial information is distributed over spatial as well as temporal domains, thus far research on selective attention to disapproving faces has concentrated predominantly on the spatial domain. This study examined the temporal characteristics of visual attention towards facial expressions by

  16. Joint recognition-expression impairment of facial emotions in Huntington's disease despite intact understanding of feelings.

    Science.gov (United States)

    Trinkler, Iris; Cleret de Langavant, Laurent; Bachoud-Lévi, Anne-Catherine

    2013-02-01

    Patients with Huntington's disease (HD), a neurodegenerative disorder that causes major motor impairments, also show cognitive and emotional deficits. While their deficit in recognising emotions has been explored in depth, little is known about their ability to express emotions and understand their feelings. If these faculties were impaired, patients might not only mis-read emotion expressions in others, but their own emotions might be mis-interpreted by others as well, or, thirdly, they might have difficulties understanding and describing their feelings. We compared the performance of recognition and expression of facial emotions in 13 HD patients with mild motor impairments but without significant bucco-facial abnormalities, and 13 controls matched for age and education. Emotion recognition was investigated in a forced-choice recognition test (FCR), and emotion expression by filming participants while they mimed the six basic emotional facial expressions (anger, disgust, fear, surprise, sadness and joy) to the experimenter. The films were then segmented into 60 stimuli per participant and four external raters performed a FCR on this material. Further, we tested understanding of feelings in self (alexithymia) and others (empathy) using questionnaires. Both recognition and expression were impaired across different emotions in HD compared to controls, and recognition and expression scores were correlated. By contrast, alexithymia and empathy scores were very similar in HD and controls. This suggests that emotion deficits in HD might be tied to the expression itself. Because similar emotion recognition-expression deficits are also found in Parkinson's disease and vascular lesions of the striatum, our results further confirm the importance of the striatum for emotion recognition and expression, while access to the meaning of feelings relies on a different brain network, and is spared in HD. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Inter-hemispherical functional coupling of EEG rhythms during the perception of facial emotional expressions.

    Science.gov (United States)

    Vecchio, Fabrizio; Babiloni, Claudio; Buffo, Paola; Rossini, Paolo Maria; Bertini, Mario

    2013-02-01

    Brain rhythms of both hemispheres are involved in the processing of emotional stimuli, but their inter-hemispheric interdependence is poorly understood. Here we tested the hypothesis that passive visual perception of facial emotional expressions is related to a coordination of the two hemispheres, as revealed by the inter-hemispherical functional coupling of brain electroencephalographic (EEG) rhythms. To this aim, EEG data were recorded in 14 subjects observing emotional faces with neutral, happy or sad facial expressions (about 33% for each class). The EEG data were analyzed by the directed transfer function (DTF), which estimates directional functional coupling of EEG rhythms. The EEG rhythms of interest were theta (about 4-6 Hz), alpha 1 (about 6-8 Hz), alpha 2 (about 8-10 Hz), alpha 3 (about 10-12 Hz), beta 1 (13-20 Hz), beta 2 (21-30 Hz), and gamma (31-44 Hz). In the frontal regions, inter-hemispherical DTF values were bidirectionally higher in amplitude across all frequency bands during the perception of faces with sad compared to neutral or happy expressions. These results suggest that the processing of emotionally negative facial expressions is related to an enhancement of a reciprocal inter-hemispherical flux of information in frontal cortex, possibly optimizing executive functions and motor control. A dichotomous view of hemispheric functional specialization does not take into account the remarkable reciprocal interactions between frontal areas of the two hemispheres during the processing of negative facial expressions. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
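
    For concreteness, a directed transfer function can be computed from a fitted multivariate autoregressive (MVAR) model as sketched below; this is our illustration with toy two-channel data, the sampling rate and model order are assumptions, and the band-wise averaging used in the study is omitted.

        # Sketch: DTF from an MVAR model fitted with statsmodels.
        import numpy as np
        from statsmodels.tsa.api import VAR

        FS = 256                              # sampling rate in Hz (assumed)
        rng = np.random.default_rng(0)
        data = rng.normal(size=(2048, 2))     # e.g., left/right frontal channels (toy)

        model = VAR(data).fit(5)              # MVAR model, order p = 5 (assumed)
        A = model.coefs                       # shape (p, n_chan, n_chan)
        p, n, _ = A.shape

        def dtf(freq_hz):
            """Normalized DTF matrix at one frequency; entry [i, j] is flow j -> i."""
            z = np.exp(-2j * np.pi * freq_hz / FS * np.arange(1, p + 1))
            A_f = np.eye(n) - np.tensordot(z, A, axes=(0, 0))   # A(f)
            H = np.linalg.inv(A_f)                              # transfer matrix
            return np.abs(H) / np.sqrt((np.abs(H) ** 2).sum(axis=1, keepdims=True))

        print(dtf(10.0))    # e.g., an alpha-band frequency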

  18. Information processing of motion in facial expression and the geometry of dynamical systems

    Science.gov (United States)

    Assadi, Amir H.; Eghbalnia, Hamid; McMenamin, Brenton W.

    2005-01-01

    An interesting problem in the analysis of video data concerns the design of algorithms that detect perceptually significant features in an unsupervised manner, for instance, methods of machine learning for automatic classification of human expression. A geometric formulation of this genre of problems could be modeled with the help of perceptual psychology. In this article, we outline one approach for a special case where video segments are to be classified according to expression of emotion or other similar facial motions. The encoding of realistic facial motions that convey expression of emotions for a particular person P forms a parameter space XP whose study reveals the "objective geometry" for the problem of unsupervised feature detection from video. The geometric features and discrete representation of the space XP are independent of subjective evaluations by observers. While the "subjective geometry" of XP varies from observer to observer, levels of sensitivity and variation in the perception of facial expressions appear to share a certain level of universality among members of similar cultures. Therefore, the statistical geometry of invariants of XP for a sample of the population could provide effective algorithms for extraction of such features. In cases where the frequency of events is sufficiently large in the sample data, a suitable framework could be provided to facilitate the information-theoretic organization and study of statistical invariants of such features. This article provides a general approach to encode motion in terms of a particular genre of dynamical systems and the geometry of their flow. An example is provided to illustrate the general theory.

  19. Gaze cuing of attention in snake phobic women: the influence of facial expression

    Directory of Open Access Journals (Sweden)

    Carolina ePletti

    2015-04-01

    Full Text Available Only a few studies investigated whether animal phobics exhibit attentional biases in contexts where no phobic stimuli are present. Among these, recent studies provided evidence for a bias toward facial expressions of fear and disgust in animal phobics. Such findings may be due to the fact that these expressions could signal the presence of a phobic object in the surroundings. To test this hypothesis and further investigate attentional biases for emotional faces in animal phobics, we conducted an experiment using a gaze-cuing paradigm in which participants’ attention was driven by the task-irrelevant gaze of a centrally presented face. We employed dynamic negative facial expressions of disgust, fear and anger and found an enhanced gaze-cuing effect in snake phobics as compared to controls, irrespective of facial expression. These results provide evidence of a general hypervigilance in animal phobics in the absence of phobic stimuli, and indicate that research on specific phobias should not be limited to symptom provocation paradigms.
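
    The gaze-cuing effect underlying this comparison is conventionally quantified as the reaction-time cost of invalid relative to valid cues; the sketch below (toy data, assumed analysis form) computes per-participant effects and compares the two groups.

        # Sketch: per-participant gaze-cuing effects and a group comparison.
        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(0)

        def cuing_effect(rt_valid, rt_invalid):
            return rt_invalid.mean() - rt_valid.mean()   # ms; larger = stronger cuing

        phobic = np.array([cuing_effect(rng.normal(420, 30, 80),
                                        rng.normal(455, 30, 80)) for _ in range(20)])
        control = np.array([cuing_effect(rng.normal(420, 30, 80),
                                         rng.normal(435, 30, 80)) for _ in range(20)])
        t, pval = ttest_ind(phobic, control)
        print(f"phobic {phobic.mean():.1f} ms vs control {control.mean():.1f} ms, "
              f"p = {pval:.3f}")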

  20. Expression of DPP6 in Meckel's cartilage and tooth germs during mouse facial development.

    Science.gov (United States)

    Du, J; Fan, Z; Ma, X; Wu, Y; Liu, S; Gao, Y; Shen, Y; Fan, M; Wang, S

    2014-01-01

    Dipeptidyl peptidase-like protein 6 (DPP6), a member of the dipeptidyl aminopeptidase family, plays distinct roles in brain development, but its expression during embryonic development of Meckel's cartilage and tooth germs is unknown. We analyzed the expression pattern of DPP6 during the development of Meckel's cartilage and tooth germs using in situ hybridization. DPP6 was detected in different patterns in Meckel's cartilage and tooth germs during mouse facial development in embryos from 11.5 to 13.5 days post-coitum (dpc). The expression pattern of DPP6 suggests that it may be involved in mandible and tooth development.

  1. Feature Fusion Algorithm for Multimodal Emotion Recognition from Speech and Facial Expression Signal

    Directory of Open Access Journals (Sweden)

    Han Zhiyan

    2016-01-01

    Full Text Available In order to overcome the limitations of single-mode emotion recognition, this paper describes a novel multimodal emotion recognition algorithm that takes the speech signal and the facial expression signal as its research subjects. First, the speech signal features and facial expression signal features are fused, sample sets are obtained by sampling with replacement, and classifiers are then trained with a back-propagation neural network (BPNN). Second, the difference between two classifiers is measured by a double-error-difference selection strategy. Finally, the final recognition result is obtained by the majority voting rule. Experiments show the method improves the accuracy of emotion recognition by giving full play to the advantages of decision-level fusion and feature-level fusion, bringing the whole fusion process closer to human emotion recognition, with a recognition rate of 90.4%.
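
    A minimal sketch of the described fusion scheme follows, under stated assumptions: scikit-learn's MLPClassifier stands in for the paper's BP neural network, the double-error-difference selection step is omitted, and all feature values are random stand-ins.

        # Sketch: feature-level fusion, bootstrap-trained classifiers, and a
        # majority vote at the decision level.
        import numpy as np
        from scipy.stats import mode
        from sklearn.neural_network import MLPClassifier

        rng = np.random.default_rng(0)
        speech = rng.normal(size=(200, 12))      # speech features (toy)
        face = rng.normal(size=(200, 20))        # facial-expression features (toy)
        X = np.hstack([speech, face])            # feature-level fusion
        y = rng.integers(0, 4, 200)              # 4 emotion classes (toy)

        ensemble = []
        for seed in range(5):                    # 5 bootstrap classifiers
            idx = rng.integers(0, len(X), len(X))    # sampling with replacement
            clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                                random_state=seed).fit(X[idx], y[idx])
            ensemble.append(clf)

        votes = np.stack([clf.predict(X) for clf in ensemble])   # (5, n_samples)
        final = mode(votes, axis=0, keepdims=False).mode         # majority vote
        print((final == y).mean())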

  2. Towards Emotion Detection in Educational Scenarios from Facial Expressions and Body Movements through Multimodal Approaches

    Directory of Open Access Journals (Sweden)

    Mar Saneiro

    2014-01-01

    Full Text Available We report current findings on the use of video recordings of facial expressions and body movements to provide affective personalized support in an educational context, from an enriched multimodal emotion detection approach. In particular, we describe an annotation methodology to tag facial expressions and body movements that correspond to changes in the affective states of learners while dealing with cognitive tasks in a learning process. The ultimate goal is to combine these annotations with additional affective information collected during experimental learning sessions from different sources, such as qualitative, self-reported, physiological, and behavioral information. Together, these data are used to train data-mining algorithms that automatically identify changes in the learners’ affective states when dealing with cognitive tasks, which helps to provide personalized emotional support.

  3. Perceptions of Emotion from Facial Expressions are Not Culturally Universal: Evidence from a Remote Culture

    Science.gov (United States)

    Gendron, Maria; Roberson, Debi; van der Vyver, Jacoba Marietta; Barrett, Lisa Feldman

    2014-01-01

    It is widely believed that certain emotions are universally recognized in facial expressions. Recent evidence indicates that Western perceptions (e.g., scowls as anger) depend on cues to US emotion concepts embedded in experiments. Because such cues are standard features in methods used in cross-cultural experiments, we hypothesized that evidence of universality depends on this conceptual context. In our study, participants from the US and the Himba ethnic group sorted images of posed facial expressions into piles by emotion type. Without cues to emotion concepts, Himba participants did not show the presumed “universal” pattern, whereas US participants produced a pattern with presumed universal features. With cues to emotion concepts, participants in both cultures produced sorts that were closer to the presumed “universal” pattern, although substantial cultural variation persisted. Our findings indicate that perceptions of emotion are not universal, but depend on cultural and conceptual contexts. PMID:24708506

  4. Memory for facial expression is influenced by the background music playing during study.

    Science.gov (United States)

    Woloszyn, Michael R; Ewert, Laura

    2012-01-01

    The effect of the emotional quality of study-phase background music on subsequent recall for happy and sad facial expressions was investigated. Undergraduates (N = 48) viewed a series of line drawings depicting a happy or sad child in a variety of environments that were each accompanied by happy or sad music. Although memory for faces was very accurate, emotionally incongruent background music biased subsequent memory for facial expressions, increasing the likelihood that happy faces were recalled as sad when sad music was previously heard, and that sad faces were recalled as happy when happy music was previously heard. Overall, the results indicated that when recalling a scene, the emotional tone is set by an integration of stimulus features from several modalities.

  5. Pose and Expression Independent Facial Landmark Localization Using Dense-SURF and the Hausdorff Distance.

    Science.gov (United States)

    Sangineto, Enver

    2013-03-01

    We present an approach to automatic localization of facial feature points which deals with pose, expression, and identity variations combining 3D shape models with local image patch classification. The latter is performed by means of densely extracted SURF-like features, which we call DU-SURF, while the former is based on a multiclass version of the Hausdorff distance to address local classification errors and nonvisible points. The final system is able to localize facial points in real-world scenarios, dealing with out of plane head rotations, expression changes, and different lighting conditions. Extensive experimentation with the proposed method has been carried out showing the superiority of our approach with respect to other state-of-the-art systems. Finally, DU-SURF features have been compared with other modern features and we experimentally demonstrate their competitive classification accuracy and computational efficiency.
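
    The Hausdorff machinery underlying the fitting step can be illustrated with SciPy's directed Hausdorff distance (sketch below with toy point sets; the paper's multiclass extension and the DU-SURF classification stage are not shown).

        # Sketch: directed and symmetric Hausdorff distances between a model
        # point set and detected image locations.
        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        model_pts = np.array([[10.0, 12.0], [30.0, 14.0], [20.0, 30.0]])   # toy model points
        detected = np.array([[11.0, 11.5], [29.0, 15.0], [21.0, 29.0],
                             [50.0, 50.0]])                                # toy detections

        d_forward = directed_hausdorff(model_pts, detected)[0]   # model -> detections
        d_backward = directed_hausdorff(detected, model_pts)[0]  # detections -> model
        hausdorff = max(d_forward, d_backward)                   # symmetric distance
        print(d_forward, hausdorff)
        # The directed distance model -> detections ignores spurious detections;
        # averaged ("modified") variants further damp local classification errors.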

  6. Test battery for measuring the perception and recognition of facial expressions of emotion

    Science.gov (United States)

    Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner

    2014-01-01

    Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528
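
    Internal consistency of a task score, as reported above, is commonly summarized with Cronbach's alpha; a minimal sketch of that computation on simulated item scores follows (assumed analysis form, toy data).

        # Sketch: Cronbach's alpha as an internal-consistency estimate.
        import numpy as np

        def cronbach_alpha(items):
            """items: (n_subjects, n_items) matrix of item scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(0)
        ability = rng.normal(size=(269, 1))                       # latent ability, 269 subjects
        items = ability + rng.normal(scale=1.0, size=(269, 20))   # 20 task items (toy)
        print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")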

  7. Perceptions of emotion from facial expressions are not culturally universal: evidence from a remote culture.

    Science.gov (United States)

    Gendron, Maria; Roberson, Debi; van der Vyver, Jacoba Marietta; Barrett, Lisa Feldman

    2014-04-01

    It is widely believed that certain emotions are universally recognized in facial expressions. Recent evidence indicates that Western perceptions (e.g., scowls as anger) depend on cues to U.S. emotion concepts embedded in experiments. Because such cues are standard features in methods used in cross-cultural experiments, we hypothesized that evidence of universality depends on this conceptual context. In our study, participants from the United States and the Himba ethnic group from the Keunene region of northwestern Namibia sorted images of posed facial expressions into piles by emotion type. Without cues to emotion concepts, Himba participants did not show the presumed "universal" pattern, whereas U.S. participants produced a pattern with presumed universal features. With cues to emotion concepts, participants in both cultures produced sorts that were closer to the presumed "universal" pattern, although substantial cultural variation persisted. Our findings indicate that perceptions of emotion are not universal, but depend on cultural and conceptual contexts.

  8. Genetic Variations in COMT and DRD2 Modulate Attentional Bias for Affective Facial Expressions

    Science.gov (United States)

    Gong, Pingyuan; Shen, Guomin; Li, She; Zhang, Guoping; Fang, Hongchao; Lei, Lin; Zhang, Peizhe; Zhang, Fuchang

    2013-01-01

    Studies have revealed that catechol-O-methyltransferase (COMT) and the dopamine D2 receptor (DRD2) modulate human attentional bias for palatable food or tobacco. However, the existing evidence about the modulation of attentional bias for facial expressions by COMT and DRD2 is still limited. In this study, 650 college students were genotyped with regard to the COMT Val158Met and DRD2 TaqI A polymorphisms, and attentional bias for facial expressions was assessed using a spatial cueing task. The results indicated that COMT Val158Met underpinned the individual difference in attentional bias for negative emotional expressions (P = 0.03): Met carriers showed more engagement bias for negative expressions than Val/Val homozygotes. In contrast, DRD2 TaqI A underpinned the individual difference in attentional bias for positive expressions (P = 0.003): individuals with the TT genotype showed much more engagement bias for positive expressions than individuals with the CC genotype. Moreover, the two genes showed significant interaction effects on engagement bias for negative and positive expressions (P = 0.046, P = 0.005). These findings suggest that individual differences in attentional bias for emotional expressions are partially underpinned by genetic polymorphisms in COMT and DRD2. PMID:24312552

  9. The Odor Context Facilitates the Perception of Low-Intensity Facial Expressions of Emotion.

    Science.gov (United States)

    Leleu, Arnaud; Demily, Caroline; Franck, Nicolas; Durand, Karine; Schaal, Benoist; Baudouin, Jean-Yves

    2015-01-01

    It has been established that the recognition of facial expressions integrates contextual information. In this study, we aimed to clarify the influence of contextual odors. The participants were asked to match a target face varying in expression intensity with non-ambiguous expressive faces. Intensity variations in the target faces were designed by morphing expressive faces with neutral faces. In addition, the influence of verbal information was assessed by providing half the participants with the emotion names. Odor cues were manipulated by placing participants in a pleasant (strawberry), aversive (butyric acid), or no-odor control context. The results showed two main effects of the odor context. First, the minimum amount of visual information required to perceive an expression was lowered when the odor context was emotionally congruent: happiness was correctly perceived at lower intensities in the faces displayed in the pleasant odor context, and the same phenomenon occurred for disgust and anger in the aversive odor context. Second, the odor context influenced the false perception of expressions that were not used in target faces, with distinct patterns according to the presence of emotion names. When emotion names were provided, the aversive odor context decreased intrusions for disgust ambiguous faces but increased them for anger. When the emotion names were not provided, this effect did not occur and the pleasant odor context elicited an overall increase in intrusions for negative expressions. We conclude that olfaction plays a role in the way facial expressions are perceived in interaction with other contextual influences such as verbal information.

  10. How Do Typically Developing Deaf Children and Deaf Children with Autism Spectrum Disorder Use the Face When Comprehending Emotional Facial Expressions in British Sign Language?

    Science.gov (United States)

    Denmark, Tanya; Atkinson, Joanna; Campbell, Ruth; Swettenham, John

    2014-01-01

    Facial expressions in sign language carry a variety of communicative features. While emotion can modulate a spoken utterance through changes in intonation, duration and intensity, in sign language specific facial expressions presented concurrently with a manual sign perform this function. When deaf adult signers cannot see facial features, their…

  11. Comparative gene expression analysis of avian embryonic facial structures reveals new candidates for human craniofacial disorders.

    Science.gov (United States)

    Brugmann, S A; Powder, K E; Young, N M; Goodnough, L H; Hahn, S M; James, A W; Helms, J A; Lovett, M

    2010-03-01

    Mammals and birds have common embryological facial structures, and appear to employ the same molecular genetic developmental toolkit. We utilized natural variation found in bird beaks to investigate what genes drive vertebrate facial morphogenesis. We employed cross-species microarrays to describe the molecular genetic signatures, developmental signaling pathways and the spectrum of transcription factor (TF) gene expression changes that differ between cranial neural crest cells in the developing beaks of ducks, quails and chickens. Surprisingly, we observed that the neural crest cells established a species-specific TF gene expression profile that predates morphological differences between the species. A total of 232 genes were differentially expressed between the three species. Twenty-two of these genes, including Fgfr2, Jagged2, Msx2, Satb2 and Tgfb3, have been previously implicated in a variety of mammalian craniofacial defects. Seventy-two of the differentially expressed genes overlap with un-cloned loci for human craniofacial disorders, suggesting that our data will provide a valuable candidate gene resource for human craniofacial genetics. The most dramatic changes between species were in the Wnt signaling pathway, including a 20-fold up-regulation of Dkk2, Fzd1 and Wnt1 in the duck compared with the other two species. We functionally validated these changes by demonstrating that spatial domains of Wnt activity differ in avian beaks, and that Wnt signals regulate Bmp pathway activity and promote regional growth in facial prominences. This study is the first of its kind, extending previous work on Darwin's finches, and provides the first large-scale insights into cross-species facial morphogenesis.
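
    Differential expression screens of this kind generically reduce to per-gene tests with multiple-comparison control; the sketch below (toy data; not the authors' cross-species microarray pipeline) illustrates a per-gene t-test with Benjamini-Hochberg FDR correction.

        # Sketch: flagging differentially expressed genes between two species.
        import numpy as np
        from scipy.stats import ttest_ind
        from statsmodels.stats.multitest import multipletests

        rng = np.random.default_rng(0)
        n_genes = 1000
        duck = rng.normal(size=(n_genes, 4))      # log2 expression, 4 arrays (toy)
        chick = rng.normal(size=(n_genes, 4))
        duck[:25] += 3.0                          # a few truly shifted genes

        t, p = ttest_ind(duck, chick, axis=1)     # one test per gene
        reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
        print(int(reject.sum()), "genes differentially expressed at FDR 0.05")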

  12. Let the Avatar Brighten Your Smile: Effects of Enhancing Facial Expressions in Virtual Environments

    OpenAIRE

    Oh, Soo Youn; Bailenson, Jeremy; Krämer, Nicole; Li, Benjamin

    2016-01-01

    Previous studies demonstrated the positive effects of smiling on interpersonal outcomes. The present research examined if enhancing one's smile in a virtual environment could lead to a more positive communication experience. In the current study, participants' facial expressions were tracked and mapped on a digital avatar during a real-time dyadic conversation. The avatar's smile was rendered such that it was either a slightly enhanced version or a veridical version of the participant's actua...

  13. Alexithymia, not fibromyalgia, predicts the attribution of pain to anger-related facial expressions.

    Science.gov (United States)

    Tella, Marialaura Di; Enrici, Ivan; Castelli, Lorys; Colonna, Fabrizio; Fusaro, Enrico; Ghiggia, Ada; Romeo, Annunziata; Tesio, Valentina; Adenzato, Mauro

    2017-11-08

    Fibromyalgia (FM) is a syndrome characterized by chronic, widespread musculoskeletal pain, occurring predominantly in women. Previous studies have shown that patients with FM display a pattern of selective processing or cognitive bias which fosters the encoding of pain-related information. The present study tested the hypothesis of an increased attribution of pain to facial expressions of emotions (FEE) in patients with FM. As previous studies have shown that alexithymia influences the processing of facial expressions, independent of specific clinical conditions, we also investigated whether alexithymia, rather than FM per se, influenced attribution of pain to FEE. One hundred and twenty-three women (41 with FM, 82 healthy controls, HC) were enrolled in this cross-sectional case-control study. We adopted two pain-attribution tasks, the Emotional Pain Estimation and the Emotional Pain Ascription, both using a modified version of the Ekman 60 Faces Test. Psychological distress was assessed using the Hospital Anxiety and Depression Scale, and alexithymia was assessed using the Toronto Alexithymia Scale. Patients with FM did not report increased attribution of pain to FEE. Alexithymic individuals demonstrated no specific problem in the recognition of basic emotions, but attributed significantly more pain to angry facial expressions. Our study involved a relatively small sample size. The use of self-reported instruments might have led to underestimation of the presence of frank alexithymia in individuals with borderline cut-off scores. Alexithymia, rather than FM per se, plays a key role in explaining the observed differences in pain attribution to anger-related facial expressions. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Automatic change detection to facial expressions in adolescents

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Jiannong, Shi

    2016-01-01

    …recruited to complete an emotional oddball task featuring one happy and one fearful condition. Event-related potentials were measured via electroencephalography and electrooculography to detect visual mismatch negativity (vMMN) with regard to the automatic detection of changes… automatic processing of fearful faces than happy faces. The present study indicated that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and sheds light on the neurodevelopment of automatic processes concerning social-emotional information.

  15. Perception of emotional facial expressions in individuals with high Autism-spectrum Quotient (AQ)

    Directory of Open Access Journals (Sweden)

    Ervin Poljac

    2012-10-01

    Full Text Available Autism is characterized by difficulties in social interaction, communication, restrictive and repetitive behaviours, and specific impairments in emotional processing. The present study employed the Autism Spectrum Quotient (Baron-Cohen et al., 2006) to quantify autistic traits in a group of 260 healthy individuals and to investigate whether this measure is related to the perception of facial emotional expressions. The emotional processing of twelve participants who scored significantly higher than the average on the AQ was compared to that of twelve participants with significantly lower AQ scores. Perception of emotional expressions was estimated by the Facial Recognition Task (Montagne et al., 2007). There were significant differences between the two groups with regard to accuracy and sensitivity of the perception of emotional facial expressions. Specifically, the group with high AQ scores was less accurate and needed higher emotional content to recognize emotions of anger, disgust, happiness and sadness. This result implies a selective impairment that might be helpful in understanding the psychopathology of autism spectrum disorders.

  16. Direction of Amygdala-Neocortex Interaction During Dynamic Facial Expression Processing.

    Science.gov (United States)

    Sato, Wataru; Kochiyama, Takanori; Uono, Shota; Yoshikawa, Sakiko; Toichi, Motomi

    2017-03-01

    Dynamic facial expressions of emotion strongly elicit multifaceted emotional, perceptual, cognitive, and motor responses. Neuroimaging studies revealed that some subcortical (e.g., amygdala) and neocortical (e.g., superior temporal sulcus and inferior frontal gyrus) brain regions and their functional interaction were involved in processing dynamic facial expressions. However, the direction of the functional interaction between the amygdala and the neocortex remains unknown. To investigate this issue, we re-analyzed functional magnetic resonance imaging (fMRI) data from 2 studies and magnetoencephalography (MEG) data from 1 study. First, a psychophysiological interaction analysis of the fMRI data confirmed the functional interaction between the amygdala and neocortical regions. Then, dynamic causal modeling analysis was used to compare models with forward, backward, or bidirectional effective connectivity between the amygdala and neocortical networks in the fMRI and MEG data. The results consistently supported the model of effective connectivity from the amygdala to the neocortex. A further increasing-time-window analysis of the MEG data demonstrated that this model was valid from 200 ms after stimulus onset. These data suggest that emotional processing in the amygdala rapidly modulates some neocortical processing, such as perception, recognition, and motor mimicry, when observing dynamic facial expressions of emotion. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Comparative Study of Human Age Estimation with or without Preclassification of Gender and Facial Expression

    Science.gov (United States)

    Cho, So Ra; Shin, Kwang Yong; Bang, Jae Won; Park, Kang Ryoung

    2014-01-01

    Age estimation has many useful applications, such as age-based face classification, finding lost children, surveillance monitoring, and face recognition invariant to age progression. Among the many factors affecting age estimation accuracy, gender and facial expression can have negative effects. In our research, the effects of gender and facial expression on age estimation using the support vector regression (SVR) method are investigated. Our research is novel in the following four ways. First, the accuracies of age estimation using a single-level local binary pattern (LBP) and a multilevel LBP (MLBP) are compared, and MLBP shows better performance as a global extractor of texture features. Second, we compare the accuracies of age estimation using global features extracted by MLBP, local features extracted by Gabor filtering, and the combination of the two methods; results show that the third approach is the most accurate. Third, the accuracies of age estimation with and without preclassification of facial expression are compared and analyzed. Fourth, those with and without preclassification of gender are compared and analyzed. The experimental results show the effectiveness of gender preclassification in age estimation. PMID:25295308
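
    The texture-feature half of such a pipeline can be sketched as follows, assuming scikit-image and scikit-learn; the radii, kernel, and C value are illustrative choices, not the paper's settings:

      import numpy as np
      from skimage.feature import local_binary_pattern
      from sklearn.svm import SVR

      def lbp_histogram(gray_face, radius=1, points=8):
          """Uniform LBP histogram of one grayscale face (single-level LBP)."""
          codes = local_binary_pattern(gray_face, P=points, R=radius, method="uniform")
          hist, _ = np.histogram(codes, bins=points + 2,
                                 range=(0, points + 2), density=True)
          return hist

      def multilevel_lbp(gray_face, radii=(1, 2, 3)):
          """MLBP: concatenate LBP histograms computed at several radii."""
          return np.concatenate([lbp_histogram(gray_face, r) for r in radii])

      # X: stacked MLBP vectors for training faces, y: ground-truth ages
      # age_model = SVR(kernel="rbf", C=10.0).fit(X, y)
      # predicted_age = age_model.predict(multilevel_lbp(test_face)[None, :])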

  18. Age-Related Response Bias in the Decoding of Sad Facial Expressions

    Directory of Open Access Journals (Sweden)

    Mara Fölster

    2015-10-01

    Full Text Available Recent studies have found that age is negatively associated with the accuracy of decoding emotional facial expressions; this effect of age was found for actors as well as for raters. Given that motivational differences and stereotypes may bias the attribution of emotion, the aim of the present study was to explore whether these age effects are due to response bias, that is, the unbalanced use of response categories. Thirty younger raters (19–30 years) and thirty older raters (65–81 years) viewed video clips of younger and older actors representing the same age ranges, and decoded their facial expressions. We computed both raw hit rates and bias-corrected hit rates to assess the influence of potential age-related response bias on decoding accuracy. Whereas raw hit rates indicated significant effects of both the actors' and the raters' ages on decoding accuracy for sadness, these age effects were no longer significant when response bias was corrected. Our results suggest that age effects on the accuracy of decoding facial expressions may be due, at least in part, to age-related response bias.
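
    A standard correction of this kind is Wagner's (1993) unbiased hit rate, which discounts response categories a rater overuses; whether the study used this exact formula is an assumption here. A minimal sketch:

      import numpy as np

      def hit_rates(confusion):
          """Raw and bias-corrected hit rates from a decoding confusion matrix.

          confusion[i, j] counts trials on which emotion i was displayed and
          the rater answered emotion j. The unbiased hit rate divides squared
          hits by (times displayed * times chosen), so a rater who answers
          "sad" indiscriminately gains no spurious accuracy for sadness.
          """
          confusion = np.asarray(confusion, dtype=float)
          hits = np.diag(confusion)
          raw = hits / confusion.sum(axis=1)
          unbiased = hits ** 2 / (confusion.sum(axis=1) * confusion.sum(axis=0))
          return raw, unbiased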

  19. Videos of conspecifics elicit interactive looking patterns and facial expressions in monkeys.

    Science.gov (United States)

    Mosher, Clayton P; Zimmerman, Prisca E; Gothard, Katalin M

    2011-08-01

    A broader understanding of the neural basis of social behavior in primates requires the use of species-specific stimuli that elicit spontaneous, but reproducible and tractable behaviors. In this context of natural behaviors, individual variation can further inform about the factors that influence social interactions. To approximate natural social interactions similar to those documented by field studies, we used unedited video footage to induce spontaneous facial expressions and looking patterns in viewer monkeys in a laboratory setting. Three adult male monkeys (Macaca mulatta), previously behaviorally and genetically (5-HTTLPR) characterized, were monitored while they watched 10 s video segments depicting unfamiliar monkeys (movie monkeys) displaying affiliative, neutral, and aggressive behaviors. The gaze and head orientation of the movie monkeys alternated between "averted" and "directed" at the viewer. The viewers were not reinforced for watching the movies; thus, their looking patterns indicated their interest and social engagement with the stimuli. The behavior of the movie monkey accounted for differences in the looking patterns and facial expressions displayed by the viewers. We also found multiple significant differences in the behavior of the viewers that correlated with their interest in these stimuli. These socially relevant dynamic stimuli elicited spontaneous social behaviors, such as eye-contact-induced reciprocation of facial expression, gaze aversion, and gaze following, that were previously not observed in response to static images. This approach opens a unique opportunity to understanding the mechanisms that trigger spontaneous social behaviors in humans and nonhuman primates. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  20. Geometric Feature-Based Facial Expression Recognition in Image Sequences Using Multi-Class AdaBoost and Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Joonwhoan Lee

    2013-06-01

    Full Text Available Facial expressions are widely used in the behavioral interpretation of emotions, cognitive science, and social interactions. In this paper, we present a novel method for fully automatic facial expression recognition in facial image sequences. As the facial expression evolves over time, facial landmarks are automatically tracked in consecutive video frames using displacement estimation based on elastic bunch graph matching. Feature vectors from individual landmarks, as well as pairs of landmarks, are extracted from the tracking results and normalized with respect to the first frame in the sequence. The prototypical expression sequence for each class of facial expression is formed by taking the median of the landmark tracking results from the training facial expression sequences. Multi-class AdaBoost, with the dynamic time warping similarity distance between the feature vector of the input facial expression and the prototypical facial expression as a weak classifier, is used to select the subset of discriminative feature vectors. Finally, two methods for facial expression recognition are presented: using multi-class AdaBoost with dynamic time warping, or using a support vector machine on the boosted feature vectors. The results on the Cohn-Kanade (CK+) facial expression database show a recognition accuracy of 95.17% and 97.35% using multi-class AdaBoost and support vector machines, respectively.
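
    The dynamic time warping distance at the core of this weak classifier can be sketched as below (a plain O(nm) implementation with Euclidean frame distance; the paper's exact variant and any band constraints are not specified):

      import numpy as np

      def dtw_distance(seq_a, seq_b):
          """DTW distance between two (frames x features) landmark sequences."""
          n, m = len(seq_a), len(seq_b)
          cost = np.full((n + 1, m + 1), np.inf)
          cost[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
                  cost[i, j] = d + min(cost[i - 1, j],
                                       cost[i, j - 1],
                                       cost[i - 1, j - 1])
          return cost[n, m]

      # classify against the per-class median prototypes described above:
      # label = min(prototypes, key=lambda c: dtw_distance(input_seq, prototypes[c]))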

  1. Discrimination thresholds for smiles in genuine versus blended facial expressions

    Directory of Open Access Journals (Sweden)

    Aida Gutiérrez-García

    2015-12-01

    Full Text Available Genuine smiles convey enjoyment or positive affect, whereas fake smiles conceal or leak negative feelings or motives (e.g., arrogance, contempt, or embarrassment), or merely show affiliation or politeness. We investigated the minimum display time (i.e., threshold), ranging from 50 to 1,000 ms, that is necessary to distinguish a fake from a genuine smile. Variants of fake smiles were created by varying the type of non-happy eyes (e.g., neutral, angry, sad) in blended expressions with a smiling mouth. Participants judged whether faces conveyed happiness or not. Results showed that thresholds vary as a function of the type of eyes: blended expressions with angry eyes are discriminated early (100 ms), followed by those with disgusted, fearful, and sad eyes (from 250 to 500 ms), surprised eyes (750 ms), and neutral eyes (from 750 to 1,000 ms). An important issue for further research is the extent to which such discrimination threshold differences depend on physical or affective factors.

  2. To Capture a Face: A Novel Technique for the Analysis and Quantification of Facial Expressions in American Sign Language

    Science.gov (United States)

    Grossman, Ruth B.; Kegl, Judy

    2006-01-01

    American Sign Language uses the face to express vital components of grammar in addition to the more universal expressions of emotion. The study of ASL facial expressions has focused mostly on the perception and categorization of various expression types by signing and nonsigning subjects. Only a few studies of the production of ASL facial…

  3. Spatial and temporal analysis of gene expression during growth and fusion of the mouse facial prominences.

    Science.gov (United States)

    Feng, Weiguo; Leach, Sonia M; Tipney, Hannah; Phang, Tzulip; Geraci, Mark; Spritz, Richard A; Hunter, Lawrence E; Williams, Trevor

    2009-12-16

    Orofacial malformations resulting from genetic and/or environmental causes are frequent human birth defects yet their etiology is often unclear because of insufficient information concerning the molecular, cellular and morphogenetic processes responsible for normal facial development. We have, therefore, derived a comprehensive expression dataset for mouse orofacial development, interrogating three distinct regions - the mandibular, maxillary and frontonasal prominences. To capture the dynamic changes in the transcriptome during face formation, we sampled five time points between E10.5-E12.5, spanning the developmental period from establishment of the prominences to their fusion to form the mature facial platform. Seven independent biological replicates were used for each sample ensuring robustness and quality of the dataset. Here, we provide a general overview of the dataset, characterizing aspects of gene expression changes at both the spatial and temporal level. Considerable coordinate regulation occurs across the three prominences during this period of facial growth and morphogenesis, with a switch from expression of genes involved in cell proliferation to those associated with differentiation. An accompanying shift in the expression of polycomb and trithorax genes presumably maintains appropriate patterns of gene expression in precursor or differentiated cells, respectively. Superimposed on the many coordinated changes are prominence-specific differences in the expression of genes encoding transcription factors, extracellular matrix components, and signaling molecules. Thus, the elaboration of each prominence will be driven by particular combinations of transcription factors coupled with specific cell:cell and cell:matrix interactions. The dataset also reveals several prominence-specific genes not previously associated with orofacial development, a subset of which we externally validate. Several of these latter genes are components of bidirectional

  4. Tactile Stimulation of the Face and the Production of Facial Expressions Activate Neurons in the Primate Amygdala.

    Science.gov (United States)

    Mosher, Clayton P; Zimmerman, Prisca E; Fuglevand, Andrew J; Gothard, Katalin M

    2016-01-01

    The majority of neurophysiological studies that have explored the role of the primate amygdala in the evaluation of social signals have relied on visual stimuli such as images of facial expressions. Vision, however, is not the only sensory modality that carries social signals. Both humans and nonhuman primates exchange emotionally meaningful social signals through touch. Indeed, social grooming in nonhuman primates and caressing touch in humans is critical for building lasting and reassuring social bonds. To determine the role of the amygdala in processing touch, we recorded the responses of single neurons in the macaque amygdala while we applied tactile stimuli to the face. We found that one-third of the recorded neurons responded to tactile stimulation. Although we recorded exclusively from the right amygdala, the receptive fields of 98% of the neurons were bilateral. A fraction of these tactile neurons were monitored during the production of facial expressions and during facial movements elicited occasionally by touch stimuli. Firing rates arising during the production of facial expressions were similar to those elicited by tactile stimulation. In a subset of cells, combining tactile stimulation with facial movement further augmented the firing rates. This suggests that tactile neurons in the amygdala receive input from skin mechanoreceptors that are activated by touch and by compressions and stretches of the facial skin during the contraction of the underlying muscles. Tactile neurons in the amygdala may play a role in extracting the valence of touch stimuli and/or monitoring the facial expressions of self during social interactions.

  5. Evidence for Anger Saliency during the Recognition of Chimeric Facial Expressions of Emotions in Underage Ebola Survivors

    OpenAIRE

    Ardizzi, Martina; Evangelista, Valentina; Ferroni, Francesca; Maria A. Umiltà; Ravera, Roberto; Gallese, Vittorio

    2017-01-01

    One of the crucial features defining basic emotions and their prototypical facial expressions is their value for survival. Childhood traumatic experiences affect the effective recognition of facial expressions of negative emotions, normally allowing the recruitment of adequate behavioral responses to environmental threats. Specifically, anger becomes an extraordinarily salient stimulus unbalancing victims’ recognition of negative emotions. Despite the plethora of studies on this topic, to dat...

  6. Effects of face feature and contour crowding in facial expression adaptation.

    Science.gov (United States)

    Liu, Pan; Montaser-Kouhsari, Leila; Xu, Hong

    2014-12-01

    Prolonged exposure to a visual stimulus, such as a happy face, biases the perception of a subsequently presented neutral face toward a sad percept, a phenomenon known as face adaptation. Face adaptation is affected by the visibility or awareness of the adapting face. However, whether it is affected by the discriminability of the adapting face is largely unknown. In the current study, we used crowding to manipulate the discriminability of the adapting face and tested its effect on face adaptation. Instead of presenting flanking faces near the target face, we shortened the distance between facial features (internal feature crowding) and reduced the size of the face contour (external contour crowding) to introduce crowding. In our first experiment, we asked whether internal feature crowding or external contour crowding is more effective in inducing a crowding effect. We found that combining internal feature and external contour crowding, but not either of them alone, induced a significant crowding effect. In Experiment 2, we went on to investigate its effect on adaptation. We found that both internal feature crowding and external contour crowding significantly reduced the facial expression aftereffect (FEA). However, we did not find a significant correlation between the discriminability of the adapting face and its FEA. Interestingly, we found a significant correlation between the discriminabilities of the adapting and test faces. Experiment 3 found that the reduced adaptation aftereffect under combined crowding of the external face contour and the internal facial features cannot be decomposed linearly into the effects of the face contour and the facial features, suggesting a nonlinear integration between facial features and face contour in face adaptation.

  7. Exploring combinations of different color and facial expression stimuli for gaze-independent BCIs

    Directory of Open Access Journals (Sweden)

    Long Chen

    2016-01-01

    Full Text Available Background: Some studies have shown that a conventional visual brain-computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual BCI based on covert attention and feature attention, called the gaze-independent BCI, has been proposed. Color and shape differences between stimuli and backgrounds have generally been used in examples of gaze-independent BCIs. Recently, a new paradigm based on facial expression changes was presented and obtained high performance. However, some facial expressions were so similar that users could not tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm. Consequently, the performance of such BCIs is reduced. New Method: In this paper, we combined facial expressions and colors to optimize stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help subjects to locate the target and evoke larger event-related potentials (ERPs). To evaluate the performance of this new paradigm, two other paradigms were presented, called the grey dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key point determining the value of the colored dummy face stimuli in BCI systems was whether they could obtain higher performance than grey face or colored ball stimuli. Ten healthy subjects (7 male, aged 21-26 years, mean 24.5±1.25) participated in our experiment. Online and offline results of four different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern evoked higher P300 and N400 ERP amplitudes compared with the grey dummy face pattern and the colored ball pattern. Online results showed

  8. Facial expression of fear in the context of human ethology: Recognition advantage in the perception of male faces.

    Science.gov (United States)

    Trnka, Radek; Tavel, Peter; Hasto, Jozef

    2015-01-01

    Facial expression is one of the core issues in the ethological approach to the study of human behaviour. This study discusses sex-specific aspects of the recognition of the facial expression of fear using results from our previously published experimental study. We conducted an experiment in which 201 participants judged seven different facial expressions: anger, contempt, disgust, fear, happiness, sadness and surprise (Trnka et al. 2007). Participants were able to recognize the facial expression of fear significantly better on a male face than on a female face. Females also recognized fear generally better than males. The present study provides a new interpretation of this sex difference in the recognition of fear. We interpret these results within the paradigm of human ethology, taking into account the adaptive function of the facial expression of fear. We argue that better detection of fear might have been crucial for females in situations of serious danger in groups of early hominids. The crucial role of females in nurturing and protecting offspring was fundamental for the reproductive potential of the group. A clear decoding of this alarm signal might thus have enabled the timely preparation of females for escape or defence, protecting their health for successful reproduction. Further, it is likely that males played the role of guardians of social groups and that they were responsible for effective warnings of the group in situations of serious danger. This may explain why the facial expression of fear is more recognizable on the male face than on the female face.

  9. A neural network underlying intentional emotional facial expression in neurodegenerative disease.

    Science.gov (United States)

    Gola, Kelly A; Shany-Ur, Tal; Pressman, Peter; Sulman, Isa; Galeana, Eduardo; Paulsen, Hillary; Nguyen, Lauren; Wu, Teresa; Adhimoolam, Babu; Poorzand, Pardis; Miller, Bruce L; Rankin, Katherine P

    2017-01-01

    Intentional facial expression of emotion is critical to healthy social interactions. Patients with neurodegenerative disease, particularly those with right temporal or prefrontal atrophy, show dramatic socioemotional impairment. This was an exploratory study examining the neural and behavioral correlates of intentional facial expression of emotion in neurodegenerative disease patients and healthy controls. One hundred and thirty-three participants (45 Alzheimer's disease, 16 behavioral variant frontotemporal dementia, 8 non-fluent primary progressive aphasia, 10 progressive supranuclear palsy, 11 right-temporal frontotemporal dementia, 9 semantic variant primary progressive aphasia patients and 34 healthy controls) were video recorded while imitating static images of emotional faces and producing emotional expressions based on verbal command; the accuracy of their expression was rated by blinded raters. Participants also underwent face-to-face socioemotional testing and informants described participants' typical socioemotional behavior. Patients' performance on emotion expression tasks was correlated with gray matter volume using voxel-based morphometry (VBM) across the entire sample. We found that intentional emotional imitation scores were related to fundamental socioemotional deficits; patients with known socioemotional deficits performed worse than controls on intentional emotion imitation; and intentional emotional expression predicted caregiver ratings of empathy and interpersonal warmth. Whole-brain VBM revealed that a rightward cortical atrophy pattern, homologous to the left-lateralized speech production network, was associated with intentional emotional imitation deficits. Results point to a possible neural mechanism underlying complex socioemotional communication deficits in neurodegenerative disease patients.

  10. Compensatory Expressive Behavior for Facial Paralysis: Adaptation to Congenital or Acquired Disability

    Science.gov (United States)

    Bogart, Kathleen R.; Tickle-Degnen, Linda; Ambady, Nalini

    2015-01-01

    Purpose/Objective Although there has been little research on the adaptive behavior of people with congenital compared to acquired disability, there is reason to predict that people with congenital conditions may be better adapted because they have lived with their conditions for their entire lives (Smart, 2008). We examined whether people with congenital facial paralysis (FP), compared to people with acquired FP, compensate more for impoverished facial expression by using alternative channels of expression (i.e. voice and body). Research Method/Design Participants with congenital (n = 13) and acquired (n = 14) FP were videotaped while recalling emotional events. Main Outcome Measures Expressive verbal behavior was measured using the Linguistic Inquiry Word Count (Pennebaker, Booth & Francis, 2007). Nonverbal behavior and FP severity were rated by trained coders. Results People with congenital FP, compared to acquired FP, used more compensatory expressive verbal and nonverbal behavior in their language, voices, and bodies. The extent of FP severity had little effect on compensatory expressivity. Conclusions/Implications This study provides the first behavioral evidence that people with congenital FP use more adaptations to express themselves than people with acquired FP. These behaviors could inform social functioning interventions for people with FP. PMID:22369116

  11. A neural network underlying intentional emotional facial expression in neurodegenerative disease

    Directory of Open Access Journals (Sweden)

    Kelly A. Gola

    2017-01-01

    Full Text Available Intentional facial expression of emotion is critical to healthy social interactions. Patients with neurodegenerative disease, particularly those with right temporal or prefrontal atrophy, show dramatic socioemotional impairment. This was an exploratory study examining the neural and behavioral correlates of intentional facial expression of emotion in neurodegenerative disease patients and healthy controls. One hundred and thirty-three participants (45 Alzheimer's disease, 16 behavioral variant frontotemporal dementia, 8 non-fluent primary progressive aphasia, 10 progressive supranuclear palsy, 11 right-temporal frontotemporal dementia, 9 semantic variant primary progressive aphasia patients and 34 healthy controls) were video recorded while imitating static images of emotional faces and producing emotional expressions based on verbal command; the accuracy of their expression was rated by blinded raters. Participants also underwent face-to-face socioemotional testing and informants described participants' typical socioemotional behavior. Patients' performance on emotion expression tasks was correlated with gray matter volume using voxel-based morphometry (VBM) across the entire sample. We found that intentional emotional imitation scores were related to fundamental socioemotional deficits; patients with known socioemotional deficits performed worse than controls on intentional emotion imitation; and intentional emotional expression predicted caregiver ratings of empathy and interpersonal warmth. Whole-brain VBM revealed that a rightward cortical atrophy pattern, homologous to the left-lateralized speech production network, was associated with intentional emotional imitation deficits. Results point to a possible neural mechanism underlying complex socioemotional communication deficits in neurodegenerative disease patients.

  12. The Neurospora Transcription Factor ADV-1 Transduces Light Signals and Temporal Information to Control Rhythmic Expression of Genes Involved in Cell Fusion

    Science.gov (United States)

    Dekhang, Rigzin; Wu, Cheng; Smith, Kristina M.; Lamb, Teresa M.; Peterson, Matthew; Bredeweg, Erin L.; Ibarra, Oneida; Emerson, Jillian M.; Karunarathna, Nirmala; Lyubetskaya, Anna; Azizi, Elham; Hurley, Jennifer M.; Dunlap, Jay C.; Galagan, James E.; Freitag, Michael; Sachs, Matthew S.; Bell-Pedersen, Deborah

    2016-01-01

    Light and the circadian clock have a profound effect on the biology of organisms through the regulation of large sets of genes. Toward understanding how light and the circadian clock regulate gene expression, we used genome-wide approaches to identify the direct and indirect targets of the light-responsive and clock-controlled transcription factor ADV-1 in Neurospora crassa. A large proportion of ADV-1 targets were found to be light- and/or clock-controlled, and enriched for genes involved in development, metabolism, cell growth, and cell fusion. We show that ADV-1 is necessary for transducing light and/or temporal information to its immediate downstream targets, including controlling rhythms in genes critical to somatic cell fusion. However, while ADV-1 targets are altered in predictable ways in Δadv-1 cells in response to light, this is not always the case for rhythmic target gene expression. These data suggest that a complex regulatory network downstream of ADV-1 functions to generate distinct temporal dynamics of target gene expression relative to the central clock mechanism. PMID:27856696

  13. The Neurospora Transcription Factor ADV-1 Transduces Light Signals and Temporal Information to Control Rhythmic Expression of Genes Involved in Cell Fusion

    Directory of Open Access Journals (Sweden)

    Rigzin Dekhang

    2017-01-01

    Full Text Available Light and the circadian clock have a profound effect on the biology of organisms through the regulation of large sets of genes. Toward understanding how light and the circadian clock regulate gene expression, we used genome-wide approaches to identify the direct and indirect targets of the light-responsive and clock-controlled transcription factor ADV-1 in Neurospora crassa. A large proportion of ADV-1 targets were found to be light- and/or clock-controlled, and enriched for genes involved in development, metabolism, cell growth, and cell fusion. We show that ADV-1 is necessary for transducing light and/or temporal information to its immediate downstream targets, including controlling rhythms in genes critical to somatic cell fusion. However, while ADV-1 targets are altered in predictable ways in Δadv-1 cells in response to light, this is not always the case for rhythmic target gene expression. These data suggest that a complex regulatory network downstream of ADV-1 functions to generate distinct temporal dynamics of target gene expression relative to the central clock mechanism.

  14. Artificial Neural Networks and Gene Expression Programing based age estimation using facial features

    Directory of Open Access Journals (Sweden)

    Baddrud Z. Laskar

    2015-10-01

    Full Text Available This work concerns estimating human age automatically through the analysis of facial images, a problem with many real-world applications. Due to rapid advances in machine vision, facial image processing, and computer graphics, automatic age estimation from faces has become a prominent research topic, with applications in biometrics, security, surveillance, control, forensic art, entertainment, online customer management and support, and cosmetology. As it is difficult to estimate exact age, the system estimates a range of ages, using four sets of classifications to assign a person's data to one of the age groups. The distinctive aspect of this study is the use of two technologies, Artificial Neural Networks (ANN) and Gene Expression Programming (GEP), to estimate age and then compare the results; GEP in particular is a new methodology explored here, with significant results. The dataset was carefully preprocessed to provide more reliable results. The proposed approach has been developed, trained, and tested using both methods, and the quality of the proposed system for age estimation from facial features is shown by broad experiments on the public FG-NET database.
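
    A hedged sketch of the ANN half of such a system (the GEP half would need a dedicated library and is omitted); the feature files, layer sizes, and split are illustrative assumptions:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier

      # X: (n_samples, n_features) facial-feature vectors; y: age-group labels.
      X = np.load("fgnet_features.npy")      # hypothetical precomputed features
      y = np.load("fgnet_age_groups.npy")    # hypothetical group labels

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
      ann = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
      ann.fit(X_tr, y_tr)
      print("age-group accuracy:", ann.score(X_te, y_te))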

  15. The Infant Facial Expressions of Emotions from Looking at Pictures: Peruvian version

    Directory of Open Access Journals (Sweden)

    Pierina Traverso

    2012-12-01

    Full Text Available The Peruvian version of the Infant Facial Expressions of Emotions from Looking at Pictures (IFEEL), an instrument that assesses the interpretation of emotions from pictures of children's faces, is presented. The original version by Emde, Osofsky & Butterfield (1993) was developed in the United States and involves 30 stimuli. The Peruvian version involves 25 pictures of children with facial features prototypical of the majority of the Peruvian population. A sample of 363 men and women of middle and low socio-economic status, between 19 and 45 years old, was recruited to develop the Peruvian version. From the results, a lexicon was created with the words that participants used to designate the 14 groups of emotion that were obtained. The majority of these groups had adequate temporal stability (test-retest reliability). Finally, socio-economic status (SES) was found to generate significant differences in the way people interpret the emotions; referential values of interpretation differentiated by SES were therefore created.

  16. Interpretation of facial expressions of affect in children with learning disabilities with verbal or nonverbal deficits.

    Science.gov (United States)

    Dimitrovsky, L; Spector, H; Levy-Shiff, R; Vakil, E

    1998-01-01

    The ability to identify facial expressions of happiness, sadness, anger, surprise,fear, and disgust was studied in 48 nondisabled children and 76 children with learning disabilities aged 9 through 12. On the basis of their performance on the Rey Auditory-Verbal Learning Test and the Benton Visual Retention Test, the LD group was divided into three subgroups: those with verbal deficits (VD), nonverbal deficits (NVD), and both verbal and nonverbal (BD) deficits. The measure of ability to interpret facial expressions of affect was a shortened version of Ekman and Friesen's Pictures of Facial Affect. Overall, the nondisabled group had better interpretive ability than the three learning disabled groups and the VD group had better ability than the NVD and BD groups. Although the identification level of the nondisabled group differed from that of the VD group only for surprise, it was superior to that of the NVD and BD groups for four of the six emotions. Happiness was the easiest to identify, and the remaining emotions in ascending order of difficulty were anger, surprise, sadness, fear, and disgust. Older subjects did better than younger ones only for fear and disgust, and boys and girls did not differ in interpretive ability. These findings are discussed in terms of the need to take note of the heterogeneity of the learning disabled population and the particular vulnerability to social imperception of children with nonverbal deficits.

  17. Response inhibition is modulated by functional cerebral asymmetries for facial expression perception

    Directory of Open Access Journals (Sweden)

    Sebastian Ocklenburg

    2013-11-01

    Full Text Available The efficacy of executive functions is critically modulated by information processing in earlier cognitive stages. For example, initial processing of verbal stimuli in the language-dominant left hemisphere leads to more efficient response inhibition than initial processing of verbal stimuli in the non-dominant right hemisphere. However, it is unclear whether this organizational principle is specific to the language system, or a general principle that also applies to other types of lateralized cognition. To answer this question, we investigated the neurophysiological correlates of early attentional processes, facial expression perception, and response inhibition during tachistoscopic presentation of facial 'Go' and 'Nogo' stimuli in the left and the right visual field. Participants committed fewer false alarms after Nogo-stimulus presentation in the left compared to the right visual field. This right-hemispheric asymmetry on the behavioral level was also reflected in the neurophysiological correlates of face perception, specifically in a right-sided asymmetry in the N170 amplitude. Moreover, the right-hemispheric dominance for facial expression processing also affected event-related potentials typically related to response inhibition, namely the Nogo-N2 and Nogo-P3. These findings show that an effect of hemispheric asymmetries in early information processing on the efficacy of higher cognitive functions is not limited to left-hemispheric language functions, but can be generalized to predominantly right-hemispheric functions.

  18. Responses in the right posterior superior temporal sulcus show a feature-based response to facial expression.

    Science.gov (United States)

    Flack, Tessa R; Andrews, Timothy J; Hymers, Mark; Al-Mosaiwi, Mohammed; Marsden, Samuel P; Strachan, James W A; Trakulpipat, Chayanit; Wang, Liang; Wu, Tian; Young, Andrew W

    2015-08-01

    The face-selective region of the right posterior superior temporal sulcus (pSTS) plays an important role in analysing facial expressions. However, it is less clear how facial expressions are represented in this region. In this study, we used the face composite effect to explore whether the pSTS contains a holistic or feature-based representation of facial expression. Aligned and misaligned composite images were created from the top and bottom halves of faces posing different expressions. In Experiment 1, participants performed a behavioural matching task in which they judged whether the top half of two images was the same or different. The ability to discriminate the top half of the face was affected by changes in the bottom half of the face when the images were aligned, but not when they were misaligned. This shows a holistic behavioural response to expression. In Experiment 2, we used fMR-adaptation to ask whether the pSTS has a corresponding holistic neural representation of expression. Aligned or misaligned images were presented in blocks that involved repeating the same image or in which the top or bottom half of the images changed. Increased neural responses were found in the right pSTS regardless of whether the change occurred in the top or bottom of the image, showing that changes in expression were detected across all parts of the face. However, in contrast to the behavioural data, the pattern did not differ between aligned and misaligned stimuli. This suggests that the pSTS does not encode facial expressions holistically. In contrast to the pSTS, a holistic pattern of response to facial expression was found in the right inferior frontal gyrus (IFG). Together, these results suggest that pSTS reflects an early stage in the processing of facial expression in which facial features are represented independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Facial expression and vocal pitch height: Evidence of an intermodal association

    DEFF Research Database (Denmark)

    Huron, David; Dahl, Sofia; Johnson, Randolph

    2009-01-01

    Forty-four participants were asked to sing moderate, high, and low pitches while their faces were photographed. In a two-alternative forced choice task, independent judges selected the high-pitch faces as more friendly than the low-pitch faces. When photographs were cropped to show only the eye region, judges still rated the high-pitch faces friendlier than the low-pitch faces. These results are consistent with prior research showing that vocal pitch height is used to signal aggression (low pitch) or appeasement (high pitch). An analysis of the facial features shows a strong correlation between eyebrow position and sung pitch, consistent with the role of eyebrows in signaling aggression and appeasement. Overall, the results are consistent with an intermodal linkage between vocal and facial expressions.

  20. Facial Position and Expression-Based Human-Computer Interface for Persons With Tetraplegia.

    Science.gov (United States)

    Bian, Zhen-Peng; Hou, Junhui; Chau, Lap-Pui; Magnenat-Thalmann, Nadia

    2016-05-01

    A human-computer interface (namely the Facial position and expression Mouse system, FM) for persons with tetraplegia, based on a monocular infrared depth camera, is presented in this paper. The nose position, along with the mouth status (closed/open), is detected by the proposed algorithm to control and navigate the cursor as computer user input. The algorithm is based on an improved Randomized Decision Tree, which is capable of detecting the facial information efficiently and accurately. A more comfortable user experience is achieved by mapping the nose motion to the cursor motion via a nonlinear function. The infrared depth camera enables the system to be independent of illumination and color changes both from the background and on the human face, which is a critical advantage over RGB camera-based options. Extensive experimental results show that the proposed system outperforms existing assistive technologies in terms of quantitative and qualitative assessments.
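
    The abstract does not give the nonlinear nose-to-cursor function; one plausible shape, sketched below, combines a dead zone (to ignore tremor and sensor noise) with a power-law gain so that small head motions stay precise while larger ones accelerate the cursor. All parameter values are illustrative, not those of the FM system:

      import math

      def cursor_velocity(nose_dx, dead_zone=0.005, gain=1200.0, exponent=1.6):
          """Map a normalised nose displacement to a cursor velocity (px/s).

          Displacements inside the dead zone are ignored; beyond it, the
          power-law curve keeps small motions fine-grained and large ones fast.
          """
          magnitude = abs(nose_dx)
          if magnitude < dead_zone:
              return 0.0
          return math.copysign(gain * (magnitude - dead_zone) ** exponent, nose_dx)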

  1. Effects of facial expression and gaze direction on approach-avoidance behaviour.

    Science.gov (United States)

    Ozono, Hiroki; Watabe, Motoki; Yoshikawa, Sakiko

    2012-01-01

    Humans must coordinate approach-avoidance behaviours with the social cues that elicit them, such as facial expressions and gaze direction. We hypothesised that when someone is observed looking in a particular direction with a happy expression, the observer would tend to approach that direction, but that when someone is observed looking in a particular direction with a fearful expression, the observer would tend to avoid that direction. Twenty-eight participants viewed stimulus faces with averted gazes and happy or fearful expressions on a computer screen. Participants were asked to grasp (approach) or withdraw from (avoid) a left- or right-hand button depending on the stimulus face's expression. The results were consistent with our hypotheses about avoidance responses, but not with respect to approach responses. Links between social cues and adaptive behaviour are discussed.

  2. Scoliosis in rhythmic gymnasts.

    Science.gov (United States)

    Tanchev, P I; Dzherov, A D; Parushev, A D; Dikov, D M; Todorov, M B

    2000-06-01

    An anamnestic, clinical, radiographic study of 100 girls actively engaged in rhythmic gymnastics was performed in an attempt to explain the higher incidence and the specific features of scoliosis in rhythmic gymnastic trainees. To analyze the anthropometry, the regimen of motion and dieting, the specificity of training in rhythmic gymnastics, and the growth and maturing of the trainees, and to outline the characteristics of the scoliotic curves observed. An etiologic hypothesis for this specific subgroup of scoliosis is proposed. The etiology of scoliosis remains unknown in most cases despite extensive research. In the current classifications, no separate type of sports-associated scoliosis is suggested. The examinations included anamnesis, weight and height measurements, growth and maturing data, eating regimen, general and back status, duration, intensity, and specific elements of rhythmic gymnastic training. Radiographs were taken in all the patients with suspected scoliosis. The results obtained were compared with the parameters of normal girls not involved in sports. A 10-fold higher incidence of scoliosis was found in rhythmic gymnastic trainees (12%) than in their normal coevals (1.1%). Delay in menarche and generalized joint laxity are common in rhythmic gymnastic trainees. The authors observed a significant physical loading with the persistently repeated asymmetric stress on the growing spine associated with the nature of rhythmic gymnastics. Some specific features of scoliosis related to rhythmic gymnastics were found also. This study identified a separate scoliotic entity associated with rhythmic gymnastics. The results strongly suggest the important etiologic role of a "dangerous triad": generalized joint laxity, delayed maturity, and asymmetric spinal loading.

  3. Association between facial expression and PTSD symptoms among young children exposed to the Great East Japan Earthquake: A pilot study

    Directory of Open Access Journals (Sweden)

    Takeo Fujiwara

    2015-10-01

    Full Text Available Emotional numbing is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip, and to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent's Report of the Child's Reaction to Stress scale. Children were filmed while watching a 2-minute video compilation of natural scenes ('baseline video') followed by a 2-minute video clip from a television comedy ('comedy video'). Children's facial expressions were processed using Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age and baseline facial expression (p < .05). This pilot study suggests that facial emotion reactivity could provide an index against which emotional numbing could be measured in young children, using facial expression recognition software. This pilot study adds to the emerging literature on using experimental psychopathology methods to characterize children's reactions to disasters.
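
    A minimal sketch of the reported regression, assuming a per-child table of FaceReader outputs (the file and column names here are hypothetical):

      import pandas as pd
      import statsmodels.formula.api as smf

      # neutral_prop: proportion of comedy-clip frames coded "neutral";
      # ptsd: modified Parent's Report symptom score; covariates as reported.
      df = pd.read_csv("facereader_summary.csv")

      model = smf.ols(
          "neutral_prop ~ ptsd + C(sex) + age_months + baseline_neutral",
          data=df,
      ).fit()
      print(model.summary())  # the ptsd coefficient tests the association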

  4. Diminished facial emotion expression and associated clinical characteristics in Anorexia Nervosa.

    Science.gov (United States)

    Lang, Katie; Larsson, Emma E C; Mavromara, Liza; Simic, Mima; Treasure, Janet; Tchanturia, Kate

    2016-02-28

    This study aimed to investigate emotion expression in a large group of children, adolescents and adults with Anorexia Nervosa (AN), and to investigate the associated clinical correlates. One hundred and forty-one participants (AN = 66, HC = 75) were recruited, and positive and negative film clips were used to elicit emotion expressions. The Facial Activation Coding system (FACES) was used to code emotion expression, and subjective ratings of emotion were collected. Individuals with AN displayed fewer positive emotions during the positive film clip compared to healthy controls (HC). There was no significant difference between the groups on the Positive and Negative Affect Scale (PANAS). The AN group displayed emotional incongruence (reporting a different emotion from what would be expected given the stimuli, with limited facial affect to signal the emotion experienced), whereby they reported feeling significantly higher rates of negative emotion during the positive clip. There were no differences in emotion expression between the groups during the negative film clip. Despite this, individuals with AN reported feeling significantly higher levels of negative emotions during the negative clip. Diminished positive emotion expression was associated with more severe clinical symptoms, which could suggest that these individuals represent a group with serious social difficulties that may require specific attention in treatment. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  5. Association of the circadian rhythmic expression of GmCRY1a with a latitudinal cline in photoperiodic flowering of soybean.

    Science.gov (United States)

    Zhang, Qingzhu; Li, Hongyu; Li, Rui; Hu, Ruibo; Fan, Chengming; Chen, Fulu; Wang, Zonghua; Liu, Xu; Fu, Yongfu; Lin, Chentao

    2008-12-30

    Photoperiodic control of flowering time is believed to affect latitudinal distribution of plants. The blue light receptor CRY2 regulates photoperiodic flowering in the experimental model plant Arabidopsis thaliana. However, it is unclear whether genetic variations affecting cryptochrome activity or expression is broadly associated with latitudinal distribution of plants. We report here an investigation of the function and expression of two cryptochromes in soybean, GmCRY1a and GmCRY2a. Soybean is a short-day (SD) crop commonly cultivated according to the photoperiodic sensitivity of cultivars. Both cultivated soybean (Glycine max) and its wild relative (G. soja) exhibit a strong latitudinal cline in photoperiodic flowering. Similar to their Arabidopsis counterparts, both GmCRY1a and GmCRY2a affected blue light inhibition of cell elongation, but only GmCRY2a underwent blue light- and 26S proteasome-dependent degradation. However, in contrast to Arabidopsis cryptochromes, soybean GmCRY1a, but not GmCRY2a, exhibited a strong activity promoting floral initiation, and the level of protein expression of GmCRY1a, but not GmCRY2a, oscillated with a circadian rhythm that has different phase characteristics in different photoperiods. Consistent with the hypothesis that GmCRY1a is a major regulator of photoperiodic flowering in soybean, the photoperiod-dependent circadian rhythmic expression of the GmCRY1a protein correlates with photoperiodic flowering and latitudinal distribution of soybean cultivars. We propose that genes affecting protein expression of the GmCRY1a protein play an important role in determining latitudinal distribution of soybeans.

  6. Human Amygdala Tracks a Feature-Based Valence Signal Embedded within the Facial Expression of Surprise.

    Science.gov (United States)

    Kim, M Justin; Mattek, Alison M; Bennett, Randi H; Solomon, Kimberly M; Shin, Jin; Whalen, Paul J

    2017-09-27

    Human amygdala function has been traditionally associated with processing the affective valence (negative vs positive) of an emotionally charged event, especially those that signal fear or threat. However, this account of human amygdala function can be explained by alternative views, which posit that the amygdala might be tuned to either (1) general emotional arousal (activation vs deactivation) or (2) specific emotion categories (fear vs happy). Delineating the pure effects of valence independent of arousal or emotion category is a challenging task, given that these variables naturally covary under many circumstances. To circumvent this issue and test the sensitivity of the human amygdala to valence values specifically, we measured the dimension of valence within the single facial expression category of surprise. Given the inherent valence ambiguity of this category, we show that surprised expression exemplars are attributed valence and arousal values that are uniquely and naturally uncorrelated. We then present fMRI data from both sexes, showing that the amygdala tracks these consensus valence values. Finally, we provide evidence that these valence values are linked to specific visual features of the mouth region, isolating the signal by which the amygdala detects this valence information. SIGNIFICANCE STATEMENT There is an open question as to whether human amygdala function tracks the valence value of cues in the environment, as opposed to either a more general emotional arousal value or a more specific emotion category distinction. Here, we demonstrate the utility of surprised facial expressions because exemplars within this emotion category take on valence values spanning the dimension of bipolar valence (positive to negative) at a consistent level of emotional arousal. Functional neuroimaging data showed that amygdala responses tracked the valence of surprised facial expressions, unconfounded by arousal. Furthermore, a machine learning classifier identified

  7. Biased processing of neutral facial expressions is associated with depressive symptoms and suicide ideation in individuals at risk for major depression due to affective temperaments.

    Science.gov (United States)

    Maniglio, Roberto; Gusciglio, Francesca; Lofrese, Valentina; Belvederi Murri, Martino; Tamburello, Antonino; Innamorati, Marco

    2014-04-01

    To elucidate whether abnormal facial emotion processing represents a vulnerability factor for major depression, some studies have explored deficits in emotion processing in individuals at familial risk for depression, but these studies have yielded mixed results. Moreover, no studies of facial emotion processing have been conducted in at-risk samples with early or attenuated signs of depression, such as individuals with affective temperaments, who are characterized by subclinical depressive moods, cognitions, and behaviors resembling those of patients with major depression. The presence and severity of depressive symptoms, affective temperaments, death wishes, suicide ideation, and suicide planning were assessed in 231 participants with a mean age of 39.9 years (SD = 14.57). Participants also completed an emotion recognition task with 80 face stimuli showing fearful, angry, sad, happy, and neutral expressions. Participants with higher scores on affective temperamental dimensions containing a depressive component, compared to those with lower scores, reported more depressive symptoms, death wishes, and suicide ideation and planning, and showed an increased tendency to interpret neutral facial expressions as emotional ones; in particular, neutral expressions were interpreted more negatively, mostly as sad. However, there were no group differences in the identification and discrimination of facial expressions of happiness, sadness, fear, and anger. A negative bias in the interpretation of neutral facial expressions, rather than accuracy deficits in recognizing emotional facial expressions, may therefore represent a vulnerability factor for major depression, although further research is needed.
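
    A bias of this kind can be expressed as the distribution of emotion labels that participants assign to neutral-face trials. The sketch below assumes a simple coded-response layout; all names and numbers are illustrative.

        import numpy as np

        labels = ["fear", "anger", "sadness", "happiness", "neutral"]

        def label_distribution(responses):
            """responses: array of label indices given to neutral-face trials."""
            counts = np.bincount(responses, minlength=len(labels))
            return dict(zip(labels, np.round(counts / counts.sum(), 2)))

        # Toy responses of one participant to 80 neutral-face trials;
        # an inflated "sadness" share marks the negative interpretation bias.
        rng = np.random.default_rng(0)
        responses = rng.choice(len(labels), size=80,
                               p=[0.05, 0.05, 0.25, 0.05, 0.60])
        print(label_distribution(responses))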

  8. Vital signs and facial expression of patients in coma

    Directory of Open Access Journals (Sweden)

    Ana Cláudia Giesbrecht Puggina

    2009-06-01

    Full Text Available The objective was to verify the influence of music and oral messages on the vital signs and facial expression of patients in physiological or induced coma. A randomized controlled clinical trial was conducted. The sample consisted of 30 intensive care unit patients, divided into two groups: a control group (without auditory stimuli) and an experimental group (with auditory stimuli). Patients underwent three sessions on consecutive days. Statistically significant changes were found in vital signs during the message (O2 saturation in sessions 1 and 3; respiratory rate in session 3) and in facial expression in session 1, during both the music and the message. The message apparently was a stronger stimulus than the music in its capacity to produce physiological responses suggestive of hearing.

  9. Children with mixed language disorder do not discriminate accurately facial identity when expressions change.

    Science.gov (United States)

    Robel, Laurence; Vaivre-Douret, Laurence; Neveu, Xavier; Piana, Hélène; Perier, Antoine; Falissard, Bruno; Golse, Bernard

    2008-12-01

    We investigated the recognition of pairs of faces (same or different facial identities and expressions) in two groups of 14 children aged 6-10 years, one with an expressive language disorder (ELD) and one with a mixed language disorder (MLD), and in two groups of 14 matched healthy controls. In terms of global performance, children with either ELD or MLD differed little from controls in either face or emotion recognition. In contrast, we found that children with MLD, but not those with ELD, took identical faces to be different when their expressions changed. Since children with MLD are socially more impaired than children with ELD, this deficit may partly underpin their social difficulties.

  10. Estrogen receptor expression in melasma: results from facial skin of affected patients.

    Science.gov (United States)

    Lieberman, Robert; Moy, Lawrence

    2008-05-01

    Melasma is a common acquired hypermelanosis of the skin with various etiological factors, including pregnancy and oral contraceptives. Estrogen receptor expression in affected skin had not previously been investigated. The purpose of this study was to compare estrogen receptor expression in hyperpigmented and normal facial skin of patients with melasma. Biopsies of 3 mm were taken from affected and unaffected forehead skin of 2 female patients with melasma. Frozen sections of the tissues were prepared, and a mouse monoclonal antibody against human estrogen receptors was tested at various dilutions to determine the optimal concentration for reproducible immunostaining with minimal background staining. Fluorescence was evaluated and compared qualitatively. The immunohistochemical staining of tissue from both patients showed a qualitative increase in estrogen receptor expression in melasma-affected skin compared to unaffected skin. This study demonstrates increased expression of estrogen receptors in melasma-affected skin and may establish a basis for exploring topical anti-estrogen therapies for melasma.

  11. Effects of Facial Expression and Language on Trustworthiness and Brain Activities

    Directory of Open Access Journals (Sweden)

    Shu Morioka

    2015-01-01

    Full Text Available Social communication uses verbal and nonverbal language. We examined the degree of trust and brain activity when verbal and facial expressions are incongruent. Fourteen healthy volunteers viewed photographs of 8 people with pleasant (smile) or unpleasant (disgust) facial expressions, presented alone or combined with a positive or negative verbal expression. As an index of degree of trust, subjects were asked to offer a donation when told that the person in the photograph was in financial trouble. Positive emotion and degree of trust were evaluated using a Visual Analogue Scale (VAS). Event-related potentials (ERPs) were obtained 170-240 ms after viewing the photographs. Brain activity under incongruent conditions was localized using standardized Low Resolution Brain Electromagnetic Tomography (sLORETA). VAS scores for the positive × smile condition were significantly higher than those for the other conditions (p < 0.05). Donation offers were significantly lower when verbal and facial expressions were incongruent, particularly for the negative × smile condition. EEG showed more activity in the parietal lobe under incongruent than under congruent conditions. The incongruent negative × smile condition elicited the lowest positive emotion, degree of trust, and donation offers. Our results indicate that incongruent sensory information increases activity in the parietal lobe, which may be a basis of mentalizing.
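
    The 170-240 ms window reported here is the sort of interval over which a mean ERP amplitude is typically computed. A toy sketch under assumed epoch parameters (not the study's recording settings):

        import numpy as np

        # Epoch from -100 to +499 ms at 1 kHz (assumed), one toy ERP channel.
        t = np.arange(-100, 500) / 1000.0
        erp = np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))  # component near 200 ms

        window = (t >= 0.170) & (t <= 0.240)
        print(f"mean amplitude, 170-240 ms: {erp[window].mean():.3f}")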

  12. Brain responses to facial expressions of pain: emotional or motor mirroring?

    Science.gov (United States)

    Budell, Lesley; Jackson, Phillip; Rainville, Pierre

    2010-10-15

    The communication of pain requires the perception of pain-related signals and the extraction of their meaning and magnitude to infer the state of the expresser. Here, BOLD responses were measured in healthy volunteers while they evaluated the amount of pain expressed (pain task) or discriminated movements (movement task) in one-second video clips displaying facial expressions of various levels of pain. Regression analysis using subjects' ratings of pain confirmed the parametric response of several regions previously implicated in the coding of self-pain, including the anterior cingulate cortex (ACC) and anterior insula (aINS), as well as areas implicated in action observation and motor mirroring, such as the inferior frontal gyrus (IFG) and inferior parietal lobule (IPL). Furthermore, the pain task produced stronger activation in the ventral IFG, as well as in areas of the medial prefrontal cortex (mPFC) associated with social cognition and emotional mirroring, whereas stronger activation during the movement task predominated in the IPL. These results suggest that perceiving the pain of another via facial expression recruits limbic regions involved in the coding of self-pain, prefrontal areas underlying social and emotional cognition (i.e., 'mentalizing'), and premotor and parietal areas involved in motor mirroring.
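
    The parametric analysis amounts to regressing a region's response across clips onto the pain ratings; a reliably nonzero slope is what "tracking" means here. A minimal sketch with simulated values (all numbers are placeholders):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        ratings = rng.uniform(0, 100, 60)                 # rated pain per clip
        bold = 0.02 * ratings + rng.normal(0, 0.4, 60)    # toy regional response

        fit = stats.linregress(ratings, bold)
        print(f"slope = {fit.slope:.3f}, r = {fit.rvalue:.2f}, "
              f"p = {fit.pvalue:.4f}")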

  13. The telltale face: possible mechanisms behind defector and cooperator recognition revealed by emotional facial expression metrics.

    Science.gov (United States)

    Kovács-Bálint, Zsófia; Bereczkei, Tamás; Hernádi, István

    2013-11-01

    In this study, we investigated the role of facial cues in cooperator and defector recognition. First, a face image database was constructed from pairs of full-face portraits of target subjects taken at the moment of decision-making in a prisoner's dilemma game (PDG) and in a preceding neutral task. Image pairs with no deficiencies (n = 67) were standardized for orientation and luminance. Confidence in defector and cooperator recognition was then tested by having a separate group of lay judges (n = 62) rate the images. Results indicate that (1) defectors were better recognized (58% vs. 47%) and (2) defectors looked different from cooperators. Facial microexpression analysis linked defection with depressed lower lips and less widely opened eyes, and the intensity of these micromimics correlated significantly with image ratings along the cooperator-defector dimension. In summary, facial expressions can be considered reliable indicators of momentary social dispositions in the PDG. Females may exhibit an evolutionarily based overestimation bias in detecting social visual cues of the defector face.
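
    Standardizing images for luminance can be done in several ways; one plausible version (the paper does not spell out its procedure) rescales each grayscale image to a common global mean and contrast, as sketched below on a random placeholder image.

        import numpy as np

        def match_luminance(img, target_mean=128.0, target_std=40.0):
            """Rescale a 2-D grayscale array to a fixed mean and contrast."""
            img = img.astype(float)
            z = (img - img.mean()) / (img.std() + 1e-9)
            return np.clip(z * target_std + target_mean, 0.0, 255.0)

        face = np.random.default_rng(0).integers(0, 256, (256, 256))
        out = match_luminance(face)
        print(f"mean = {out.mean():.1f}, std = {out.std():.1f}")  # ~128, ~40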

  14. Deficits in the mimicry of facial expressions in Parkinson’s disease

    Directory of Open Access Journals (Sweden)

    Steven R. Livingstone

    2016-06-01

    Full Text Available Background: Humans spontaneously mimic the facial expressions of others, facilitating social interaction. This mimicking behaviour may be impaired in individuals with Parkinson's disease, for whom the loss of facial movements is a clinical feature. Objective: To assess the presence of facial mimicry in patients with Parkinson's disease. Method: Twenty-seven non-depressed patients with idiopathic Parkinson's disease and twenty-eight age-matched controls had their facial muscles recorded with electromyography while they observed presentations of calm, happy, sad, angry, and fearful emotions. Results: Patients exhibited reduced amplitude and delayed onset in the zygomaticus major muscle region (smiling response) following happy presentations (p < 0.001, ANOVA; patients M = 0.02, 95% confidence interval [-0.15, 0.18]; controls M = 0.26, [0.14, 0.37]). Although patients exhibited activation of the corrugator supercilii and medial frontalis (frowning response) following sad and fearful presentations, the frontalis response to sad presentations was attenuated relative to controls (p = 0.017, ANOVA; patients M = 0.05, [-0.08, 0.18]; controls M = 0.21, [0.09, 0.34]). The amplitude of patients' zygomaticus activity in response to positive emotions was negatively correlated with response times for emotion-identification ratings, suggesting a motor-behavioural link (r = -0.45, p = 0.02, two-tailed). Conclusions: Patients showed decreased mimicry overall, mimicking other people's frowns to some extent but presenting with profoundly weakened smiles. These findings open a new avenue of inquiry into the masked face syndrome of PD.
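
    The reported motor-behavioural link is an ordinary Pearson correlation between per-patient EMG amplitude and emotion-identification response time. A toy sketch with simulated values (the study reported r = -0.45, p = 0.02):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        emg = rng.normal(0.10, 0.05, 27)                  # 27 patients
        rt = 2.0 - 5.0 * emg + rng.normal(0, 0.2, 27)     # toy response times

        r, p = stats.pearsonr(emg, rt)
        print(f"r = {r:.2f}, p = {p:.3f}")  # negative r: weaker smiles, slower IDs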

  15. Facial Expressiveness in Infants With and Without Craniofacial Microsomia: Preliminary Findings.

    Science.gov (United States)

    Hammal, Zakia; Cohn, Jeffrey F; Wallace, Erin R; Heike, Carrie L; Birgfeld, Craig B; Oster, Harriet; Speltz, Matthew L

    2018-01-01

    To compare facial expressiveness (FE) of infants with and without craniofacial microsomia (cases and controls, respectively).