WorldWideScience

Sample records for facial movement detection

  1. Towards emotion detection in educational scenarios from facial expressions and body movements through multimodal approaches.

    Science.gov (United States)

    Saneiro, Mar; Santos, Olga C; Salmeron-Majadas, Sergio; Boticario, Jesus G

    2014-01-01

    We report current findings when considering video recordings of facial expressions and body movements to provide affective personalized support in an educational context from an enriched multimodal emotion detection approach. In particular, we describe an annotation methodology to tag facial expression and body movements that conform to changes in the affective states of learners while dealing with cognitive tasks in a learning process. The ultimate goal is to combine these annotations with additional affective information collected during experimental learning sessions from different sources such as qualitative, self-reported, physiological, and behavioral information. These data altogether are to train data mining algorithms that serve to automatically identify changes in the learners' affective states when dealing with cognitive tasks which help to provide emotional personalized support.

  2. Towards Emotion Detection in Educational Scenarios from Facial Expressions and Body Movements through Multimodal Approaches

    Directory of Open Access Journals (Sweden)

    Mar Saneiro

    2014-01-01

    Full Text Available We report current findings when considering video recordings of facial expressions and body movements to provide affective personalized support in an educational context from an enriched multimodal emotion detection approach. In particular, we describe an annotation methodology to tag facial expression and body movements that conform to changes in the affective states of learners while dealing with cognitive tasks in a learning process. The ultimate goal is to combine these annotations with additional affective information collected during experimental learning sessions from different sources such as qualitative, self-reported, physiological, and behavioral information. These data altogether are to train data mining algorithms that serve to automatically identify changes in the learners’ affective states when dealing with cognitive tasks which help to provide emotional personalized support.

  3. Magnetoencephalographic study on facial movements

    Directory of Open Access Journals (Sweden)

    Kensaku eMiki

    2014-07-01

Full Text Available In this review, we introduced our three studies that focused on facial movements. In the first study, we examined the temporal characteristics of neural responses elicited by viewing mouth movements, and assessed differences between the responses to mouth opening and closing movements and an averting eyes condition. Our results showed that the occipitotemporal area, the human MT/V5 homologue, was active in the perception of both mouth and eye motions. Viewing mouth and eye movements did not elicit significantly different activity in the occipitotemporal area, which indicated that perception of the movement of facial parts may be processed in the same manner, and this is different from motion in general. In the second study, we investigated whether early activity in the occipitotemporal region evoked by eye movements was influenced by a face contour and/or features such as the mouth. Our results revealed specific information processing for eye movements in the occipitotemporal region, and this activity was significantly influenced by whether movements appeared with the facial contour and/or features, in other words, whether the eyes moved, even if the movement itself was the same. In the third study, we examined the effects of inverting the facial contour (hair and chin) and features (eyes, nose, and mouth) on processing for static and dynamic face perception. Our results showed the following: (1) In static face perception, activity in the right fusiform area was affected more by the inversion of features while that in the left fusiform area was affected more by a disruption in the spatial relationship between the contour and features, and (2) In dynamic face perception, activity in the right occipitotemporal area was affected by the inversion of the facial contour.

  4. Do facial movements express emotions or communicate motives?

    Science.gov (United States)

    Parkinson, Brian

    2005-01-01

    This article addresses the debate between emotion-expression and motive-communication approaches to facial movements, focusing on Ekman's (1972) and Fridlund's (1994) contrasting models and their historical antecedents. Available evidence suggests that the presence of others either reduces or increases facial responses, depending on the quality and strength of the emotional manipulation and on the nature of the relationship between interactants. Although both display rules and social motives provide viable explanations of audience "inhibition" effects, some audience facilitation effects are less easily accommodated within an emotion-expression perspective. In particular, emotion is not a sufficient condition for a corresponding "expression," even discounting explicit regulation, and, apparently, "spontaneous" facial movements may be facilitated by the presence of others. Further, there is no direct evidence that any particular facial movement provides an unambiguous expression of a specific emotion. However, information communicated by facial movements is not necessarily extrinsic to emotion. Facial movements not only transmit emotion-relevant information but also contribute to ongoing processes of emotional action in accordance with pragmatic theories.

  5. Automatic three-dimensional quantitative analysis for evaluation of facial movement.

    Science.gov (United States)

    Hontanilla, B; Aubá, C

    2008-01-01

The aim of this study is to present a new 3D capture system of facial movements called FACIAL CLIMA. It is an automatic optical motion system that involves placing special reflecting dots on the subject's face and video recording, with three infrared-light cameras, the subject performing several face movements such as smile, mouth puckering, eye closure and forehead elevation. Images from the cameras are automatically processed with a software program that generates customised information such as 3D data on velocities and areas. The study has been performed in 20 healthy volunteers. The accuracy of the measurement process and the intrarater and interrater reliabilities have been evaluated. Comparison of a known distance and angle with those obtained by FACIAL CLIMA shows that this system is accurate to within 0.13 mm and 0.41 degrees. In conclusion, the accuracy of the FACIAL CLIMA system for evaluating facial movements is demonstrated, along with its high intrarater and interrater reliability. It has advantages over other systems developed for evaluating facial movements, such as short calibration time, short measuring time and ease of use, and it provides not only distances but also velocities and areas. The FACIAL CLIMA system can therefore be considered an adequate tool to assess the outcome of facial paralysis reanimation surgery, allowing patients with facial paralysis to be compared between surgical centres so that the effectiveness of facial reanimation operations can be evaluated.

  6. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Yanchao Dong

    2016-07-01

Full Text Available The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation, such as eyelid movement and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable for expression analysis or mental state inference.
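    For orientation, the sketch below illustrates the general idea of fusing a detected 2D landmark with a 3D point estimate through an EKF-style update. It is not the paper's implementation: the pinhole projection model, focal length, noise covariances and static motion model are all illustrative assumptions.

```python
import numpy as np

# Minimal EKF-style update for one 3D facial landmark observed by a
# monocular (pinhole) camera. Focal length and covariances are assumed.
f = 800.0                      # assumed focal length in pixels
R = np.eye(2) * 2.0**2         # measurement noise (pixel^2), assumed
Q = np.eye(3) * 0.01           # process noise on the 3D point, assumed

def project(X):
    """Pinhole projection of a 3D point X = (x, y, z) to pixel coordinates."""
    x, y, z = X
    return np.array([f * x / z, f * y / z])

def projection_jacobian(X):
    """Jacobian of the projection with respect to the 3D point."""
    x, y, z = X
    return np.array([[f / z, 0.0, -f * x / z**2],
                     [0.0, f / z, -f * y / z**2]])

def ekf_update(X, P, z_meas):
    """One predict/update cycle: quasi-static motion model + pixel measurement."""
    P_pred = P + Q                          # predict (landmark assumed quasi-static)
    H = projection_jacobian(X)              # linearize the measurement model
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    X_new = X + K @ (z_meas - project(X))   # correct with the detected 2D landmark
    P_new = (np.eye(3) - K @ H) @ P_pred
    return X_new, P_new

# Toy usage: refine a rough 3D estimate with one detected 2D landmark.
X, P = np.array([0.05, 0.02, 0.60]), np.eye(3) * 0.05
X, P = ekf_update(X, P, z_meas=np.array([70.0, 30.0]))
print(X)
```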

  7. Facial Muscle Coordination in Monkeys During Rhythmic Facial Expressions and Ingestive Movements

    Science.gov (United States)

    Shepherd, Stephen V.; Lanzilotto, Marco; Ghazanfar, Asif A.

    2012-01-01

    Evolutionary hypotheses regarding the origins of communication signals generally, and primate orofacial communication signals in particular, suggest that these signals derive by ritualization of noncommunicative behaviors, notably including ingestive behaviors such as chewing and nursing. These theories are appealing in part because of the prominent periodicities in both types of behavior. Despite their intuitive appeal, however, there are little or no data with which to evaluate these theories because the coordination of muscles innervated by the facial nucleus has not been carefully compared between communicative and ingestive movements. Such data are especially crucial for reconciling neurophysiological assumptions regarding facial motor control in communication and ingestion. We here address this gap by contrasting the coordination of facial muscles during different types of rhythmic orofacial behavior in macaque monkeys, finding that the perioral muscles innervated by the facial nucleus are rhythmically coordinated during lipsmacks and that this coordination appears distinct from that observed during ingestion. PMID:22553017

  8. The relationship between the changes in three-dimensional facial morphology and mandibular movement after orthognathic surgery.

    Science.gov (United States)

    Kim, Dae-Seung; Huh, Kyung-Hoe; Lee, Sam-Sun; Heo, Min-Suk; Choi, Soon-Chul; Hwang, Soon-Jung; Yi, Won-Jin

    2013-10-01

The purpose of this study was to investigate the relationship between changes in three-dimensional (3D) facial morphology and mandibular movement after orthognathic surgery. We hypothesized that facial morphology changes after orthognathic surgery exert effects on 3D mandibular movement. We conducted a prospective follow-up study of patients who had undergone orthognathic surgical procedures. Three-dimensional facial morphological values were measured from facial CT images before and three months after orthognathic surgery. Three-dimensional maximum mandibular opening (MMO) values of four points (bilateral condylions, infradentale, and pogonion) were also measured using a mandibular movement tracking and simulation system. The predictor variables were changes in morphological parameters divided into two groups (deviated side (DS) or contralateral side (CS) groups), and the outcome variables were changes in the MMO at four points. We evaluated 21 subjects who had undergone orthognathic surgical procedures. Alterations in the TFH (total facial height), LFH (lower facial height), CS MBL (mandibular body length), and DS RL (ramus length) were negatively correlated with changes in bilateral condylar movement. The UFH, DS MBL and CS ML (mandibular length) showed correlations with infradentale movement. The CS ML, DS ML, MBL, UFH, and SNB were correlated with pogonion movement. The height of the face is most likely to affect post-operative mandibular movement, and is negatively correlated with movement changes in the condyles, infradentale and pogonion. The changes in CS morphological parameters are more correlated with mandibular movement changes than the DS. The changes in CS MBL and bilateral RL were negatively correlated with condylar movement changes, while the bilateral MBL and CS ML were positively correlated with changes in infradentale and pogonion. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  9. Effects of Facial Expressions on Recognizing Emotions in Dance Movements

    Directory of Open Access Journals (Sweden)

    Nao Shikanai

    2011-10-01

Full Text Available Effects of facial expressions on recognizing emotions expressed in dance movements were investigated. Dancers expressed three emotions: joy, sadness, and anger through dance movements. We used digital video cameras and a 3D motion capturing system to record and capture the movements. We then created full-video displays with an expressive face, full-video displays with an unexpressive face, stick figure displays (no face), or point-light displays (no face) from these data using 3D animation software. To make point-light displays, 13 markers were attached to the body of each dancer. We examined how accurately observers were able to identify the expression that the dancers intended to create through their dance movements. Dance-experienced and inexperienced observers participated in the experiment. They watched the movements and rated the compatibility of each emotion with each movement on a 5-point Likert scale. The results indicated that both experienced and inexperienced observers could identify all the emotions that dancers intended to express. Identification scores for dance movements with an expressive face were higher than for other expressions. This finding indicates that facial expressions affect the identification of emotions in dance movements, whereas only bodily expressions provide sufficient information to recognize emotions.

  10. Rat whisker movement after facial nerve lesion: Evidence for autonomic contraction of skeletal muscle.

    NARCIS (Netherlands)

    Heaton, J.T.; Sheu, S.H.; Hohman, M.H.; Knox, C.J.; Weinberg, J.S.; Kleiss, I.J.; Hadlock, T.A.

    2014-01-01

Vibrissal whisking is often employed to track facial nerve regeneration in rats; however, we have observed similar degrees of whisking recovery after facial nerve transection with or without repair. We hypothesized that the source of non-facial-nerve-mediated whisker movement after chronic denervation was autonomic, cholinergic axons traveling within the infraorbital branch of the trigeminal nerve (ION).

  11. Is empathy necessary to comprehend the emotional faces? The empathic effect on attentional mechanisms (eye movements), cortical correlates (N200 event-related potentials) and facial behaviour (electromyography) in face processing.

    Science.gov (United States)

    Balconi, Michela; Canavesio, Ylenia

    2016-01-01

The present research explored the effect of social empathy on processing emotional facial expressions. Previous evidence suggested a close relationship between emotional empathy and both the ability to detect facial emotions and the attentional mechanisms involved. A multi-measure approach was adopted: we investigated the association between trait empathy (Balanced Emotional Empathy Scale) and individuals' performance (response times; RTs), attentional mechanisms (eye movements; number and duration of fixations), correlates of cortical activation (event-related potential (ERP) N200 component), and facial responsiveness (facial zygomatic and corrugator activity). Trait empathy was found to affect face detection performance (reduced RTs), attentional processes (more scanning eye movements in specific areas of interest), the ERP salience effect (increased N200 amplitude), and electromyographic activity (more facial responses). A second important result was the demonstration of strong, direct correlations among these measures. We suggest that empathy may function as a social facilitator of the processes underlying the detection of facial emotion, and a general "facial response effect" is proposed to explain these results. We assume that empathy influences both cognitive processing and facial responsiveness, such that empathic individuals are more skilful at processing facial emotion.

  12. Measurement of facial movements with Photoshop software during treatment of facial nerve palsy*

    Science.gov (United States)

    Pourmomeny, Abbas Ali; Zadmehr, Hassan; Hossaini, Mohsen

    2011-01-01

BACKGROUND: Evaluating the function of the facial nerve is essential in order to determine the influence of various treatment methods. The aim of this study was to evaluate and assess the agreement of a Photoshop scaling system versus the facial grading system (FGS). METHODS: In this semi-experimental study, thirty subjects with facial nerve paralysis were recruited. The evaluation of all patients before and after the treatment was performed by FGS and Photoshop measurements. RESULTS: The mean values of FGS before and after the treatment were 35 ± 25 and 67 ± 24, respectively. In the Photoshop assessment, the mean changes of facial expression on the impaired side relative to the normal side, in the rest position and in three main movements of the face, were 3.4 ± 0.55 and 4.04 ± 0.49 millimeters before and after the treatment, respectively. CONCLUSION: Assessment with Photoshop was more objective than FGS; therefore, it may be recommended to use this method instead. PMID:22973325

  13. Measurement of facial movements with Photoshop software during treatment of facial nerve palsy.

    Science.gov (United States)

    Pourmomeny, Abbas Ali; Zadmehr, Hassan; Hossaini, Mohsen

    2011-10-01

Evaluating the function of the facial nerve is essential in order to determine the influence of various treatment methods. The aim of this study was to evaluate and assess the agreement of a Photoshop scaling system versus the facial grading system (FGS). In this semi-experimental study, thirty subjects with facial nerve paralysis were recruited. The evaluation of all patients before and after the treatment was performed by FGS and Photoshop measurements. The mean values of FGS before and after the treatment were 35 ± 25 and 67 ± 24, respectively. In the Photoshop assessment, the mean changes of facial expression on the impaired side relative to the normal side, in the rest position and in three main movements of the face, were 3.4 ± 0.55 and 4.04 ± 0.49 millimeters before and after the treatment, respectively. Assessment with Photoshop was more objective than FGS; therefore, it may be recommended to use this method instead.
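    The kind of image-based displacement measurement described here reduces to reading landmark coordinates off rest and movement photographs and converting pixel distances to millimetres with a per-image scale. The sketch below illustrates that calculation only; the coordinates and the 60 mm calibration reference are made-up examples, not values from the study.

```python
import math

def distance(p, q):
    """Euclidean distance between two 2D points (in pixels)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def mm_per_pixel(ref_a, ref_b, ref_mm):
    """Scale factor from a reference feature of known real-world length."""
    return ref_mm / distance(ref_a, ref_b)

# Assumed reference: two landmarks 300 px apart that are 60 mm apart in reality.
scale = mm_per_pixel((100, 200), (400, 200), ref_mm=60.0)

# Mouth-corner landmark at rest vs. during a smile (impaired side), in pixels.
rest, smile = (250, 420), (262, 408)
excursion_mm = distance(rest, smile) * scale
print(f"excursion: {excursion_mm:.1f} mm")
```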

  14. Toward a universal, automated facial measurement tool in facial reanimation.

    Science.gov (United States)

    Hadlock, Tessa A; Urban, Luke S

    2012-01-01

To describe a highly quantitative facial function-measuring tool that yields accurate, objective measures of facial position in significantly less time than existing methods. Facial Assessment by Computer Evaluation (FACE) software was designed for facial analysis. Outputs report the static facial landmark positions and dynamic facial movements relevant in facial reanimation. Fifty individuals underwent facial movement analysis using Photoshop-based measurements and the new software; comparisons of agreement and efficiency were made. Comparisons were made between individuals with normal facial animation and patients with paralysis to gauge sensitivity to abnormal movements. Facial measurements were matched using FACE software and Photoshop-based measures at rest and during expressions. The automated assessments required significantly less time than Photoshop-based assessments. FACE measurements easily revealed differences between individuals with normal facial animation and patients with facial paralysis. FACE software produces accurate measurements of facial landmarks and facial movements and is sensitive to paralysis. Given its efficiency, it serves as a useful tool in the clinical setting for zonal facial movement analysis in comprehensive facial nerve rehabilitation programs.

  15. Advances in face detection and facial image analysis

    CERN Document Server

    Celebi, M; Smolka, Bogdan

    2016-01-01

    This book presents the state-of-the-art in face detection and analysis. It outlines new research directions, including in particular psychology-based facial dynamics recognition, aimed at various applications such as behavior analysis, deception detection, and diagnosis of various psychological disorders. Topics of interest include face and facial landmark detection, face recognition, facial expression and emotion analysis, facial dynamics analysis, face classification, identification, and clustering, and gaze direction and head pose estimation, as well as applications of face analysis.

  16. Facial recognition in education system

    Science.gov (United States)

    Krithika, L. B.; Venkatesh, K.; Rathore, S.; Kumar, M. Harish

    2017-11-01

Human beings exploit emotions extensively to convey messages and to resolve them. Emotion detection and face recognition can provide an interface between individuals and technologies, and face recognition is among the most successful applications of recognition analysis. Many different techniques have been used to recognize facial expressions and to detect emotion under varying poses. In this paper, we propose an efficient method to recognize facial expressions by tracking facial feature points and the distances between them. The method can automatically identify the observed face's movements and expression in an image, capturing different aspects of emotion and facial expressions.

  17. Automated detection of pain from facial expressions: a rule-based approach using AAM

    Science.gov (United States)

    Chen, Zhanli; Ansari, Rashid; Wilkie, Diana J.

    2012-02-01

In this paper, we examine the problem of using video analysis to assess pain, an important problem especially for critically ill, non-communicative patients and people with dementia. We propose and evaluate an automated method to detect the presence of pain manifested in patient videos, using a unique and large collection of cancer patient videos captured in patient homes. The method is based on detecting pain-related facial action units defined in the Facial Action Coding System (FACS), which is widely used for objective assessment in pain analysis. In our research, a person-specific Active Appearance Model (AAM) based on the Project-Out Inverse Compositional Method is trained for each patient individually for modeling purposes. A flexible representation of the shape model is used in a rule-based method that is better suited than the more commonly used classifier-based methods for application to the cancer patient videos, in which pain-related facial actions occur infrequently and more subtly. The rule-based method relies on feature points that provide facial action cues, extracted from the shape vertices of the AAM, which have a natural correspondence to facial muscle movement. In this paper, we investigate the detection of a commonly used set of pain-related action units in both the upper and lower face. Our detection results show good agreement with the results obtained by three trained FACS coders who independently reviewed and scored the action units in the cancer patient videos.
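    To make the rule-based idea concrete, the sketch below flags one pain-related action unit from landmark geometry against a person-specific neutral baseline. The landmark indices, the chosen cue (brow–eyelid gap for AU4) and the threshold are illustrative assumptions, not the rules or mesh used in the paper.

```python
import numpy as np

def brow_eye_gap(shape):
    """Mean vertical gap between inner-brow and upper-eyelid shape vertices.

    `shape` is an (N, 2) array of AAM shape vertices in image coordinates
    (y grows downward). The index choices below are hypothetical.
    """
    inner_brow = shape[[21, 22]]     # hypothetical inner-brow vertices
    upper_lid = shape[[37, 44]]      # hypothetical upper-eyelid vertices
    return float(np.mean(upper_lid[:, 1] - inner_brow[:, 1]))

def detect_au4(shape, neutral_gap, rel_threshold=0.15):
    """Flag AU4 (brow lowerer) when the gap shrinks noticeably versus neutral."""
    gap = brow_eye_gap(shape)
    return (neutral_gap - gap) / neutral_gap > rel_threshold
```

    In practice the neutral baseline would be taken from frames scored as expressionless for that patient, which is what makes such person-specific rules workable when positive examples are too rare to train a classifier.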

  18. Facial Mechanosensory Influence on Forelimb Movement in Newborn Opossums, Monodelphis domestica.

    Directory of Open Access Journals (Sweden)

    Marie-Josée Desmarais

Full Text Available The opossum, Monodelphis domestica, is born very immature but crawls, unaided, with its forelimbs (FL) from the mother's birth canal to a nipple, where it attaches to pursue its development. What sensory cues guide the newborn to the nipple and trigger its attachment to it? Previous experiments showed that low-intensity electrical stimulation of the trigeminal ganglion induces FL movement in in vitro preparations and that trigeminal innervation of the facial skin is well developed in the newborn. The skin does not contain Vater-Pacini or Meissner touch corpuscles at this age, but it contains cells which appear to be Merkel cells (MC). We sought to determine if touch perceived by MC could exert an influence on FL movements. Application of the fluorescent dye AM1-43, which labels sensory cells such as MC, revealed the presence of a large number of labeled cells in the facial epidermis, especially in the snout skin, in newborn opossums. Moreover, calibrated pressure applied to the snout induced bilateral and simultaneous electromyographic responses of the triceps muscle in in vitro preparations of the neuraxis and FL from newborns. These responses increase with stimulation intensity and tend to decrease over time. Removing the facial skin nearly abolished these responses. Metabotropic glutamate 1 receptors being involved in MC neurotransmission, an antagonist of these receptors was applied to the bath, which decreased the EMG responses in a reversible manner. Likewise, bath application of a blocker of purinergic type 2 receptors, through which AM1-43 penetrates sensory cells, also decreased the triceps EMG responses. The combined results support a strong influence of facial mechanosensation on FL movement in newborn opossums, and suggest that this influence could be exerted via MC.

  19. Hypoglossal-Facial Nerve Reconstruction Using a Y-Tube-Conduit Reduces Aberrant Synkinetic Movements of the Orbicularis Oculi and Vibrissal Muscles in Rats

    Directory of Open Access Journals (Sweden)

    Yasemin Kaya

    2014-01-01

Full Text Available The facial nerve is the most frequently damaged nerve in head and neck trauma. Patients undergoing facial nerve reconstruction often complain about disturbing abnormal synkinetic movements of the facial muscles (mass movements, synkinesis), which are thought to result from misguided collateral branching of regenerating motor axons and reinnervation of inappropriate muscles. Here, we examined whether use of an aorta Y-tube conduit during reconstructive surgery after facial nerve injury reduces synkinesis of the orbicularis oculi (blink reflex) and vibrissal (whisking) musculature. The abdominal aorta plus its bifurcation was harvested (N = 12) for Y-tube conduits. Animal groups comprised intact animals (Group 1), those receiving hypoglossal-facial nerve end-to-end coaptation alone (HFA; Group 2), and those receiving hypoglossal-facial nerve reconstruction using a Y-tube (HFA-Y-tube; Group 3). Videotape motion analysis at 4 months showed that the HFA-Y-tube group had reduced synkinesis of eyelid and whisker movements compared to HFA alone.

  20. Hypoglossal-facial nerve reconstruction using a Y-tube-conduit reduces aberrant synkinetic movements of the orbicularis oculi and vibrissal muscles in rats.

    Science.gov (United States)

    Kaya, Yasemin; Ozsoy, Umut; Turhan, Murat; Angelov, Doychin N; Sarikcioglu, Levent

    2014-01-01

The facial nerve is the most frequently damaged nerve in head and neck trauma. Patients undergoing facial nerve reconstruction often complain about disturbing abnormal synkinetic movements of the facial muscles (mass movements, synkinesis), which are thought to result from misguided collateral branching of regenerating motor axons and reinnervation of inappropriate muscles. Here, we examined whether use of an aorta Y-tube conduit during reconstructive surgery after facial nerve injury reduces synkinesis of the orbicularis oculi (blink reflex) and vibrissal (whisking) musculature. The abdominal aorta plus its bifurcation was harvested (N = 12) for Y-tube conduits. Animal groups comprised intact animals (Group 1), those receiving hypoglossal-facial nerve end-to-end coaptation alone (HFA; Group 2), and those receiving hypoglossal-facial nerve reconstruction using a Y-tube (HFA-Y-tube; Group 3). Videotape motion analysis at 4 months showed that the HFA-Y-tube group had reduced synkinesis of eyelid and whisker movements compared to HFA alone.

  1. A Virtual Environment to Improve the Detection of Oral-Facial Malfunction in Children with Cerebral Palsy.

    Science.gov (United States)

    Martín-Ruiz, María-Luisa; Máximo-Bocanegra, Nuria; Luna-Oliva, Laura

    2016-03-26

    The importance of an early rehabilitation process in children with cerebral palsy (CP) is widely recognized. On the one hand, new and useful treatment tools such as rehabilitation systems based on interactive technologies have appeared for rehabilitation of gross motor movements. On the other hand, from the therapeutic point of view, performing rehabilitation exercises with the facial muscles can improve the swallowing process, the facial expression through the management of muscles in the face, and even the speech of children with cerebral palsy. However, it is difficult to find interactive games to improve the detection and evaluation of oral-facial musculature dysfunctions in children with CP. This paper describes a framework based on strategies developed for interactive serious games that is created both for typically developed children and children with disabilities. Four interactive games are the core of a Virtual Environment called SONRIE. This paper demonstrates the benefits of SONRIE to monitor children's oral-facial difficulties. The next steps will focus on the validation of SONRIE to carry out the rehabilitation process of oral-facial musculature in children with cerebral palsy.

  2. Robust facial landmark detection based on initializing multiple poses

    Directory of Open Access Journals (Sweden)

    Xin Chai

    2016-10-01

Full Text Available For robot systems, robust facial landmark detection is the first and critical step for face-based human identification and facial expression recognition. In recent years, the cascaded-regression-based method has achieved excellent performance in facial landmark detection. Nevertheless, it still has certain weaknesses, such as high sensitivity to initialization. To address this problem, regression based on multiple initializations is established in a unified model; face shapes are then estimated independently according to these initializations. With a ranking strategy, the best estimate is selected as the final output. Moreover, a face shape model based on restricted Boltzmann machines is built as a constraint to improve the robustness of ranking. Experiments on three challenging datasets demonstrate the effectiveness of the proposed facial landmark detection method against state-of-the-art methods.

  3. Spoofing detection on facial images recognition using LBP and GLCM combination

    Science.gov (United States)

    Sthevanie, F.; Ramadhani, K. N.

    2018-03-01

The challenge for facial-image-based security systems is how to detect facial image falsification such as facial image spoofing. Spoofing occurs when someone tries to pose as a registered user to obtain illegal access and gain advantage from the protected system. This research implements a facial image spoofing detection method by analyzing image texture. The proposed method for texture analysis combines the Local Binary Pattern (LBP) and Gray Level Co-occurrence Matrix (GLCM) methods. The experimental results show that spoofing detection using the LBP and GLCM combination achieves a high detection rate compared to that of using only the LBP feature or the GLCM feature.
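    A minimal sketch of the LBP + GLCM feature combination, using scikit-image. The radius, number of points, GLCM offsets and chosen statistics are assumptions for illustration, not the settings reported in the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
# (older scikit-image releases spell these greycomatrix / greycoprops)

def lbp_glcm_features(gray_face):
    """gray_face: 2D uint8 array containing the cropped face region."""
    # LBP histogram over uniform patterns.
    P, R = 8, 1
    lbp = local_binary_pattern(gray_face, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)

    # GLCM statistics at a few distances and orientations.
    glcm = graycomatrix(gray_face, distances=[1, 2],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    glcm_stats = np.hstack([graycoprops(glcm, prop).ravel()
                            for prop in ("contrast", "homogeneity",
                                         "energy", "correlation")])
    return np.hstack([hist, glcm_stats])
```

    The combined vector would then feed a binary live-vs-spoof classifier (for example an SVM) trained on genuine and spoofed face crops.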

  4. Emotional facial expression detection in the peripheral visual field.

    Directory of Open Access Journals (Sweden)

    Dimitri J Bayle

    Full Text Available BACKGROUND: In everyday life, signals of danger, such as aversive facial expressions, usually appear in the peripheral visual field. Although facial expression processing in central vision has been extensively studied, this processing in peripheral vision has been poorly studied. METHODOLOGY/PRINCIPAL FINDINGS: Using behavioral measures, we explored the human ability to detect fear and disgust vs. neutral expressions and compared it to the ability to discriminate between genders at eccentricities up to 40°. Responses were faster for the detection of emotion compared to gender. Emotion was detected from fearful faces up to 40° of eccentricity. CONCLUSIONS: Our results demonstrate the human ability to detect facial expressions presented in the far periphery up to 40° of eccentricity. The increasing advantage of emotion compared to gender processing with increasing eccentricity might reflect a major implication of the magnocellular visual pathway in facial expression processing. This advantage may suggest that emotion detection, relative to gender identification, is less impacted by visual acuity and within-face crowding in the periphery. These results are consistent with specific and automatic processing of danger-related information, which may drive attention to those messages and allow for a fast behavioral reaction.

  5. A Virtual Environment to Improve the Detection of Oral-Facial Malfunction in Children with Cerebral Palsy

    Directory of Open Access Journals (Sweden)

    María-Luisa Martín-Ruiz

    2016-03-01

Full Text Available The importance of an early rehabilitation process in children with cerebral palsy (CP) is widely recognized. On the one hand, new and useful treatment tools such as rehabilitation systems based on interactive technologies have appeared for rehabilitation of gross motor movements. On the other hand, from the therapeutic point of view, performing rehabilitation exercises with the facial muscles can improve the swallowing process, the facial expression through the management of muscles in the face, and even the speech of children with cerebral palsy. However, it is difficult to find interactive games to improve the detection and evaluation of oral-facial musculature dysfunctions in children with CP. This paper describes a framework based on strategies developed for interactive serious games that is created both for typically developed children and children with disabilities. Four interactive games are the core of a Virtual Environment called SONRIE. This paper demonstrates the benefits of SONRIE to monitor children’s oral-facial difficulties. The next steps will focus on the validation of SONRIE to carry out the rehabilitation process of oral-facial musculature in children with cerebral palsy.

  6. Recovery of facial expressions using functional electrical stimulation after full-face transplantation.

    Science.gov (United States)

    Topçu, Çağdaş; Uysal, Hilmi; Özkan, Ömer; Özkan, Özlenen; Polat, Övünç; Bedeloğlu, Merve; Akgül, Arzu; Döğer, Ela Naz; Sever, Refik; Çolak, Ömer Halil

    2018-03-06

We assessed the recovery of 2 face transplantation patients with measures of complexity during neuromuscular rehabilitation. Cognitive rehabilitation methods and functional electrical stimulation were used to improve facial emotional expressions of full-face transplantation patients for 5 months. Rehabilitation and analyses were conducted at approximately 3 years after full facial transplantation in the patient group. We report complexity analysis of surface electromyography signals of these two patients in comparison to the results of 10 healthy individuals. Facial surface electromyography data were collected during 6 basic emotional expressions and 4 primary facial movements from 2 full-face transplantation patients and 10 healthy individuals to determine a strategy of functional electrical stimulation and understand the mechanisms of rehabilitation. A new personalized rehabilitation technique was developed using the wavelet packet method. Rehabilitation sessions were applied twice a month for 5 months. Subsequently, motor and functional progress was assessed by comparing the fuzzy entropy of surface electromyography data against the results obtained from patients before rehabilitation and the mean results obtained from 10 healthy subjects. At the end of personalized rehabilitation, the patient group showed improvements in their facial symmetry and their ability to perform basic facial expressions and primary facial movements. Similarity in the pattern of fuzzy entropy for facial expressions between the patient group and healthy individuals increased. Synkinesis was detected during primary facial movements in the patient group, and one patient showed synkinesis during the happiness expression. Synkinesis in the lower face region of one of the patients was eliminated for the lid tightening movement. The recovery of emotional expressions after personalized rehabilitation was satisfactory to the patients. The assessment with complexity analysis of sEMG data can be used to track motor and functional recovery during such rehabilitation.

  7. Serotonin transporter gene-linked polymorphism affects detection of facial expressions.

    Directory of Open Access Journals (Sweden)

    Ai Koizumi

Full Text Available Previous studies have demonstrated that the serotonin transporter gene-linked polymorphic region (5-HTTLPR) affects the recognition of facial expressions and attention to them. However, the relationship between 5-HTTLPR and the perceptual detection of others' facial expressions, the process which takes place prior to emotional labeling (i.e., recognition), is not clear. To examine whether the perceptual detection of emotional facial expressions is influenced by the allelic variation (short/long) of 5-HTTLPR, happy and sad facial expressions were presented at weak and mid intensities (25% and 50%). Ninety-eight participants, genotyped for 5-HTTLPR, judged whether emotion in images of faces was present. Participants with short alleles showed higher sensitivity (d') to happy than to sad expressions, while participants with long allele(s) showed no such positivity advantage. This effect of 5-HTTLPR was found at different facial expression intensities among males and females. The results suggest that at the perceptual stage, a short allele enhances the processing of positive facial expressions rather than that of negative facial expressions.
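    The sensitivity measure d' used here is the difference between the z-transformed hit and false-alarm rates from the yes/no detection task. A minimal sketch follows; the rate correction and the example counts are illustrative conventions, not the study's data or exact procedure.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = Z(hit rate) - Z(false-alarm rate) for a yes/no detection task."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    # A common correction that keeps rates strictly between 0 and 1,
    # avoiding infinite z-scores when a rate is exactly 0 or 1.
    hit_rate = (hits + 0.5) / (n_signal + 1)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: detection of weak happy expressions in a yes/no task (made-up counts).
print(d_prime(hits=38, misses=12, false_alarms=8, correct_rejections=42))
```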

  8. Automatic Emotional State Detection using Facial Expression Dynamic in Videos

    Directory of Open Access Journals (Sweden)

    Hongying Meng

    2014-11-01

Full Text Available In this paper, an automatic emotion detection system is built for a computer or machine to detect the emotional state from facial expressions in human-computer communication. First, dynamic motion features are extracted from facial expression videos, and then advanced machine learning methods for classification and regression are used to predict the emotional states. The system is evaluated on two publicly available datasets, i.e. GEMEP_FERA and AVEC2013, and satisfactory performance is achieved in comparison with the provided baseline results. With this emotional state detection capability, a machine can read the facial expression of its user automatically. This technique can be integrated into applications such as smart robots, interactive games and smart surveillance systems.
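    As a rough illustration of turning a facial-expression clip into a dynamic motion descriptor before classification or regression, the sketch below summarizes dense optical-flow magnitude per frame. The abstract does not specify its exact features; optical-flow statistics are used here purely as an assumed stand-in.

```python
import cv2
import numpy as np

def motion_descriptor(video_path):
    """Return simple summary statistics of per-frame motion energy."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")
    prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    magnitudes = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense optical flow between consecutive frames (Farneback method).
        flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitudes.append(np.linalg.norm(flow, axis=2).mean())
        prev = gray
    cap.release()
    m = np.array(magnitudes)
    return np.array([m.mean(), m.std(), m.max(), m.min()])
```

    Descriptors of this kind would then be fed to a classifier or regressor predicting categorical emotions or continuous affect dimensions.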

  9. Impaired detection of happy facial expressions in autism.

    Science.gov (United States)

    Sato, Wataru; Sawada, Reiko; Uono, Shota; Yoshimura, Sayaka; Kochiyama, Takanori; Kubota, Yasutaka; Sakihama, Morimitsu; Toichi, Motomi

    2017-10-17

    The detection of emotional facial expressions plays an indispensable role in social interaction. Psychological studies have shown that typically developing (TD) individuals more rapidly detect emotional expressions than neutral expressions. However, it remains unclear whether individuals with autistic phenotypes, such as autism spectrum disorder (ASD) and high levels of autistic traits (ATs), are impaired in this ability. We examined this by comparing TD and ASD individuals in Experiment 1 and individuals with low and high ATs in Experiment 2 using the visual search paradigm. Participants detected normal facial expressions of anger and happiness and their anti-expressions within crowds of neutral expressions. In Experiment 1, reaction times were shorter for normal angry expressions than for anti-expressions in both TD and ASD groups. This was also the case for normal happy expressions vs. anti-expressions in the TD group but not in the ASD group. Similarly, in Experiment 2, the detection of normal vs. anti-expressions was faster for angry expressions in both groups and for happy expressions in the low, but not high, ATs group. These results suggest that the detection of happy facial expressions is impaired in individuals with ASD and high ATs, which may contribute to their difficulty in creating and maintaining affiliative social relationships.

  10. Rat whisker movement after facial nerve lesion: evidence for autonomic contraction of skeletal muscle.

    Science.gov (United States)

    Heaton, James T; Sheu, Shu Hsien; Hohman, Marc H; Knox, Christopher J; Weinberg, Julie S; Kleiss, Ingrid J; Hadlock, Tessa A

    2014-04-18

Vibrissal whisking is often employed to track facial nerve regeneration in rats; however, we have observed similar degrees of whisking recovery after facial nerve transection with or without repair. We hypothesized that the source of non-facial-nerve-mediated whisker movement after chronic denervation was autonomic, cholinergic axons traveling within the infraorbital branch of the trigeminal nerve (ION). Rats underwent unilateral facial nerve transection with repair (N=7) or resection without repair (N=11). Post-operative whisking amplitude was measured weekly across 10 weeks, and during intraoperative stimulation of the ION and facial nerves at ⩾18 weeks. Whisking was also measured after subsequent ION transection (N=6) or pharmacologic blocking of the autonomic ganglia using hexamethonium (N=3), and after snout cooling intended to elicit a vasodilation reflex (N=3). Whisking recovered more quickly and with greater amplitude in rats that underwent facial nerve repair compared to resection. Non-facial-nerve-mediated whisking was elicited by electrical stimulation of the ION, temporarily diminished following hexamethonium injection, abolished by transection of the ION, and rapidly and significantly affected by snout cooling after facial nerve resection. This study provides the first behavioral and anatomical evidence of spontaneous autonomic innervation of skeletal muscle after motor nerve lesion, which not only has implications for interpreting facial nerve reinnervation results, but also calls into question whether autonomic-mediated innervation of striated muscle occurs naturally in other forms of neuropathy. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  11. The face is not an empty canvas: how facial expressions interact with facial appearance.

    Science.gov (United States)

    Hess, Ursula; Adams, Reginald B; Kleck, Robert E

    2009-12-12

    Faces are not simply blank canvases upon which facial expressions write their emotional messages. In fact, facial appearance and facial movement are both important social signalling systems in their own right. We here provide multiple lines of evidence for the notion that the social signals derived from facial appearance on the one hand and facial movement on the other interact in a complex manner, sometimes reinforcing and sometimes contradicting one another. Faces provide information on who a person is. Sex, age, ethnicity, personality and other characteristics that can define a person and the social group the person belongs to can all be derived from the face alone. The present article argues that faces interact with the perception of emotion expressions because this information informs a decoder's expectations regarding an expresser's probable emotional reactions. Facial appearance also interacts more directly with the interpretation of facial movement because some of the features that are used to derive personality or sex information are also features that closely resemble certain emotional expressions, thereby enhancing or diluting the perceived strength of particular expressions.

  12. Novel Noninvasive Brain Disease Detection System Using a Facial Image Sensor

    Directory of Open Access Journals (Sweden)

    Ting Shu

    2017-12-01

Full Text Available Brain disease, including any condition or disability that affects the brain, is fast becoming a leading cause of death. The traditional diagnostic methods for brain disease are time-consuming, inconvenient and not patient-friendly. As more and more individuals undergo examinations to determine whether they suffer from any form of brain disease, developing noninvasive, efficient, and patient-friendly detection systems will be beneficial. Therefore, in this paper, we propose a novel noninvasive brain disease detection system based on the analysis of facial colors. The system consists of four components. A facial image is first captured through a specialized sensor, and four facial key blocks are then located automatically from the various facial regions. Color features are extracted from each block to form a feature vector for classification via the Probabilistic Collaborative based Classifier. To thoroughly test the system and its performance, seven facial key block combinations were tested experimentally. The best result was achieved using the second facial key block, showing that the Probabilistic Collaborative based Classifier is the most suitable. The overall performance of the proposed system achieves an accuracy of 95%, a sensitivity of 94.33%, a specificity of 95.67%, and an average processing time (for one sample) of less than 1 min for brain disease detection.
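    A minimal sketch of the color-feature step described above: statistics are computed inside predefined facial key blocks and concatenated into one vector. The block coordinates are placeholders, and the paper's Probabilistic Collaborative based Classifier is not reproduced here.

```python
import numpy as np

# Hypothetical facial key blocks as (row, col, height, width) in an aligned image.
KEY_BLOCKS = {
    "forehead": (40, 80, 30, 60),
    "left_cheek": (120, 40, 30, 30),
    "right_cheek": (120, 150, 30, 30),
    "chin": (180, 90, 25, 40),
}

def color_features(face_rgb):
    """face_rgb: (H, W, 3) array of an aligned facial image."""
    feats = []
    for r, c, h, w in KEY_BLOCKS.values():
        block = face_rgb[r:r + h, c:c + w].reshape(-1, 3).astype(float)
        feats.extend(block.mean(axis=0))   # mean R, G, B per block
        feats.extend(block.std(axis=0))    # std  R, G, B per block
    return np.array(feats)
```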

  13. Novel dynamic Bayesian networks for facial action element recognition and understanding

    Science.gov (United States)

    Zhao, Wei; Park, Jeong-Seon; Choi, Dong-You; Lee, Sang-Woong

    2011-12-01

    In daily life, language is an important tool of communication between people. Besides language, facial action can also provide a great amount of information. Therefore, facial action recognition has become a popular research topic in the field of human-computer interaction (HCI). However, facial action recognition is quite a challenging task due to its complexity. In a literal sense, there are thousands of facial muscular movements, many of which have very subtle differences. Moreover, muscular movements always occur simultaneously when the pose is changed. To address this problem, we first build a fully automatic facial points detection system based on a local Gabor filter bank and principal component analysis. Then, novel dynamic Bayesian networks are proposed to perform facial action recognition using the junction tree algorithm over a limited number of feature points. In order to evaluate the proposed method, we have used the Korean face database for model training. For testing, we used the CUbiC FacePix, facial expressions and emotion database, Japanese female facial expression database, and our own database. Our experimental results clearly demonstrate the feasibility of the proposed approach.

  14. EquiFACS: The Equine Facial Action Coding System.

    Directory of Open Access Journals (Sweden)

    Jen Wathan

Full Text Available Although previous studies of horses have investigated their facial expressions in specific contexts, e.g. pain, until now there has been no methodology available that documents all the possible facial movements of the horse and provides a way to record all potential facial configurations. This is essential for an objective description of horse facial expressions across a range of contexts that reflect different emotional states. Facial Action Coding Systems (FACS) provide a systematic methodology of identifying and coding facial expressions on the basis of underlying facial musculature and muscle movement. FACS are anatomically based and document all possible facial movements rather than a configuration of movements associated with a particular situation. Consequently, FACS can be applied as a tool for a wide range of research questions. We developed FACS for the domestic horse (Equus caballus) through anatomical investigation of the underlying musculature and subsequent analysis of naturally occurring behaviour captured on high quality video. Discrete facial movements were identified and described in terms of the underlying muscle contractions, in correspondence with previous FACS systems. The reliability of others to be able to learn this system (EquiFACS) and consistently code behavioural sequences was high--and this included people with no previous experience of horses. A wide range of facial movements were identified, including many that are also seen in primates and other domestic animals (dogs and cats). EquiFACS provides a method that can now be used to document the facial movements associated with different social contexts and thus to address questions relevant to understanding social cognition and comparative psychology, as well as informing current veterinary and animal welfare practices.

  15. A Quantitative Assessment of Lip Movements in Different Facial Expressions Through 3-Dimensional on 3-Dimensional Superimposition: A Cross-Sectional Study.

    Science.gov (United States)

    Gibelli, Daniele; Codari, Marina; Pucciarelli, Valentina; Dolci, Claudia; Sforza, Chiarella

    2017-11-23

    The quantitative assessment of facial modifications from mimicry is of relevant interest for the rehabilitation of patients who can no longer produce facial expressions. This study investigated a novel application of 3-dimensional on 3-dimensional superimposition for facial mimicry. This cross-sectional study was based on 10 men 30 to 40 years old who underwent stereophotogrammetry for neutral, happy, sad, and angry expressions. Registration of facial expressions on the neutral expression was performed. Root mean square (RMS) point-to-point distance in the labial area was calculated between each facial expression and the neutral one and was considered the main parameter for assessing facial modifications. In addition, effect size (Cohen d) was calculated to assess the effects of labial movements in relation to facial modifications. All participants were free from possible facial deformities, pathologies, or trauma that could affect facial mimicry. RMS values of facial areas differed significantly among facial expressions (P = .0004 by Friedman test). The widest modifications of the lips were observed in happy expressions (RMS, 4.06 mm; standard deviation [SD], 1.14 mm), with a statistically relevant difference compared with the sad (RMS, 1.42 mm; SD, 1.15 mm) and angry (RMS, 0.76 mm; SD, 0.45 mm) expressions. The effect size of labial versus total face movements was limited for happy and sad expressions and large for the angry expression. This study found that a happy expression provides wider modifications of the lips than the other facial expressions and suggests a novel procedure for assessing regional changes from mimicry. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
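    The RMS point-to-point distance used above is straightforward once the expression scan has been registered onto the neutral scan: corresponding lip landmarks are compared and their squared distances averaged. The sketch below shows that calculation only; the toy coordinates are placeholders for real registered 3D data.

```python
import numpy as np

def rms_point_to_point(expr_pts, neutral_pts):
    """RMS distance between corresponding (N, 3) arrays of 3D points (mm)."""
    d = np.linalg.norm(expr_pts - neutral_pts, axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Toy example with three corresponding lip points (millimetres).
neutral = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [20.0, 0.0, 0.0]])
happy = neutral + np.array([[1.5, 3.0, 0.5], [0.0, 4.0, 0.0], [-1.5, 3.0, 0.5]])
print(f"RMS lip displacement: {rms_point_to_point(happy, neutral):.2f} mm")
```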

  16. Automatic Detection of Acromegaly From Facial Photographs Using Machine Learning Methods.

    Science.gov (United States)

    Kong, Xiangyi; Gong, Shun; Su, Lijuan; Howard, Newton; Kong, Yanguo

    2018-01-01

Automatic early detection of acromegaly is theoretically possible from facial photographs, which can lessen the prevalence and increase the cure probability. In this study, several popular machine learning algorithms were trained on a retrospective development dataset consisting of 527 acromegaly patients and 596 normal subjects. We first used OpenCV to detect the face bounding box, and then cropped and resized it to the same pixel dimensions. From the detected faces, locations of facial landmarks, which were the potential clinical indicators, were extracted. Frontalization was then adopted to synthesize frontal facing views to improve the performance. Several popular machine learning methods, including LM, KNN, SVM, RT, CNN, and EM, were used to automatically identify acromegaly from the detected facial photographs, extracted facial landmarks, and synthesized frontal faces. The trained models were evaluated using a separate dataset, of which half of the subjects were diagnosed with acromegaly by a growth hormone suppression test. The best result of our proposed methods showed a PPV of 96%, an NPV of 95%, a sensitivity of 96% and a specificity of 96%. Artificial intelligence can thus automatically detect acromegaly early, with high sensitivity and specificity. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
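    A minimal sketch of the preprocessing step described: detect the face bounding box with OpenCV, then crop and resize every photograph to the same pixel dimensions before feature extraction. The Haar cascade file and the 128x128 output size are common defaults, not necessarily the study's choices.

```python
import cv2

# Standard frontal-face Haar cascade shipped with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def cropped_face(image_path, size=(128, 128)):
    """Return the largest detected face, cropped and resized, or None."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    # Keep the largest detection, assumed to be the subject's face.
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return cv2.resize(img[y:y + h, x:x + w], size)
```

    Crops like this would then go on to landmark extraction, frontalization, and the classifiers (KNN, SVM, CNN, ...) compared in the study.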

  17. Continuous emotion detection using EEG signals and facial expressions

    NARCIS (Netherlands)

    Soleymani, Mohammad; Asghari-Esfeden, Sadjad; Pantic, Maja; Fu, Yun

Emotions play an important role in how we select and consume multimedia. Recent advances in affect detection are focused on detecting emotions continuously. In this paper, for the first time, we continuously detect valence from electroencephalogram (EEG) signals and facial expressions in response to videos.

  18. Binary pattern analysis for 3D facial action unit detection

    NARCIS (Netherlands)

    Sandbach, Georgia; Zafeiriou, Stefanos; Pantic, Maja

    2012-01-01

    In this paper we propose new binary pattern features for use in the problem of 3D facial action unit (AU) detection. Two representations of 3D facial geometries are employed, the depth map and the Azimuthal Projection Distance Image (APDI). To these the traditional Local Binary Pattern is applied,

  19. Is evaluation of humorous stimuli associated with frontal cortex morphology? A pilot study using facial micro-movement analysis and MRI.

    Science.gov (United States)

    Juckel, Georg; Mergl, Roland; Brüne, Martin; Villeneuve, Isabelle; Frodl, Thomas; Schmitt, Gisela; Zetzsche, Thomas; Born, Christine; Hahn, Klaus; Reiser, Maximilian; Möller, Hans-Jürgen; Bär, Karl-Jürgen; Hegerl, Ulrich; Meisenzahl, Eva Maria

    2011-05-01

    Humour involves the ability to detect incongruous ideas violating social rules and norms. Accordingly, humour requires a complex array of cognitive skills for which intact frontal lobe functioning is critical. Here, we sought to examine the association of facial expression during an emotion inducing experiment with frontal cortex morphology in healthy subjects. Thirty-one healthy male subjects (mean age: 30.8±8.9 years; all right-handers) watching a humorous movie ("Mr. Bean") were investigated. Markers fixed at certain points of the face emitting high-frequency ultrasonic signals allowed direct measurement of facial movements with high spatial-temporal resolution. Magnetic resonance images of the frontal cortex were obtained with a 1.5-T Magnetom using a coronar T2- and protondensity-weighted Dual-Echo-Sequence and a 3D-magnetization-prepared rapid gradient echo (MPRAGE) sequence. Volumetric analysis was performed using BRAINS. Frontal cortex volume was partly associated with slower speed of "laughing" movements of the eyes ("genuine" or Duchenne smile). Specifically, grey matter volume was associated with longer emotional reaction time ipsilaterally, even when controlled for age and daily alcohol intake. These results lend support to the hypothesis that superior cognitive evaluation of humorous stimuli - mediated by larger prefrontal grey and white matter volume - leads to a measurable reduction of speed of emotional expressivity in normal adults. Copyright © 2010 Elsevier Srl. All rights reserved.

  20. Automatic change detection to facial expressions in adolescents

    DEFF Research Database (Denmark)

    Liu, Tongran; Xiao, Tong; Jiannong, Shi

    2016-01-01

Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were... in facial expressions between the two age groups. The current findings demonstrated that the adolescent group featured more negative vMMN amplitudes than the adult group in the fronto-central region during the 120–200 ms interval. During the time window of 370–450 ms, only the adult group showed better... automatic processing of fearful faces than happy faces. The present study indicated that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and sheds light on the neurodevelopment of automatic processes concerning social-emotional information....

  1. Subject independent facial expression recognition with robust face detection using a convolutional neural network.

    Science.gov (United States)

    Matsugu, Masakazu; Mori, Katsuhiko; Mitari, Yusuke; Kaneda, Yuji

    2003-01-01

Reliable detection of ordinary facial expressions (e.g. smile) despite the variability among individuals as well as face appearance is an important step toward the realization of perceptual user interfaces with autonomous perception of persons. We describe a rule-based algorithm for robust facial expression recognition combined with robust face detection using a convolutional neural network. In this study, we address the problem of subject independence as well as translation, rotation, and scale invariance in the recognition of facial expression. The result shows reliable detection of smiles with a recognition rate of 97.6% for 5600 still images of more than 10 subjects. The proposed algorithm demonstrated the ability to discriminate smiling from talking based on the saliency score obtained from voting visual cues. To the best of our knowledge, it is the first facial expression recognition model with the property of subject independence combined with robustness to variability in facial appearance.

  2. Dissociation between recognition and detection advantage for facial expressions: a meta-analysis.

    Science.gov (United States)

    Nummenmaa, Lauri; Calvo, Manuel G

    2015-04-01

    Happy facial expressions are recognized faster and more accurately than other expressions in categorization tasks, whereas detection in visual search tasks is widely believed to be faster for angry than happy faces. We used meta-analytic techniques for resolving this categorization versus detection advantage discrepancy for positive versus negative facial expressions. Effect sizes were computed on the basis of the r statistic for a total of 34 recognition studies with 3,561 participants and 37 visual search studies with 2,455 participants, yielding a total of 41 effect sizes for recognition accuracy, 25 for recognition speed, and 125 for visual search speed. Random-effects meta-analysis was conducted to estimate effect sizes at the population level. For recognition tasks, an advantage in recognition accuracy and speed for happy expressions was found for all stimulus types. In contrast, for visual search tasks, moderator analysis revealed that a happy face detection advantage was restricted to photographic faces, whereas a clear angry face advantage was found for schematic and "smiley" faces. A robust detection advantage for nonhappy faces was observed even when stimulus emotionality was distorted by inversion or rearrangement of the facial features, suggesting that visual features primarily drive the search. We conclude that the recognition advantage for happy faces is a genuine phenomenon related to processing of facial expression category and affective valence. In contrast, detection advantages toward either happy (photographic stimuli) or nonhappy (schematic) faces are contingent on visual stimulus features rather than facial expression, and may not involve categorical or affective processing. (c) 2015 APA, all rights reserved.
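
    The pooling procedure sketched below is a generic random-effects meta-analysis of correlation effect sizes (Fisher r-to-z transform plus DerSimonian-Laird estimation of between-study variance); the numbers are invented for illustration and are not the study's data.

        import numpy as np

        r = np.array([0.45, 0.30, 0.55, 0.20])   # per-study effect sizes (r), invented
        n = np.array([80, 120, 60, 150])         # per-study sample sizes, invented

        z = np.arctanh(r)                        # Fisher's r-to-z transform
        v = 1.0 / (n - 3)                        # sampling variance of z
        w = 1.0 / v                              # fixed-effect weights

        z_fixed = np.sum(w * z) / np.sum(w)
        Q = np.sum(w * (z - z_fixed) ** 2)       # heterogeneity statistic
        df = len(r) - 1
        C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (Q - df) / C)            # DerSimonian-Laird between-study variance

        w_star = 1.0 / (v + tau2)                # random-effects weights
        z_re = np.sum(w_star * z) / np.sum(w_star)
        se_re = np.sqrt(1.0 / np.sum(w_star))

        print("pooled r:", np.tanh(z_re))
        print("95% CI:", np.tanh(z_re - 1.96 * se_re), np.tanh(z_re + 1.96 * se_re))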

  3. Subthalamic nucleus detects unnatural android movement.

    Science.gov (United States)

    Ikeda, Takashi; Hirata, Masayuki; Kasaki, Masashi; Alimardani, Maryam; Matsushita, Kojiro; Yamamoto, Tomoyuki; Nishio, Shuichi; Ishiguro, Hiroshi

    2017-12-19

    An android, i.e., a realistic humanoid robot with human-like capabilities, may induce an uncanny feeling in human observers. The uncanny feeling about an android has two main causes: its appearance and movement. The uncanny feeling about an android increases when its appearance is almost human-like but its movement is not fully natural or comparable to human movement. Even if an android has human-like flexible joints, its slightly jerky movements cause a human observer to detect subtle unnaturalness in them. However, the neural mechanism underlying the detection of unnatural movements remains unclear. We conducted an fMRI experiment to compare the observation of an android and the observation of a human on which the android is modelled, and we found differences in the activation pattern of the brain regions that are responsible for the production of smooth and natural movement. More specifically, we found that the visual observation of the android, compared with that of the human model, caused greater activation in the subthalamic nucleus (STN). When the android's slightly jerky movements are visually observed, the STN detects their subtle unnaturalness. This finding suggests that the detection of unnatural movements is attributed to an error signal resulting from a mismatch between a visual input and an internal model for smooth movement.

  4. The Emotional Modulation of Facial Mimicry: A Kinematic Study

    Directory of Open Access Journals (Sweden)

    Antonella Tramacere

    2018-01-01

    Full Text Available It is well-established that the observation of emotional facial expression induces facial mimicry responses in the observers. However, how the interaction between emotional and motor components of facial expressions can modulate the motor behavior of the perceiver is still unknown. We have developed a kinematic experiment to evaluate the effect of different oro-facial expressions on perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response Times and kinematics parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. Results evidenced a dissociated effect on reaction times and movement kinematics. We found shorter reaction time when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with facial mimicry effect. On the contrary, during execution, the perception of smile was associated with the facilitation, in terms of shorter duration and higher velocity of the incongruent movement, i.e., lip protrusion. The same effect resulted in response to kiss and spit that significantly facilitated the execution of lip stretching. We called this phenomenon facial mimicry reversal effect, intended as the overturning of the effect normally observed during facial mimicry. In general, the findings show that both motor features and types of emotional oro-facial gestures (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution could be speeded by gestures that are motorically incongruent with the observed one. Moreover, valence

  5. The Emotional Modulation of Facial Mimicry: A Kinematic Study.

    Science.gov (United States)

    Tramacere, Antonella; Ferrari, Pier F; Gentilucci, Maurizio; Giuffrida, Valeria; De Marco, Doriana

    2017-01-01

    It is well-established that the observation of emotional facial expression induces facial mimicry responses in the observers. However, how the interaction between emotional and motor components of facial expressions can modulate the motor behavior of the perceiver is still unknown. We have developed a kinematic experiment to evaluate the effect of different oro-facial expressions on perceiver's face movements. Participants were asked to perform two movements, i.e., lip stretching and lip protrusion, in response to the observation of four meaningful (i.e., smile, angry-mouth, kiss, and spit) and two meaningless mouth gestures. All the stimuli were characterized by different motor patterns (mouth aperture or mouth closure). Response Times and kinematics parameters of the movements (amplitude, duration, and mean velocity) were recorded and analyzed. Results evidenced a dissociated effect on reaction times and movement kinematics. We found shorter reaction time when a mouth movement was preceded by the observation of a meaningful and motorically congruent oro-facial gesture, in line with facial mimicry effect. On the contrary, during execution, the perception of smile was associated with the facilitation, in terms of shorter duration and higher velocity of the incongruent movement, i.e., lip protrusion. The same effect resulted in response to kiss and spit that significantly facilitated the execution of lip stretching. We called this phenomenon facial mimicry reversal effect , intended as the overturning of the effect normally observed during facial mimicry. In general, the findings show that both motor features and types of emotional oro-facial gestures (conveying positive or negative valence) affect the kinematics of subsequent mouth movements at different levels: while congruent motor features facilitate a general motor response, motor execution could be speeded by gestures that are motorically incongruent with the observed one. Moreover, valence effect depends on

  6. Impact of individually controlled facially applied air movement on perceived air quality at high humidity

    Energy Technology Data Exchange (ETDEWEB)

    Skwarczynski, M.A. [Faculty of Environmental Engineering, Institute of Environmental Protection Engineering, Department of Indoor Environment Engineering, Lublin University of Technology, Lublin (Poland); International Centre for Indoor Environment and Energy, Department of Civil Engineering, Technical University of Denmark, Copenhagen (Denmark); Melikov, A.K.; Lyubenova, V. [International Centre for Indoor Environment and Energy, Department of Civil Engineering, Technical University of Denmark, Copenhagen (Denmark); Kaczmarczyk, J. [Faculty of Energy and Environmental Engineering, Department of Heating, Ventilation and Dust Removal Technology, Silesian University of Technology, Gliwice (Poland)

    2010-10-15

    The effect of facially applied air movement on perceived air quality (PAQ) at high humidity was studied. Thirty subjects (21 males and 9 females) participated in three, 3-h experiments performed in a climate chamber. The experimental conditions covered three combinations of relative humidity and local air velocity under a constant air temperature of 26°C, namely: 70% relative humidity without air movement, 30% relative humidity without air movement and 70% relative humidity with air movement under isothermal conditions. Personalized ventilation was used to supply room air from the front toward the upper part of the body (upper chest, head). The subjects could control the flow rate (velocity) of the supplied air in the vicinity of their bodies. The results indicate that an airflow with elevated velocity applied to the face significantly improves the acceptability of the air quality at a room air temperature of 26°C and relative humidity of 70%. (author)

  7. Detection of movement intention from single-trial movement-related cortical potentials

    Science.gov (United States)

    Niazi, Imran Khan; Jiang, Ning; Tiberghien, Olivier; Feldbæk Nielsen, Jørgen; Dremstrup, Kim; Farina, Dario

    2011-10-01

    Detection of movement intention from neural signals combined with assistive technologies may be used for effective neurofeedback in rehabilitation. In order to promote plasticity, a causal relation between intended actions (detected for example from the EEG) and the corresponding feedback should be established. This requires reliable detection of motor intentions. In this study, we propose a method to detect movements from EEG with limited latency. In a self-paced asynchronous BCI paradigm, the initial negative phase of the movement-related cortical potentials (MRCPs), extracted from multi-channel scalp EEG was used to detect motor execution/imagination in healthy subjects and stroke patients. For MRCP detection, it was demonstrated that a new optimized spatial filtering technique led to better accuracy than a large Laplacian spatial filter and common spatial pattern. With the optimized spatial filter, the true positive rate (TPR) for detection of movement execution in healthy subjects (n = 15) was 82.5 ± 7.8%, with latency of -66.6 ± 121 ms. Although TPR decreased with motor imagination in healthy subject (n = 10, 64.5 ± 5.33%) and with attempted movements in stroke patients (n = 5, 55.01 ± 12.01%), the results are promising for the application of this approach to provide patient-driven real-time neurofeedback.
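
    As a rough illustration of one of the baseline spatial filters mentioned above (a large Laplacian: the channel of interest minus the mean of surrounding channels), the sketch below applies such a filter to placeholder EEG and then low-pass filters the result to emphasize the slow MRCP band. The channel indices, sampling rate, and threshold rule are assumptions, not the study's optimized method.

        import numpy as np
        from scipy.signal import butter, filtfilt

        rng = np.random.default_rng(6)
        fs = 256
        eeg = rng.normal(0, 1, (9, 10 * fs))      # 9 channels x 10 s of fake EEG (uV)

        CZ, NEIGHBOURS = 0, [1, 2, 3, 4]          # assumed indices of Cz and its neighbours
        surrogate = eeg[CZ] - eeg[NEIGHBOURS].mean(axis=0)   # Laplacian-filtered trace

        # Keep only the slow (<3 Hz) band where the MRCP negativity lives.
        b, a = butter(2, 3 / (fs / 2), btype="low")
        slow = filtfilt(b, a, surrogate)

        # Toy threshold detector; with pure noise, no crossings are expected.
        detections = np.where(slow < -2.0)[0] / fs
        print("threshold crossings (s):", np.unique(detections.round(1))[:10])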

  8. Automatic facial pore analysis system using multi-scale pore detection.

    Science.gov (United States)

    Sun, J Y; Kim, S W; Lee, S H; Choi, J E; Ko, S J

    2017-08-01

    As facial pore widening and its treatments have become common concerns in the beauty care field, the necessity for an objective pore-analyzing system has been increased. Conventional apparatuses lack in usability requiring strong light sources and a cumbersome photographing process, and they often yield unsatisfactory analysis results. This study was conducted to develop an image processing technique for automatic facial pore analysis. The proposed method detects facial pores using multi-scale detection and optimal scale selection scheme and then extracts pore-related features such as total area, average size, depth, and the number of pores. Facial photographs of 50 subjects were graded by two expert dermatologists, and correlation analyses between the features and clinical grading were conducted. We also compared our analysis result with those of conventional pore-analyzing devices. The number of large pores and the average pore size were highly correlated with the severity of pore enlargement. In comparison with the conventional devices, the proposed analysis system achieved better performance showing stronger correlation with the clinical grading. The proposed system is highly accurate and reliable for measuring the severity of skin pore enlargement. It can be suitably used for objective assessment of the pore tightening treatments. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
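
    A minimal sketch of the multi-scale idea described above (not the authors' pipeline): pores appear as small dark blobs, so a Laplacian-of-Gaussian blob detector run over several scales can count them and estimate their sizes. The file name, scale range, and threshold are illustrative assumptions.

        import numpy as np
        from skimage import io, util
        from skimage.feature import blob_log

        # "cheek_crop.png" is a hypothetical grayscale close-up of a cheek region.
        img = util.img_as_float(io.imread("cheek_crop.png", as_gray=True))
        inverted = 1.0 - img                 # pores are dark; LoG responds to bright blobs

        # Search across several sigmas so both small and large pores respond.
        blobs = blob_log(inverted, min_sigma=1, max_sigma=6, num_sigma=10, threshold=0.08)
        radii = blobs[:, 2] * np.sqrt(2)     # convert sigma to an approximate radius

        print("pore count:", len(blobs))
        print("mean pore radius (px):", radii.mean() if len(blobs) else 0.0)
        print("total pore area (px^2):", float(np.sum(np.pi * radii ** 2)))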

  9. A Practice Indexes for Improving Facial Movements of Brass Instrument Players

    Science.gov (United States)

    Ito, Kyoko; Hirano, Takeshi; Noto, Kazufumi; Nishida, Shogo; Ohtsuki, Tatsuyuki

    In order to propose practice indexes for the improvement of the embouchure of French horn players, two experimental studies have been conducted. In both studies, the same task was performed by advanced and amateur French horn players. The first study investigated the activity, while performing the above-mentioned task, of the 5 facial muscles (levator labii superioris, zygomaticus major, depressor anguli oris, depressor labii inferioris, and risorius muscles) on the right side of the face by surface electromyography, and the facial movement on the left side of the face by attaching two markers above each muscle and using two high-speed cameras simultaneously. The results of the study showed that it is possible for the four markers around the lower lip to serve as practice indexes. The second study evaluated whether the above-mentioned markers are appropriate as practice indexes using a 3-D tracking system and questionnaires. The results showed that both the advanced and the amateur players assessed that the markers were suitable as practice indexes for improving the embouchure. This set of approaches could be useful for selecting practice indexes and developing scientific practice methods not only for the French horn but also for other instruments and other fields.

  10. Facial Expression Recognition

    NARCIS (Netherlands)

    Pantic, Maja; Li, S.; Jain, A.

    2009-01-01

    Facial expression recognition is a process performed by humans or computers, which consists of: 1. Locating faces in the scene (e.g., in an image; this step is also referred to as face detection), 2. Extracting facial features from the detected face region (e.g., detecting the shape of facial

  11. Facial feature tracking: a psychophysiological measure to assess exercise intensity?

    Science.gov (United States)

    Miles, Kathleen H; Clark, Bradley; Périard, Julien D; Goecke, Roland; Thompson, Kevin G

    2018-04-01

    The primary aim of this study was to determine whether facial feature tracking reliably measures changes in facial movement across varying exercise intensities. Fifteen cyclists completed three incremental-intensity cycling trials to exhaustion while their faces were recorded with video cameras. Facial feature tracking was found to be a moderately reliable measure of facial movement during incremental-intensity cycling (intra-class correlation coefficient = 0.65-0.68). Facial movement (whole face (WF), upper face (UF), lower face (LF) and head movement (HM)) increased with exercise intensity, from lactate threshold one (LT1) until attainment of maximal aerobic power (MAP) (WF 3464 ± 3364 mm). Upper face movement exceeded lower face movement at all exercise intensities (UF minus LF at: LT1, 1048 ± 383 mm; LT2, 1208 ± 611 mm; MAP, 1401 ± 712 mm). Facial feature tracking may therefore offer a psychophysiological measure of exercise intensity.

  12. A Virtual Environment to Improve the Detection of Oral-Facial Malfunction in Children with Cerebral Palsy

    OpenAIRE

    María-Luisa Martín-Ruiz; Nuria Máximo-Bocanegra; Laura Luna-Oliva

    2016-01-01

    The importance of an early rehabilitation process in children with cerebral palsy (CP) is widely recognized. On the one hand, new and useful treatment tools such as rehabilitation systems based on interactive technologies have appeared for rehabilitation of gross motor movements. On the other hand, from the therapeutic point of view, performing rehabilitation exercises with the facial muscles can improve the swallowing process, the facial expression through the management of muscles in the fa...

  13. Slowing down Presentation of Facial Movements and Vocal Sounds Enhances Facial Expression Recognition and Induces Facial-Vocal Imitation in Children with Autism

    Science.gov (United States)

    Tardif, Carole; Laine, France; Rodriguez, Melissa; Gepner, Bruno

    2007-01-01

    This study examined the effects of slowing down presentation of facial expressions and their corresponding vocal sounds on facial expression recognition and facial and/or vocal imitation in children with autism. Twelve autistic children and twenty-four normal control children were presented with emotional and non-emotional facial expressions on…

  14. Singing emotionally: A study of pre-production, production, and post-production facial expressions

    Directory of Open Access Journals (Sweden)

    Lena Rachel Quinto

    2014-04-01

    Full Text Available Singing involves vocal production accompanied by a dynamic and meaningful use of facial expressions, which may serve as ancillary gestures that complement, disambiguate, or reinforce the acoustic signal. In this investigation, we examined the use of facial movements to communicate emotion, focusing on movements arising in three epochs: before vocalisation (pre-production), during vocalisation (production), and immediately after vocalisation (post-production). The stimuli were recordings of seven vocalists’ facial movements as they sang short (14-syllable) melodic phrases with the intention of communicating happiness, sadness, irritation, or no emotion. Facial movements were presented as point-light displays to 16 observers who judged the emotion conveyed. Experiment 1 revealed that the accuracy of emotional judgement varied with singer, emotion and epoch. Accuracy was highest in the production epoch; however, happiness was well communicated in the pre-production epoch. In Experiment 2, observers judged point-light displays of exaggerated movements. The ratings suggested that the extent of facial and head movements is largely perceived as a gauge of emotional arousal. In Experiment 3, observers rated point-light displays of scrambled movements. Configural information was removed in these stimuli but velocity and acceleration were retained. Exaggerated scrambled movements were likely to be associated with happiness or irritation whereas unexaggerated scrambled movements were more likely to be identified as neutral. An analysis of the motions of singers revealed systematic changes in facial movement as a function of the emotional intentions of singers. The findings confirm the central role of facial expressions in vocal emotional communication, and highlight individual differences between singers in the amount and intelligibility of facial movements made before, during, and after vocalization.

  15. Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.

    Science.gov (United States)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-10-05

    Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in specific patterns, and rarely in isolation. Yet, most existing methods for automatic action unit detection fail to exploit dependencies among them, and the corresponding facial features. To address this, we propose a novel multi-conditional latent variable model for simultaneous fusion of facial features and joint action unit detection. Specifically, the proposed model performs feature fusion in a generative fashion via a low-dimensional shared subspace, while simultaneously performing action unit detection using a discriminative classification approach. We show that by combining the merits of both approaches, the proposed methodology outperforms existing purely discriminative/generative methods for the target task. To reduce the number of parameters, and avoid overfitting, a novel Bayesian learning approach based on Monte Carlo sampling is proposed, to integrate out the shared subspace. We validate the proposed method on posed and spontaneous data from three publicly available datasets (CK+, DISFA and Shoulder-pain), and show that both feature fusion and joint learning of action units leads to improved performance compared to the state-of-the-art methods for the task.

  16. Posed versus spontaneous facial expressions are modulated by opposite cerebral hemispheres.

    Science.gov (United States)

    Ross, Elliott D; Pulusu, Vinay K

    2013-05-01

    Clinical research has indicated that the left face is more expressive than the right face, suggesting that modulation of facial expressions is lateralized to the right hemisphere. The findings, however, are controversial because the results explain, on average, approximately 4% of the data variance. Using high-speed videography, we sought to determine if movement-onset asymmetry was a more powerful research paradigm than terminal movement asymmetry. The results were very robust, explaining up to 70% of the data variance. Posed expressions began overwhelmingly on the right face whereas spontaneous expressions began overwhelmingly on the left face. This dichotomy was most robust for upper facial expressions. In addition, movement-onset asymmetries did not predict terminal movement asymmetries, which were not significantly lateralized. The results support recent neuroanatomic observations that upper versus lower facial movements have different forebrain motor representations and recent behavioral constructs that posed versus spontaneous facial expressions are modulated preferentially by opposite cerebral hemispheres and that spontaneous facial expressions are graded rather than non-graded movements. Published by Elsevier Ltd.

  17. Beauty hinders attention switch in change detection: the role of facial attractiveness and distinctiveness.

    Directory of Open Access Journals (Sweden)

    Wenfeng Chen

    Full Text Available BACKGROUND: Recent research has shown that the presence of a task-irrelevant attractive face can induce a transient diversion of attention from a perceptual task that requires covert deployment of attention to one of the two locations. However, it is not known whether this spontaneous appraisal for facial beauty also modulates attention in change detection among multiple locations, where a slower, and more controlled search process is simultaneously affected by the magnitude of a change and the facial distinctiveness. Using the flicker paradigm, this study examines how spontaneous appraisal for facial beauty affects the detection of identity change among multiple faces. METHODOLOGY/PRINCIPAL FINDINGS: Participants viewed a display consisting of two alternating frames of four faces separated by a blank frame. In half of the trials, one of the faces (target face) changed to a different person. The task of the participant was to indicate whether a change of face identity had occurred. The results showed that (1) observers were less efficient at detecting identity change among multiple attractive faces relative to unattractive faces when the target and distractor faces were not highly distinctive from one another; and (2) it is difficult to detect a change if the new face is similar to the old. CONCLUSIONS/SIGNIFICANCE: The findings suggest that attractive faces may interfere with the attention-switch process in change detection. The results also show that attention in change detection was strongly modulated by physical similarity between the alternating faces. Although facial beauty is a powerful stimulus that has well-demonstrated priority, its influence on change detection is easily superseded by low-level image similarity. The visual system appears to take a different approach to facial beauty when a task requires resource-demanding feature comparisons.

  18. Beauty hinders attention switch in change detection: the role of facial attractiveness and distinctiveness.

    Science.gov (United States)

    Chen, Wenfeng; Liu, Chang Hong; Nakabayashi, Kazuyo

    2012-01-01

    Recent research has shown that the presence of a task-irrelevant attractive face can induce a transient diversion of attention from a perceptual task that requires covert deployment of attention to one of the two locations. However, it is not known whether this spontaneous appraisal for facial beauty also modulates attention in change detection among multiple locations, where a slower, and more controlled search process is simultaneously affected by the magnitude of a change and the facial distinctiveness. Using the flicker paradigm, this study examines how spontaneous appraisal for facial beauty affects the detection of identity change among multiple faces. Participants viewed a display consisting of two alternating frames of four faces separated by a blank frame. In half of the trials, one of the faces (target face) changed to a different person. The task of the participant was to indicate whether a change of face identity had occurred. The results showed that (1) observers were less efficient at detecting identity change among multiple attractive faces relative to unattractive faces when the target and distractor faces were not highly distinctive from one another; and (2) it is difficult to detect a change if the new face is similar to the old. The findings suggest that attractive faces may interfere with the attention-switch process in change detection. The results also show that attention in change detection was strongly modulated by physical similarity between the alternating faces. Although facial beauty is a powerful stimulus that has well-demonstrated priority, its influence on change detection is easily superseded by low-level image similarity. The visual system appears to take a different approach to facial beauty when a task requires resource-demanding feature comparisons.

  19. 3D Face Model Dataset: Automatic Detection of Facial Expressions and Emotions for Educational Environments

    Science.gov (United States)

    Chickerur, Satyadhyan; Joshi, Kartik

    2015-01-01

    Emotion detection using facial images is a technique that researchers have been using for the last two decades to try to analyze a person's emotional state given his/her image. Detection of various kinds of emotion using facial expressions of students in educational environment is useful in providing insight into the effectiveness of tutoring…

  20. Research on driver fatigue detection

    Science.gov (United States)

    Zhang, Ting; Chen, Zhong; Ouyang, Chao

    2018-03-01

    Driver fatigue is one of the main causes of traffic accidents, so a driver fatigue detection system is of great significance for avoiding them. This paper presents a real-time method based on the fusion of multiple facial features, including eye closure, yawning, and head movement. The eye state is classified as open or closed by a linear SVM classifier trained on HOG features of the detected eye. The mouth state is determined from the width-to-height ratio of the mouth. Head movement is detected from the head pitch angle calculated from facial landmarks. The driver's fatigue state is then inferred by a model trained on the above features. According to the experimental results, driver fatigue detection achieves excellent performance, indicating that the developed method is valuable for avoiding traffic accidents caused by driver fatigue.
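
    The eye-state step described above (HOG features of an eye crop fed to a linear SVM) can be illustrated with the minimal sketch below. The random arrays stand in for real labelled eye images, and the patch size and HOG parameters are assumptions, not the paper's settings.

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(0)
        eye_crops = rng.random((40, 24, 48))       # 40 fake 24x48 grayscale eye patches
        labels = rng.integers(0, 2, size=40)       # 0 = closed, 1 = open (placeholder labels)

        def hog_features(patch):
            # Histogram-of-oriented-gradients descriptor of one eye patch.
            return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2), feature_vector=True)

        X = np.array([hog_features(p) for p in eye_crops])
        clf = LinearSVC().fit(X, labels)           # linear SVM on the HOG descriptors

        new_eye = rng.random((24, 48))             # one new patch to classify
        state = clf.predict([hog_features(new_eye)])[0]
        print("eye state:", "open" if state == 1 else "closed")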

  1. Towards Real-Time Facial Landmark Detection in Depth Data Using Auxiliary Information

    Directory of Open Access Journals (Sweden)

    Connah Kendrick

    2018-06-01

    Full Text Available Modern facial motion capture systems employ a two-pronged approach for capturing and rendering facial motion. Visual data (2D) is used for tracking the facial features and predicting facial expression, whereas Depth (3D) data is used to build a series of expressions on 3D face models. An issue with modern research approaches is the use of a single data stream that provides little indication of the 3D facial structure. We compare and analyse the performance of Convolutional Neural Networks (CNNs) using visual, Depth and merged data to identify facial features in real-time using a Depth sensor. First, we review the facial landmarking algorithms and their datasets for Depth data. We address the limitation of the current datasets by introducing the Kinect One Expression Dataset (KOED). Then, we propose the use of CNNs for the single data stream and merged data streams for facial landmark detection. We contribute to existing work by performing a full evaluation on which streams are the most effective for the field of facial landmarking. Furthermore, we improve upon the existing work by extending neural networks to predict 3D landmarks in real-time with additional observations on the impact of using 2D landmarks as auxiliary information. We evaluate the performance by using Mean Square Error (MSE) and Mean Average Error (MAE). We observe that the single data stream predicts accurate facial landmarks on Depth data when auxiliary information is used to train the network. The codes and dataset used in this paper will be made available.
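
    As a generic illustration of landmark regression from a depth image (assumed shapes and landmark count, not the paper's network), the PyTorch sketch below maps a single-channel depth crop to a set of 3D landmark coordinates and evaluates it with the mean squared error used in such studies.

        import torch
        import torch.nn as nn

        N_LANDMARKS = 68  # assumed landmark count

        class DepthLandmarkNet(nn.Module):
            def __init__(self, n_landmarks=N_LANDMARKS):
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 96 -> 48
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 48 -> 24
                    nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # 24 -> 12
                    nn.AdaptiveAvgPool2d(1),
                )
                # Each landmark is an (x, y, z) point, hence 3 values per landmark.
                self.head = nn.Linear(128, n_landmarks * 3)

            def forward(self, depth):
                feats = self.backbone(depth).flatten(1)
                return self.head(feats).view(-1, N_LANDMARKS, 3)

        model = DepthLandmarkNet()
        depth_frames = torch.randn(2, 1, 96, 96)     # two fake normalized depth crops
        landmarks = model(depth_frames)              # (2, 68, 3) predicted 3D landmarks
        loss = nn.functional.mse_loss(landmarks, torch.zeros_like(landmarks))  # dummy target
        print(landmarks.shape, loss.item())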

  2. Face detection and facial feature localization using notch based templates

    International Nuclear Information System (INIS)

    Qayyum, U.

    2007-01-01

    We present real-time detection of faces from video with facial feature localization, together with an algorithm capable of differentiating between face and non-face patterns. The need for face detection and facial feature localization arises in various applications of computer vision, so a lot of research is dedicated to finding a real-time solution. The algorithm should remain simple enough to run in real time without compromising on the challenges encountered during the detection and localization phase, i.e. it should be invariant to scale, translation, and (±45°) rotation transformations. The proposed system contains two parts: visual guidance and face/non-face classification. The visual guidance phase uses the fusion of motion and color cues to classify skin color. A morphological operation with a union-structure component-labeling algorithm extracts contiguous regions. Scale normalization is applied by a nearest-neighbor interpolation method to avoid the effect of different scales. Using the width-to-height aspect ratio, a Region of Interest (ROI) is obtained and then passed to the face/non-face classifier. Notch (Gaussian) based templates/filters are used to find circular darker regions in the ROI. The classified face region is handed over to the facial feature localization phase, which uses a YCbCr eyes/lips mask for facial feature localization. The empirical results show an accuracy of 90% for five different videos with 1000 face/non-face patterns, and the processing rate of the proposed algorithm is 15 frames/sec. (author)
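
    The visual-guidance stage described above (skin-colour classification followed by morphological cleaning and connected-component labelling) can be sketched roughly with OpenCV as below. The YCrCb thresholds are commonly cited skin ranges and the file name and aspect-ratio test are illustrative assumptions, not the author's values.

        import cv2
        import numpy as np

        frame = cv2.imread("frame.jpg")                       # hypothetical BGR video frame
        ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)

        # Rough YCrCb skin-colour range; tune for the footage at hand.
        skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

        # Morphological opening/closing to clean the mask before labelling.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        skin = cv2.morphologyEx(skin, cv2.MORPH_OPEN, kernel)
        skin = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)

        # Connected-component labelling; keep regions with face-like proportions.
        n, labels, stats, _ = cv2.connectedComponentsWithStats(skin)
        for i in range(1, n):                                  # label 0 is background
            x, y, w, h, area = stats[i]
            if area > 2000 and 0.6 < w / h < 1.4:              # crude face aspect-ratio test
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

        cv2.imwrite("candidates.jpg", frame)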

  3. An Innovative Serious Game for the Detection and Rehabilitation of Oral-Facial Malfunction in Children: A Pilot Study

    Directory of Open Access Journals (Sweden)

    Nuria Máximo-Bocanegra

    2017-01-01

    Full Text Available We present SONRIE, a serious game based on virtual reality and comprising four games which act as tests where children must perform gestures in order to progress through several screens (raising eyebrows, kissing, blowing, and smiling). The aims of this pilot study were to evaluate the overall acceptance of the game and the capacity for detecting anomalies in motor execution and, lastly, to establish motor control benchmarks in orofacial muscles. For this purpose, tests were performed in school settings with 96 typically developing children aged between five and seven years. Regarding the different games, in the kissing game, children were able to execute the correct movement at six years of age and a precise movement at the age of seven years. Blowing actions required more maturity, starting from the age of five and achievable by the age of six years. The smiling game was performed correctly among all ages evaluated. The percentage of children who mastered this gesture with both precision and speed was progressively greater, reaching more than 75% of values above 100 for children aged seven years. SONRIE was accepted enthusiastically among the population under study. In the future, SONRIE could be used as a tool for detecting difficulties regarding self-control and for influencing performance and the ability to produce fine-tuned facial movements.

  4. Recognizing Facial Expressions Automatically from Video

    Science.gov (United States)

    Shan, Caifeng; Braspenning, Ralph

    Facial expressions, resulting from movements of the facial muscles, are the changes in the face that occur in response to a person's internal emotional states, intentions, or social communications. There is a considerable history associated with the study of facial expressions. Darwin [22], who argued that all mammals show emotions reliably in their faces, was the first to describe in detail the specific facial expressions associated with emotions in animals and humans. Since then, facial expression analysis has been an area of great research interest for behavioral scientists [27]. Psychological studies [48, 3] suggest that facial expressions, as the main mode of nonverbal communication, play a vital role in human face-to-face communication. For illustration, we show some examples of facial expressions in Fig. 1.

  5. Facial Curvature Detects and Explicates Ethnic Differences in Effects of Prenatal Alcohol Exposure.

    Science.gov (United States)

    Suttie, Michael; Wetherill, Leah; Jacobson, Sandra W; Jacobson, Joseph L; Hoyme, H Eugene; Sowell, Elizabeth R; Coles, Claire; Wozniak, Jeffrey R; Riley, Edward P; Jones, Kenneth L; Foroud, Tatiana; Hammond, Peter

    2017-08-01

    Our objective is to help clinicians detect the facial effects of prenatal alcohol exposure by developing computer-based tools for screening facial form. All 415 individuals considered were evaluated by expert dysmorphologists and categorized as (i) healthy control (HC), (ii) fetal alcohol syndrome (FAS), or (iii) heavily prenatally alcohol exposed (HE) but not clinically diagnosable as FAS; 3D facial photographs were used to build models of facial form to support discrimination studies. Surface curvature-based delineations of facial form were introduced. (i) Facial growth in FAS, HE, and control subgroups is similar in both cohorts. (ii) Cohort consistency of agreement between clinical diagnosis and HC-FAS facial form classification is lower for midline facial regions and higher for nonmidline regions. (iii) Specific HC-FAS differences within and between the cohorts include: for HC, a smoother philtrum in Cape Coloured individuals; for FAS, a smoother philtrum in Caucasians; for control-FAS philtrum difference, greater homogeneity in Caucasians; for control-FAS face difference, greater homogeneity in Cape Coloured individuals. (iv) Curvature changes in facial profile induced by prenatal alcohol exposure are more homogeneous and greater in Cape Coloureds than in Caucasians. (v) The Caucasian HE subset divides into clusters with control-like and FAS-like facial dysmorphism. The Cape Coloured HE subset is similarly divided for nonmidline facial regions but not clearly for midline structures. (vi) The Cape Coloured HE subset with control-like facial dysmorphism shows orbital hypertelorism. Facial curvature assists the recognition of the effects of prenatal alcohol exposure and helps explain why different facial regions result in inconsistent control-FAS discrimination rates in disparate ethnic groups. Heavy prenatal alcohol exposure can give rise to orbital hypertelorism, supporting a long-standing suggestion that prenatal alcohol exposure at a particular time causes

  6. [Changes in facial nerve function, morphology and neurotrophic factor III expression following three types of facial nerve injury].

    Science.gov (United States)

    Zhang, Lili; Wang, Haibo; Fan, Zhaomin; Han, Yuechen; Xu, Lei; Zhang, Haiyan

    2011-01-01

    To study the changes in facial nerve function, morphology and neurotrophic factor III (NT-3) expression following three types of facial nerve injury. Changes in facial nerve function (in terms of blink reflex (BF), vibrissae movement (VM) and position of the nasal tip) were assessed in 45 rats in response to three types of facial nerve injury: partial section of the extratemporal segment (group one), partial section of the facial canal segment (group two) and complete transection of the facial canal segment (group three). All facial nerve specimens were taken from the lesion site and cut into two parts at the site of the lesion on the 1st, 7th and 21st post-surgery days (PSD). Changes in morphology and NT-3 expression were evaluated using the improved trichrome stain and immunohistochemistry techniques, respectively. Changes in facial nerve function: In group 1, all animals had no blink reflex (BF) and weak vibrissae movement (VM) at the 1st PSD; the blink reflex recovered partly in 80% of the rats and the vibrissae movement returned to normal in 40% of the rats at the 7th PSD; facial nerve function was almost normal in 60% of the rats at the 21st PSD. In group 2, all left facial nerves were paralyzed at the 1st PSD; the blink reflex partly recovered in 40% of the rats and the vibrissae movement was weak in 80% of the rats at the 7th PSD; the BF of 80% of the rats was almost normal and the VM of 40% of the rats completely recovered at the 21st PSD. In group 3, no recovery occurred at any time point. Changes in morphology: In group 1, the size of the nerve fibers differed in the facial canal segment and some myelin sheaths and axons had degenerated at the 7th PSD; fiber degeneration had turned into regeneration at the 21st PSD. In group 2, the morphologic changes were similar to those in group 1, although the degenerated fibers were more numerous and dispersed at the transection site at the 7th PSD; regeneration of nerve fibers occurred at the 21st PSD. In group 3, most of the fibers

  7. Comparison of hemihypoglossal nerve versus masseteric nerve transpositions in the rehabilitation of short-term facial paralysis using the Facial Clima evaluating system.

    Science.gov (United States)

    Hontanilla, Bernardo; Marré, Diego

    2012-11-01

    Masseteric and hypoglossal nerve transfers are reliable alternatives for reanimating short-term facial paralysis. To date, few studies exist in the literature comparing these techniques. This work presents a quantitative comparison of masseter-facial transposition versus hemihypoglossal facial transposition with a nerve graft using the Facial Clima system. Forty-six patients with complete unilateral facial paralysis underwent reanimation with either hemihypoglossal transposition with a nerve graft (group I, n = 25) or direct masseteric-facial coaptation (group II, n = 21). Commissural displacement and commissural contraction velocity were measured using the Facial Clima system. Postoperative intragroup commissural displacement and commissural contraction velocity means of the reanimated versus the normal side were first compared using a paired sample t test. Then, mean percentages of recovery of both parameters were compared between the groups using an independent sample t test. Onset of movement was also compared between the groups. Significant differences of mean commissural displacement and commissural contraction velocity between the reanimated side and the normal side were observed in group I but not in group II. Mean percentage of recovery of both parameters did not differ between the groups. Patients in group II showed a significantly faster onset of movement compared with those in group I (62 ± 4.6 days versus 136 ± 7.4 days, p = 0.013). Reanimation of short-term facial paralysis can be satisfactorily addressed by means of either hemihypoglossal transposition with a nerve graft or direct masseteric-facial coaptation. However, with the latter, better symmetry and a faster onset of movement are observed. In addition, masseteric nerve transfer avoids morbidity from nerve graft harvesting. Therapeutic, III.

  8. Reproducibility of the dynamics of facial expressions in unilateral facial palsy.

    Science.gov (United States)

    Alagha, M A; Ju, X; Morley, S; Ayoub, A

    2018-02-01

    The aim of this study was to assess the reproducibility of non-verbal facial expressions in unilateral facial paralysis using dynamic four-dimensional (4D) imaging. The Di4D system was used to record five facial expressions of 20 adult patients. The system captured 60 three-dimensional (3D) images per second; each facial expression took 3-4 seconds and was recorded in real time. Thus a set of 180 3D facial images was generated for each expression. The procedure was repeated after 30 min to assess the reproducibility of the expressions. A mathematical facial mesh consisting of thousands of quasi-point 'vertices' was conformed to the face in order to determine the morphological characteristics in a comprehensive manner. The vertices were tracked throughout the sequence of the 180 images. Five key 3D facial frames from each sequence of images were analyzed. Comparisons were made between the first and second capture of each facial expression to assess the reproducibility of facial movements. Corresponding images were aligned using partial Procrustes analysis, and the root mean square distance between them was calculated and analyzed statistically (paired Student t-test). Facial expressions of lip purse, cheek puff, and raising of the eyebrows were reproducible. Facial expressions of maximum smile and forceful eye closure were not reproducible. The limited coordination of various groups of facial muscles contributed to the lack of reproducibility of these facial expressions. 4D imaging is a useful clinical tool for the assessment of facial expressions. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  9. Time perception and dynamics of facial expressions of emotions.

    Directory of Open Access Journals (Sweden)

    Sophie L Fayolle

    Full Text Available Two experiments were run to examine the effects of dynamic displays of facial expressions of emotions on time judgments. The participants were given a temporal bisection task with emotional facial expressions presented in a dynamic or a static display. Two emotional facial expressions and a neutral expression were tested and compared. Each of the emotional expressions had the same affective valence (unpleasant), but one was high-arousing (expressing anger) and the other low-arousing (expressing sadness). Our results showed that time judgments are highly sensitive to movements in facial expressions and the emotions expressed. Indeed, longer perceived durations were found in response to the dynamic faces and the high-arousing emotional expressions compared to the static faces and low-arousing expressions. In addition, the facial movements amplified the effect of emotions on time perception. Dynamic facial expressions are thus interesting tools for examining variations in temporal judgments in different social contexts.

  10. Analysis of facial expressions in parkinson's disease through video-based automatic methods.

    Science.gov (United States)

    Bandini, Andrea; Orlandi, Silvia; Escalante, Hugo Jair; Giovannelli, Fabio; Cincotta, Massimo; Reyes-Garcia, Carlos A; Vanni, Paola; Zaccara, Gaetano; Manfredi, Claudia

    2017-04-01

    The automatic analysis of facial expressions is an evolving field that finds several clinical applications. One of these applications is the study of facial bradykinesia in Parkinson's disease (PD), which is a major motor sign of this neurodegenerative illness. Facial bradykinesia consists of the reduction/loss of facial movements and emotional facial expressions called hypomimia. In this work we propose an automatic method for studying facial expressions in PD patients relying on video-based methods. METHODS: 17 Parkinsonian patients and 17 healthy control subjects were asked to show basic facial expressions, upon request of the clinician and after the imitation of a visual cue on a screen. Through an existing face tracker, the Euclidean distance of the facial model from a neutral baseline was computed in order to quantify the changes in facial expressivity during the tasks. Moreover, an automatic facial expression recognition algorithm was trained in order to study how PD expressions differed from the standard expressions. Results show that control subjects reported on average higher distances than PD patients along the tasks. This confirms that control subjects show larger movements during both posed and imitated facial expressions. Moreover, our results demonstrate that anger and disgust are the two most impaired expressions in PD patients. Contactless video-based systems can be important techniques for analyzing facial expressions also in rehabilitation, in particular speech therapy, where patients could get a definite advantage from real-time feedback about the proper facial expressions/movements to perform. Copyright © 2017 Elsevier B.V. All rights reserved.
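
    The distance measure described above (per-frame Euclidean distance of a tracked facial model from a neutral baseline) reduces to a few lines of array arithmetic; the sketch below uses placeholder tracker output and assumes the first frame is neutral, which is an illustrative choice rather than the paper's protocol.

        import numpy as np

        rng = np.random.default_rng(1)
        landmarks = rng.random((300, 49, 2)) * 100    # 300 frames, 49 tracked 2D points (fake)
        neutral = landmarks[0]                        # assume the first frame is the neutral face

        # Mean distance of all points from their neutral position, frame by frame.
        per_frame = np.linalg.norm(landmarks - neutral, axis=2).mean(axis=1)

        print("peak expressivity (px):", per_frame.max())
        print("mean expressivity (px):", per_frame.mean())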

  11. Development of a Support Application and a Textbook for Practicing Facial Expression Detection for Students with Visual Impairment

    Science.gov (United States)

    Saito, Hirotaka; Ando, Akinobu; Itagaki, Shota; Kawada, Taku; Davis, Darold; Nagai, Nobuyuki

    2017-01-01

    Until now, when practicing facial expression recognition skills in the nonverbal communication areas of SST, judgment of facial expressions was not quantitative because the subjects of SST were assessed by teachers. We therefore investigated whether SST could be performed using facial expression detection devices that can quantitatively measure facial…

  12. Quantitative anatomical analysis of facial expression using a 3D motion capture system: Application to cosmetic surgery and facial recognition technology.

    Science.gov (United States)

    Lee, Jae-Gi; Jung, Su-Jin; Lee, Hyung-Jin; Seo, Jung-Hyuk; Choi, You-Jin; Bae, Hyun-Sook; Park, Jong-Tae; Kim, Hee-Jin

    2015-09-01

    The topography of the facial muscles differs between males and females and among individuals of the same gender. To explain the unique expressions that people can make, it is important to define the shapes of the muscle, their associations with the skin, and their relative functions. Three-dimensional (3D) motion-capture analysis, often used to study facial expression, was used in this study to identify characteristic skin movements in males and females when they made six representative basic expressions. The movements of 44 reflective markers (RMs) positioned on anatomical landmarks were measured. Their mean displacement was large in males [ranging from 14.31 mm (fear) to 41.15 mm (anger)], and 3.35-4.76 mm smaller in females [ranging from 9.55 mm (fear) to 37.80 mm (anger)]. The percentages of RMs involved in the ten highest mean maximum displacement values in making at least one expression were 47.6% in males and 61.9% in females. The movements of the RMs were larger in males than females but were more limited. Expanding our understanding of facial expression requires morphological studies of facial muscles and studies of related complex functionality. Conducting these together with quantitative analyses, as in the present study, will yield data valuable for medicine, dentistry, and engineering, for example, for surgical operations on facial regions, software for predicting changes in facial features and expressions after corrective surgery, and the development of face-mimicking robots. © 2015 Wiley Periodicals, Inc.
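
    The displacement statistic reported above (how far each reflective marker moves from its resting position) can be computed as sketched below; the trajectory array is synthetic and the marker count matches the 44 markers mentioned in the record, while everything else is an assumption.

        import numpy as np

        rng = np.random.default_rng(2)
        n_frames, n_markers = 500, 44
        trajectories = rng.random((n_frames, n_markers, 3)) * 50   # mm, placeholder mocap data
        rest = trajectories[0]                                     # neutral-face frame

        displacement = np.linalg.norm(trajectories - rest, axis=2)  # (frames, markers)
        max_disp = displacement.max(axis=0)                         # per-marker maximum

        top10 = np.argsort(max_disp)[::-1][:10]
        print("markers with the 10 largest maximum displacements:", top10)
        print("mean displacement across markers (mm):", float(displacement.mean()))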

  13. Doubly Sparse Relevance Vector Machine for Continuous Facial Behavior Estimation

    NARCIS (Netherlands)

    Kaltwang, Sebastian; Todorovic, Sinisa; Pantic, Maja

    Certain inner feelings and physiological states like pain are subjective states that cannot be directly measured, but can be estimated from spontaneous facial expressions. Since they are typically characterized by subtle movements of facial parts, analysis of the facial details is required. To this

  14. Facial Expression Recognition for Traumatic Brain Injured Patients

    DEFF Research Database (Denmark)

    Ilyas, Chaudhary Muhammad Aqdus; Nasrollahi, Kamal; Moeslund, Thomas B.

    2018-01-01

    In this paper, we investigate the issues associated with facial expression recognition of Traumatic Brain Insured (TBI) patients in a realistic scenario. These patients have restricted or limited muscle movements with reduced facial expressions along with non-cooperative behavior, impaired reason...

  15. Appraisals Generate Specific Configurations of Facial Muscle Movements in a Gambling Task: Evidence for the Component Process Model of Emotion.

    Science.gov (United States)

    Gentsch, Kornelia; Grandjean, Didier; Scherer, Klaus R

    2015-01-01

    facial expressions dynamically over time, immediately after an event is perceived. In addition, our results provide further indications for the chronography of appraisal-driven facial movements and the underlying cognitive processes.

  16. Can a novel smartphone application detect periodic limb movements?

    Science.gov (United States)

    Bhopi, Rashmi; Nagy, David; Erichsen, Daniel

    2012-01-01

    Periodic limb movements (PLMs) are repetitive, stereotypical and unconscious movements, typically of the legs, that occur in sleep and are associated with several sleep disorders. The gold standard for detecting PLMs is overnight electromyography which, although highly sensitive and specific, is time and labour consuming. The current generation of smart phones is equipped with tri-axial accelerometers that record movement. To develop a smart phone application that can detect PLMs remotely. A leg movement sensing application (LMSA) was programmed in iOS 5x and incorporated into an iPhone 4S (Apple INC.). A healthy adult male subject underwent simultaneous EMG and LMSA measurements of voluntary stereotypical leg movements. The mean number of leg movements recorded by EMG and by the LMSA was compared. A total of 403 leg movements were scored by EMG of which the LMSA recorded 392 (97%). There was no statistical difference in mean number of leg movements recorded between the two modalities (p = 0.3). These preliminary results indicate that a smart phone application is able to accurately detect leg movements outside of the hospital environment and may be a useful tool for screening and follow up of patients with PLMs.
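
    A rough sketch of how tri-axial accelerometer samples could be screened for discrete leg movements is shown below; the signal is synthetic, and the sampling rate, amplitude threshold, and minimum spacing are illustrative choices, not the application's parameters.

        import numpy as np
        from scipy.signal import find_peaks

        fs = 50                                        # assumed sampling rate (Hz)
        t = np.arange(0, 120, 1 / fs)                  # two minutes of data
        accel = np.random.default_rng(3).normal(0, 0.02, (t.size, 3))   # resting noise
        for onset in range(10, 110, 25):               # inject four synthetic "kicks"
            accel[int(onset * fs):int(onset * fs) + fs, 2] += 0.5

        magnitude = np.linalg.norm(accel, axis=1)      # combine the three axes
        # Enforce a minimum spacing between detections so one kick is counted once.
        peaks, _ = find_peaks(magnitude, height=0.3, distance=int(4 * fs))
        print("leg movements detected:", len(peaks))
        print("onset times (s):", (peaks / fs).round(1))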

  17. Common cues to emotion in the dynamic facial expressions of speech and song.

    Science.gov (United States)

    Livingstone, Steven R; Thompson, William F; Wanderley, Marcelo M; Palmer, Caroline

    2015-01-01

    Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech-song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotion and speech-song. Vocalists' emotional movements extended beyond vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotional identification in voice-only, face-only, and face-and-voice recordings. Emotion judgements for voice-only singing were poorly identified, yet were accurate for all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song, yet were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, yet highlight differences in perception and acoustic-motor production.

  18. Heartbeat Rate Measurement from Facial Video

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Irani, Ramin; Nasrollahi, Kamal

    2016-01-01

    Heartbeat Rate (HR) reveals a person’s health condition. This paper presents an effective system for measuring HR from facial videos acquired in a more realistic environment than the testing environment of current systems. The proposed method utilizes a facial feature point tracking method...... by combining a ‘Good feature to track’ and a ‘Supervised descent method’ in order to overcome the limitations of currently available facial video based HR measuring systems. Such limitations include, e.g., unrealistic restriction of the subject’s movement and artificial lighting during data capture. A face...
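
    One common way to turn a tracked facial point into a heart-rate estimate is to find the dominant frequency of its trajectory within the cardiac band; the sketch below does this on a synthetic 72 bpm trajectory and is only an illustration of the principle, not the paper's pipeline.

        import numpy as np

        fs = 30.0                                      # assumed camera frame rate (Hz)
        t = np.arange(0, 20, 1 / fs)                   # 20 s of tracking
        hr_hz = 72 / 60.0                              # simulate a 72 bpm pulse
        trajectory = (0.05 * np.sin(2 * np.pi * hr_hz * t)
                      + np.random.default_rng(4).normal(0, 0.02, t.size))

        signal = trajectory - trajectory.mean()
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

        band = (freqs >= 0.7) & (freqs <= 4.0)         # plausible heart-rate range
        dominant = freqs[band][np.argmax(spectrum[band])]
        print("estimated heart rate: %.1f bpm" % (dominant * 60))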

  19. Context-sensitive Dynamic Ordinal Regression for Intensity Estimation of Facial Action Units

    NARCIS (Netherlands)

    Rudovic, Ognjen; Pavlovic, Vladimir; Pantic, Maja

    2015-01-01

    Modeling intensity of facial action units from spontaneously displayed facial expressions is challenging mainly because of high variability in subject-specific facial expressiveness, head-movements, illumination changes, etc. These factors make the target problem highly context-sensitive. However,

  20. [Neurological disease and facial recognition].

    Science.gov (United States)

    Kawamura, Mitsuru; Sugimoto, Azusa; Kobayakawa, Mutsutaka; Tsuruya, Natsuko

    2012-07-01

    To discuss the neurological basis of facial recognition, we present our case reports of impaired recognition and a review of previous literature. First, we present a case of infarction and discuss prosopagnosia, which has had a large impact on face recognition research. From a study of patient symptoms, we assume that prosopagnosia may be caused by unilateral right occipitotemporal lesion and right cerebral dominance of facial recognition. Further, circumscribed lesion and degenerative disease may also cause progressive prosopagnosia. Apperceptive prosopagnosia is observed in patients with posterior cortical atrophy (PCA), pathologically considered as Alzheimer's disease, and associative prosopagnosia in frontotemporal lobar degeneration (FTLD). Second, we discuss face recognition as part of communication. Patients with Parkinson disease show social cognitive impairments, such as difficulty in facial expression recognition and deficits in theory of mind as detected by the reading the mind in the eyes test. Pathological and functional imaging studies indicate that social cognitive impairment in Parkinson disease is possibly related to damages in the amygdalae and surrounding limbic system. The social cognitive deficits can be observed in the early stages of Parkinson disease, and even in the prodromal stage, for example, patients with rapid eye movement (REM) sleep behavior disorder (RBD) show impairment in facial expression recognition. Further, patients with myotonic dystrophy type 1 (DM 1), which is a multisystem disease that mainly affects the muscles, show social cognitive impairment similar to that of Parkinson disease. Our previous study showed that facial expression recognition impairment of DM 1 patients is associated with lesion in the amygdalae and insulae. Our study results indicate that behaviors and personality traits in DM 1 patients, which are revealed by social cognitive impairment, are attributable to dysfunction of the limbic system.

  1. Analysis of Facial Expression by Taste Stimulation

    Science.gov (United States)

    Tobitani, Kensuke; Kato, Kunihito; Yamamoto, Kazuhiko

    In this study, we focused on the basic taste stimulation for the analysis of real facial expressions. We considered that the expressions caused by taste stimulation were unaffected by individuality or emotion, that is, such expressions were involuntary. We analyzed the movement of facial muscles by taste stimulation and compared real expressions with artificial expressions. From the result, we identified an obvious difference between real and artificial expressions. Thus, our method would be a new approach for facial expression recognition.

  2. The enlargement of geniculate fossa of facial nerve canal: a new CT finding of facial nerve canal fracture

    International Nuclear Information System (INIS)

    Gong Ruozhen; Li Yuhua; Gong Wuxian; Wu Lebin

    2006-01-01

    Objective: To discuss the value of enlargement of the geniculate fossa of the facial nerve canal in the diagnosis of facial nerve canal fracture. Methods: Thirty patients with facial nerve canal fracture underwent axial and coronal CT scanning. The correlation between the fracture and enlargement of the geniculate fossa of the facial nerve canal was analyzed, and the ability of axial and coronal imaging to show the fracture and the enlargement was compared. Results: Fracture of the geniculate fossa of the facial nerve canal was found at operation in all 30 patients, whereas the fracture was detected on CT in 18 patients. Enlargement of the geniculate ganglion of the facial nerve was detected at operation in 30 patients, whereas enlargement of the fossa was found on CT in 28 cases. Both enlargement and fracture of the geniculate fossa of the facial nerve canal were detected on CT images in 18 patients; in 12 patients only the enlargement of the geniculate fossa was shown on CT. Conclusion: Enlargement of the geniculate fossa of the facial nerve canal is a useful finding in the diagnosis of fracture of the geniculate fossa in patients with facial paralysis, even when no fracture line is shown on CT images. (authors)

  3. Perceptual integration of kinematic components in the recognition of emotional facial expressions.

    Science.gov (United States)

    Chiovetto, Enrico; Curio, Cristóbal; Endres, Dominik; Giese, Martin

    2018-04-01

    According to a long-standing hypothesis in motor control, complex body motion is organized in terms of movement primitives, reducing massively the dimensionality of the underlying control problems. For body movements, this low-dimensional organization has been convincingly demonstrated by the learning of low-dimensional representations from kinematic and EMG data. In contrast, the effective dimensionality of dynamic facial expressions is unknown, and dominant analysis approaches have been based on heuristically defined facial "action units," which reflect contributions of individual face muscles. We determined the effective dimensionality of dynamic facial expressions by learning of a low-dimensional model from 11 facial expressions. We found an amazingly low dimensionality with only two movement primitives being sufficient to simulate these dynamic expressions with high accuracy. This low dimensionality is confirmed statistically, by Bayesian model comparison of models with different numbers of primitives, and by a psychophysical experiment that demonstrates that expressions, simulated with only two primitives, are indistinguishable from natural ones. In addition, we find statistically optimal integration of the emotion information specified by these primitives in visual perception. Taken together, our results indicate that facial expressions might be controlled by a very small number of independent control units, permitting very low-dimensional parametrization of the associated facial expression.

  4. Magnitude Squared of Coherence to Detect Imaginary Movement

    Directory of Open Access Journals (Sweden)

    Sady Antônio Santos Filho

    2009-01-01

    Full Text Available This work investigates the Magnitude Squared of Coherence (MSC) for detection of Event Related Potentials (ERPs) related to left-hand index finger movement. Initially, ERP presence was examined in different brain areas. To accomplish that, 20 EEG channels were used, positioned according to the 10–20 international system. The grand average, resulting from 10 normal subjects, showed, as expected, responses at frontal, central, and parietal areas, particularly evident at the central area (C3, C4, Cz). The MSC, applied to EEG signals related to movement imagination, detected a consistent response in frequencies around 0.3–1 Hz (delta band), mainly at the central area (C3, Cz, and C4). Differences among subjects in the ability to control imagination produced different detection performance. Some subjects needed up to 45 events for a detectable response, while for some others only 10 events proved sufficient. Some subjects also required two or three experimental sessions in order to achieve detectable responses. For one subject, response detection was not possible at all. However, due to brain plasticity, it is plausible to expect that training sessions (to practice movement imagination) would improve the signal-to-noise ratio and lead to better detection using MSC. Results are sufficiently encouraging to suggest further exploration of MSC for future BCI applications.
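
    The MSC statistic behind this kind of objective response detection can be written down compactly. The sketch below is a minimal Python illustration assuming stimulus-locked EEG epochs stored as a NumPy array; the epoch layout, the sampling rate, and the critical-value approximation for the no-response hypothesis are assumptions made for illustration, not details taken from this record.

```python
import numpy as np

def msc(epochs: np.ndarray, fs: float):
    """Magnitude Squared Coherence estimate per frequency bin.

    epochs: array of shape (M, N) -- M stimulus-locked EEG epochs of N samples each.
    Returns the frequency axis and the MSC estimate (values in [0, 1]) per bin.
    """
    M, N = epochs.shape
    Y = np.fft.rfft(epochs, axis=1)                 # spectrum of each epoch
    num = np.abs(Y.sum(axis=0)) ** 2                # coherent (phase-locked) power
    den = M * (np.abs(Y) ** 2).sum(axis=0)          # total power across epochs
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    return freqs, num / den

def detection_threshold(M: int, alpha: float = 0.05) -> float:
    """Approximate critical MSC value under the no-response hypothesis."""
    return 1.0 - alpha ** (1.0 / (M - 1))

# Hypothetical usage: 45 epochs of 2-second EEG sampled at 256 Hz from channel C3.
# freqs, k2 = msc(eeg_epochs, fs=256.0)
# delta_band = (freqs >= 0.3) & (freqs <= 1.0)
# detected = np.any(k2[delta_band] > detection_threshold(eeg_epochs.shape[0]))
```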

  5. Dynamic facial expression recognition based on geometric and texture features

    Science.gov (United States)

    Li, Ming; Wang, Zengfu

    2018-04-01

    Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method by using geometric and texture features. In our system, the facial landmark movements and texture variations upon pairwise images are used to perform the dynamic facial expression recognition tasks. For one facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integration of both geometric and texture features further enhances the representation of the facial expressions. Finally, Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method can achieve a competitive performance with other methods.
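
    As a rough illustration of the pairwise-frame idea described above, the sketch below (in Python, using scikit-learn) builds a feature vector from landmark displacements between the first frame and a later frame, concatenates it with a texture-difference descriptor, and feeds the result to a support vector machine. The landmark and texture extraction steps are hypothetical placeholders, not the authors' implementation, and the kernel and parameters are illustrative defaults.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def pairwise_features(landmarks_first, landmarks_t, texture_first, texture_t):
    """Combine geometric and texture cues for one (first frame, frame t) pair.

    landmarks_*: (n_points, 2) arrays of facial landmark coordinates.
    texture_*:   1-D texture descriptors per frame (e.g., an LBP histogram).
    """
    geometric = (np.asarray(landmarks_t) - np.asarray(landmarks_first)).ravel()  # landmark movements
    texture = np.asarray(texture_t) - np.asarray(texture_first)                  # texture variation
    return np.concatenate([geometric, texture])

# Hypothetical training data: one feature vector per (sequence, frame) pair,
# labelled with the expression of that sequence.
# X = np.stack([pairwise_features(...), ...])
# y = np.array(labels)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
# clf.fit(X, y)
# prediction = clf.predict(X_test)
```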

  6. Objectively measuring pain using facial expression: is the technology finally ready?

    Science.gov (United States)

    Dawes, Thomas Richard; Eden-Green, Ben; Rosten, Claire; Giles, Julian; Governo, Ricardo; Marcelline, Francesca; Nduka, Charles

    2018-03-01

    Currently, clinicians observe pain-related behaviors and use patient self-report measures in order to determine pain severity. This paper reviews the evidence when facial expression is used as a measure of pain. We review the literature reporting the relevance of facial expression as a diagnostic measure, which facial movements are indicative of pain, and whether such movements can be reliably used to measure pain. We conclude that although the technology for objective pain measurement is not yet ready for use in clinical settings, the potential benefits to patients in improved pain management, combined with the advances being made in sensor technology and artificial intelligence, provide opportunities for research and innovation.

  7. Automatic Change Detection to Facial Expressions in Adolescents: Evidence from Visual Mismatch Negativity Responses

    Directory of Open Access Journals (Sweden)

    Tongran eLiu

    2016-03-01

    Full Text Available Adolescence is a critical period for the neurodevelopment of social-emotional processing, wherein the automatic detection of changes in facial expressions is crucial for the development of interpersonal communication. Two groups of participants (an adolescent group and an adult group) were recruited to complete an emotional oddball task featuring one happy and one fearful condition. Event-related potentials (ERPs) were measured via electroencephalography (EEG) and electrooculography (EOG) recording to detect visual mismatch negativity (vMMN) with regard to the automatic detection of changes in facial expressions between the two age groups. The current findings demonstrated that the adolescent group featured more negative vMMN amplitudes than the adult group in the fronto-central region during the 120-200 ms interval. During the time window of 370-450 ms, only the adult group showed better automatic processing of fearful faces than of happy faces. The present study indicated that adolescents possess stronger automatic detection of changes in emotional expression relative to adults, and it sheds light on the neurodevelopment of automatic processes concerning social-emotional information.

  8. Microfluidic Transducer for Detecting Nanomechanical Movements of Bacteria

    Science.gov (United States)

    Kara, Vural; Ekinci, Kamil

    2017-11-01

    Various nanomechanical movements of bacteria are currently being explored as an indication of bacterial viability. Most notably, these movements have been observed to subside rapidly and dramatically when the bacteria are exposed to an effective antibiotic. This suggests that monitoring bacterial movements, if performed with high fidelity, can offer a path to various clinical microbiological applications, including antibiotic susceptibility tests. Here, we introduce a robust and sensitive microfluidic transduction technique for detecting the nanomechanical movements of bacteria. The technique is based on measuring the electrical fluctuations in a microchannel which the bacteria populate. These electrical fluctuations are caused by the swimming of motile, planktonic bacteria and random oscillations of surface-immobilized bacteria. The technique provides enough sensitivity to detect even the slightest movements of a single cell and lends itself to smooth integration with other microfluidic methods and devices; it may eventually be used for rapid antibiotic susceptibility testing. We acknowledge support from Boston University Office of Technology Development, Boston University College of Engineering, NIH (1R03AI126168-01) and The Wallace H. Coulter Foundation.

  9. Robustness of movement detection techniques from motor execution

    DEFF Research Database (Denmark)

    Aliakbaryhosseinabadi, Susan; Jiang, Ning; Petrini, Laura

    2015-01-01

    subjects completed a set of movement executions prior to and following the oddball paradigm. The locality preserving projection followed by the linear discriminant analysis (LPP-LDA) and the matched-filter (MF) technique were applied offline for detection of movement. Results show that LPP...

  10. Effect of endoscopic brow lift on contractures and synkinesis of the facial muscles in patients with a regenerated postparalytic facial nerve syndrome.

    Science.gov (United States)

    Bran, Gregor M; Börjesson, Pontus K E; Boahene, Kofi D; Gosepath, Jan; Lohuis, Peter J F M

    2014-01-01

    Delayed recovery after facial palsy results in aberrant nerve regeneration with symptomatic movement disorders, summarized as the postparalytic facial nerve syndrome. The authors present an alternative surgical approach for improvement of periocular movement disorders in patients with postparalytic facial nerve syndrome. The authors proposed that endoscopic brow lift leads to an improvement of periocular movement disorders by reducing pathologically raised levels of afferent input. Eleven patients (seven women and four men) with a mean age of 54 years (range, 33 to 85 years) and with postparalytic facial nerve syndrome underwent endoscopic brow lift under general anesthesia. Patients' preoperative condition was compared with their postoperative condition using a retrospective questionnaire. Subjects were also asked to compare the therapeutic effectiveness of endoscopic brow lift and botulinum toxin type A. Mean follow-up was 52 months (range, 22 to 83 months). No intraoperative or postoperative complications occurred. During follow-up, patients and physicians observed an improvement of periorbital contractures and oculofacial synkinesis. Scores on quality of life improved significantly after endoscopic brow lift. Best results were obtained when botulinum toxin type A was adjoined after the endoscopic brow lift. Patients described a cumulative therapeutic effect. These findings suggest endoscopic brow lift as a promising additional treatment modality for the treatment of periocular postparalytic facial nerve syndrome-related symptoms, leading to an improved quality of life. Even though further prospective investigation is needed, a combination of endoscopic brow lift and postsurgical botulinum toxin type A administration could become a new therapeutic standard.

  11. Outcome of different facial nerve reconstruction techniques

    Directory of Open Access Journals (Sweden)

    Aboshanif Mohamed

    Full Text Available Abstract Introduction: There is no technique of facial nerve reconstruction that guarantees facial function recovery up to grade III. Objective: To evaluate the efficacy and safety of different facial nerve reconstruction techniques. Methods: Facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in another 11 patients). All patients had facial function House-Brackmann (HB) grade VI, either caused by trauma or after resection of a tumor. All patients were submitted to a primary nerve reconstruction except 7 patients, where late reconstruction was performed two weeks to four months after the initial surgery. The follow-up period was at least two years. Results: For facial nerve interpositional graft technique, we achieved facial function HB grade III in eight patients and grade IV in three patients. Synkinesis was found in eight patients, and facial contracture with synkinesis was found in two patients. In regards to hypoglossal-facial nerve transfer using different modifications, we achieved facial function HB grade III in nine patients and grade IV in two patients. Facial contracture, synkinesis and tongue atrophy were found in three patients, and synkinesis was found in five patients. However, those who had primary direct facial-hypoglossal end-to-side anastomosis showed the best result without any neurological deficit. Conclusion: Among various reanimation techniques, when indicated, direct end-to-side facial-hypoglossal anastomosis through epineural suturing is the most effective technique with excellent outcomes for facial reanimation and preservation of tongue movement, particularly when performed as a primary technique.

  12. [Facial nerve injuries cause changes in central nervous system microglial cells].

    Science.gov (United States)

    Cerón, Jeimmy; Troncoso, Julieta

    2016-12-01

    Our research group has described both morphological and electrophysiological changes in motor cortex pyramidal neurons associated with contralateral facial nerve injury in rats. However, little is known about those neural changes, which occur together with changes in surrounding glial cells. To characterize the effect of the unilateral facial nerve injury on microglial proliferation and activation in the primary motor cortex. We performed immunohistochemical experiments in order to detect microglial cells in brain tissue of rats with unilateral facial nerve lesion sacrificed at different times after the injury. We caused two types of lesions: reversible (by crushing, which allows functional recovery), and irreversible (by section, which produces permanent paralysis). We compared the brain tissues of control animals (without surgical intervention) and sham-operated animals with animals with lesions sacrificed at 1, 3, 7, 21 or 35 days after the injury. In primary motor cortex, the microglial cells of irreversibly injured animals showed proliferation and activation between three and seven days post-lesion. The proliferation of microglial cells in reversibly injured animals was significant only three days after the lesion. Facial nerve injury causes changes in microglial cells in the primary motor cortex. These modifications could be involved in the generation of morphological and electrophysiological changes previously described in the pyramidal neurons of primary motor cortex that command facial movements.

  13. Outcome of different facial nerve reconstruction techniques.

    Science.gov (United States)

    Mohamed, Aboshanif; Omi, Eigo; Honda, Kohei; Suzuki, Shinsuke; Ishikawa, Kazuo

    There is no technique of facial nerve reconstruction that guarantees facial function recovery up to grade III. To evaluate the efficacy and safety of different facial nerve reconstruction techniques. Facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in another 11 patients). All patients had facial function House-Brackmann (HB) grade VI, either caused by trauma or after resection of a tumor. All patients were submitted to a primary nerve reconstruction except 7 patients, where late reconstruction was performed two weeks to four months after the initial surgery. The follow-up period was at least two years. For facial nerve interpositional graft technique, we achieved facial function HB grade III in eight patients and grade IV in three patients. Synkinesis was found in eight patients, and facial contracture with synkinesis was found in two patients. In regards to hypoglossal-facial nerve transfer using different modifications, we achieved facial function HB grade III in nine patients and grade IV in two patients. Facial contracture, synkinesis and tongue atrophy were found in three patients, and synkinesis was found in five patients. However, those who had primary direct facial-hypoglossal end-to-side anastomosis showed the best result without any neurological deficit. Among various reanimation techniques, when indicated, direct end-to-side facial-hypoglossal anastomosis through epineural suturing is the most effective technique with excellent outcomes for facial reanimation and preservation of tongue movement, particularly when performed as a primary technique. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  14. Moving to continuous facial expression space using the MPEG-4 facial definition parameter (FDP) set

    Science.gov (United States)

    Karpouzis, Kostas; Tsapatsoulis, Nicolas; Kollias, Stefanos D.

    2000-06-01

    Research in facial expression has concluded that at least six emotions, conveyed by human faces, are universally associated with distinct expressions. Sadness, anger, joy, fear, disgust and surprise are categories of expressions that are recognizable across cultures. In this work we form a relation between the description of the universal expressions and the MPEG-4 Facial Definition Parameter Set (FDP). We also investigate the relation between the movement of basic FDPs and the parameters that describe emotion-related words according to some classical psychological studies. In particular Whissel suggested that emotions are points in a space, which seem to occupy two dimensions: activation and evaluation. We show that some of the MPEG-4 Facial Animation Parameters (FAPs), approximated by the motion of the corresponding FDPs, can be combined by means of a fuzzy rule system to estimate the activation parameter. In this way variations of the six archetypal emotions can be achieved. Moreover, Plutchik concluded that emotion terms are unevenly distributed through the space defined by dimensions like Whissel's; instead they tend to form an approximately circular pattern, called 'emotion wheel,' modeled using an angular measure. The 'emotion wheel' can be defined as a reference for creating intermediate expressions from the universal ones, by interpolating the movement of dominant FDP points between neighboring basic expressions. By exploiting the relation between the movement of the basic FDP point and the activation and angular parameters we can model more emotions than the primary ones and achieve efficient recognition in video sequences.
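
    As a purely illustrative reading of the 'emotion wheel' interpolation step, the sketch below blends the animation-parameter profiles of two neighbouring archetypal expressions according to an angular position between them. The angles and profile values are made-up placeholders, and the fuzzy rule system for estimating activation described in the record is not reproduced here.

```python
import numpy as np

def interpolate_expression(angle, angle_a, profile_a, angle_b, profile_b):
    """Blend two archetypal animation-parameter profiles by angular position.

    angle, angle_a, angle_b: positions on the emotion wheel in degrees, with
    angle lying between angle_a and angle_b.
    profile_a, profile_b: arrays of parameter values (e.g., FAP amplitudes)
    for the two neighbouring archetypal expressions.
    """
    t = (angle - angle_a) / (angle_b - angle_a)   # 0 at expression A, 1 at expression B
    return (1.0 - t) * np.asarray(profile_a) + t * np.asarray(profile_b)

# Hypothetical example: a term lying a third of the way from "joy" to "surprise".
joy = np.array([0.8, 0.1, 0.6])        # placeholder parameter amplitudes
surprise = np.array([0.2, 0.9, 0.7])
blended = interpolate_expression(40.0, 10.0, joy, 100.0, surprise)
```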

  15. Enhancing facial aesthetics with muscle retraining exercises-a review.

    Science.gov (United States)

    D'souza, Raina; Kini, Ashwini; D'souza, Henston; Shetty, Nitin; Shetty, Omkar

    2014-08-01

    Facial attractiveness plays a key role in social interaction. 'Smile' is not only a single category of facial behaviour but also the expression of frank joy, produced by the combined contraction of the muscles involved. When a patient visits the dental clinic for aesthetic reasons, the dentist considers not only the chief complaint but also the overall harmony of the face. This article describes muscle retraining exercises to achieve control over facial movements and improve facial appearance, which may be considered following any type of dental rehabilitation. Muscle conditioning, training and strengthening through daily exercises will help to counterbalance the effects of aging.

  16. Regional Brain Responses Are Biased Toward Infant Facial Expressions Compared to Adult Facial Expressions in Nulliparous Women.

    Science.gov (United States)

    Li, Bingbing; Cheng, Gang; Zhang, Dajun; Wei, Dongtao; Qiao, Lei; Wang, Xiangpeng; Che, Xianwei

    2016-01-01

    Recent neuroimaging studies suggest that neutral infant faces compared to neutral adult faces elicit greater activity in brain areas associated with face processing, attention, empathic response, reward, and movement. However, whether infant facial expressions evoke larger brain responses than adult facial expressions remains unclear. Here, we performed event-related functional magnetic resonance imaging in nulliparous women while they were presented with images of matched unfamiliar infant and adult facial expressions (happy, neutral, and uncomfortable/sad) in a pseudo-randomized order. We found that the bilateral fusiform and right lingual gyrus were overall more activated during the presentation of infant facial expressions compared to adult facial expressions. Uncomfortable infant faces compared to sad adult faces evoked greater activation in the bilateral fusiform gyrus, precentral gyrus, postcentral gyrus, posterior cingulate cortex-thalamus, and precuneus. Neutral infant faces activated larger brain responses in the left fusiform gyrus compared to neutral adult faces. Happy infant faces compared to happy adult faces elicited larger responses in areas of the brain associated with emotion and reward processing at a more liberal statistical threshold. Taken together, these results suggest that regional brain responses are biased toward infant facial expressions compared to adult facial expressions among nulliparous women, and this bias may be modulated by individual differences in Interest-In-Infants and perspective taking ability.

  17. Outcomes of Direct Facial-to-Hypoglossal Neurorrhaphy with Parotid Release.

    Science.gov (United States)

    Jacobson, Joel; Rihani, Jordan; Lin, Karen; Miller, Phillip J; Roland, J Thomas

    2011-01-01

    Lesions of the temporal bone and cerebellopontine angle and their management can result in facial nerve paralysis. When the nerve deficit is not amenable to primary end-to-end repair or interpositional grafting, nerve transposition can be used to accomplish the goals of restoring facial tone, symmetry, and voluntary movement. The most widely used nerve transposition is the hypoglossal-facial nerve anastomosis, of which there are several technical variations. Previously we described a technique of single end-to-side anastomosis using intratemporal facial nerve mobilization and parotid release. This study further characterizes the results of this technique with a larger patient cohort and longer-term follow-up. The design of this study is a retrospective chart review and the setting is an academic tertiary care referral center. Twenty-one patients with facial nerve paralysis from proximal nerve injury at the cerebellopontine angle underwent facial-hypoglossal neurorrhaphy with parotid release. Outcomes were assessed using the Repaired Facial Nerve Recovery Scale, questionnaires, and patient photographs. Of the 21 patients, 18 were successfully reinnervated to a score of a B or C on the recovery scale, which equates to good oral and ocular sphincter closure with minimal mass movement. The mean duration of paralysis between injury and repair was 12.1 months (range 0 to 36 months) with a mean follow-up of 55 months. There were no cases of hemiglossal atrophy, paralysis, or subjective dysfunction. Direct facial-hypoglossal neurorrhaphy with parotid release achieved functional reinnervation and a good clinical outcome in the majority of patients, with minimal lingual morbidity. This technique is a viable option for facial reanimation and should be strongly considered as a surgical option for the paralyzed face.

  18. Operant conditioning of facial displays of pain.

    Science.gov (United States)

    Kunz, Miriam; Rainville, Pierre; Lautenbacher, Stefan

    2011-06-01

    The operant model of chronic pain posits that nonverbal pain behavior, such as facial expressions, is sensitive to reinforcement, but experimental evidence supporting this assumption is sparse. The aim of the present study was to investigate in a healthy population a) whether facial pain behavior can indeed be operantly conditioned using a discriminative reinforcement schedule to increase and decrease facial pain behavior and b) to what extent these changes affect pain experience indexed by self-ratings. In the experimental group (n = 29), the participants were reinforced every time that they showed pain-indicative facial behavior (up-conditioning) or a neutral expression (down-conditioning) in response to painful heat stimulation. Once facial pain behavior was successfully up- or down-conditioned, respectively (which occurred in 72% of participants), facial pain displays and self-report ratings were assessed. In addition, a control group (n = 11) was used that was yoked to the reinforcement plans of the experimental group. During the conditioning phases, reinforcement led to significant changes in facial pain behavior in the majority of the experimental group, whereas self-report ratings changed comparatively little (p > .136). Fine-grained analyses of facial muscle movements revealed a similar picture. Furthermore, the decline in facial pain displays (as observed during down-conditioning) strongly predicted changes in pain ratings (R(2) = 0.329). These results suggest that a) facial pain displays are sensitive to reinforcement and b) that changes in facial pain displays can affect self-report ratings.

  19. Mapping and Manipulating Facial Expression

    Science.gov (United States)

    Theobald, Barry-John; Matthews, Iain; Mangini, Michael; Spies, Jeffrey R.; Brick, Timothy R.; Cohn, Jeffrey F.; Boker, Steven M.

    2009-01-01

    Nonverbal visual cues accompany speech to supplement the meaning of spoken words, signify emotional state, indicate position in discourse, and provide back-channel feedback. This visual information includes head movements, facial expressions and body gestures. In this article we describe techniques for manipulating both verbal and nonverbal facial…

  20. Probabilistic BPRRC: Robust Change Detection against Illumination Changes and Background Movements

    Science.gov (United States)

    Yokoi, Kentaro

    This paper presents Probabilistic Bi-polar Radial Reach Correlation (PrBPRRC), a change detection method that is robust against illumination changes and background movements. Most of the traditional change detection methods are robust against either illumination changes or background movements; BPRRC is one of the illumination-robust change detection methods. We introduce a probabilistic background texture model into BPRRC and add the robustness against background movements including foreground invasions such as moving cars, walking people, swaying trees, and falling snow. We show the superiority of PrBPRRC in the environment with illumination changes and background movements by using three public datasets and one private dataset: ATON Highway data, Karlsruhe traffic sequence data, PETS 2007 data, and Walking-in-a-room data.

  1. A study on facial expressions recognition

    Science.gov (United States)

    Xu, Jingjing

    2017-09-01

    In terms of communication, postures and facial expressions of feelings such as happiness, anger, and sadness play important roles in conveying information. With the development of technology, a number of algorithms dealing with face alignment, face landmark detection, classification, facial landmark localization, and pose estimation have recently been put forward. However, many challenges and problems remain to be addressed. In this paper, several techniques are summarized and analyzed, all of which relate to facial expression recognition and pose handling: a pose-indexed multi-view method for face alignment, robust facial landmark detection under significant head pose and occlusion, partitioning of the input domain for classification, and robust statistical face frontalization.

  2. A glasses-type wearable device for monitoring the patterns of food intake and facial activity

    Science.gov (United States)

    Chung, Jungman; Chung, Jungmin; Oh, Wonjun; Yoo, Yongkyu; Lee, Won Gu; Bang, Hyunwoo

    2017-01-01

    Here we present a new method for automatic and objective monitoring of ingestive behaviors, in comparison with other facial activities, through load cells embedded in a pair of glasses, named GlasSense. There is a cyclic movement of the temporomandibular joint during mastication, typically activated by subtle contraction and relaxation of the temporalis muscle. However, such muscular signals are, in general, too weak to sense without amplification or an electromyographic analysis. To detect these oscillatory facial signals without the use of an obtrusive device, we incorporated a load cell into each hinge, used as a lever mechanism on both sides of the glasses. Thus, the signal measured at the load cells can capture the force amplified mechanically by the hinge. We demonstrated a proof-of-concept validation of the amplification by differentiating the force signals between the hinge and the temple. Pattern recognition was applied to extract statistical features and classify featured behavioral patterns, such as natural head movement, chewing, talking, and winking. The overall results showed that the average F1 score of the classification was about 94.0% and the accuracy was above 89%. We believe this approach will be helpful for designing a non-intrusive and unobtrusive eyewear-based ingestive behavior monitoring system.
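
    One plausible reading of the pattern-recognition step is to slice the two load-cell channels into fixed-length windows, compute simple statistical features per window, and train a standard classifier on labelled activities. The sketch below follows that reading; the window features, the classifier choice, and its parameters are assumptions for illustration rather than the authors' published pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window: np.ndarray) -> np.ndarray:
    """Statistical features for one window of shape (n_samples, 2 channels)."""
    feats = []
    for ch in range(window.shape[1]):
        x = window[:, ch]
        spectrum = np.abs(np.fft.rfft(x - x.mean()))
        # mean level, variability, range, and dominant frequency bin per channel
        feats += [x.mean(), x.std(), x.max() - x.min(), float(spectrum.argmax())]
    return np.array(feats, dtype=float)

# Hypothetical labelled windows: chewing, talking, wink, head movement, etc.
# X = np.stack([window_features(w) for w in windows])
# y = np.array(labels)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
# clf.fit(X, y)
# predicted_activity = clf.predict(X_test)
```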

  3. Facial expressions of emotion are not culturally universal.

    Science.gov (United States)

    Jack, Rachael E; Garrod, Oliver G B; Yu, Hui; Caldara, Roberto; Schyns, Philippe G

    2012-05-08

    Since Darwin's seminal works, the universality of facial expressions of emotion has remained one of the longest standing debates in the biological and social sciences. Briefly stated, the universality hypothesis claims that all humans communicate six basic internal emotional states (happy, surprise, fear, disgust, anger, and sad) using the same facial movements by virtue of their biological and evolutionary origins [Susskind JM, et al. (2008) Nat Neurosci 11:843-850]. Here, we refute this assumed universality. Using a unique computer graphics platform that combines generative grammars [Chomsky N (1965) MIT Press, Cambridge, MA] with visual perception, we accessed the mind's eye of 30 Western and Eastern culture individuals and reconstructed their mental representations of the six basic facial expressions of emotion. Cross-cultural comparisons of the mental representations challenge universality on two separate counts. First, whereas Westerners represent each of the six basic emotions with a distinct set of facial movements common to the group, Easterners do not. Second, Easterners represent emotional intensity with distinctive dynamic eye activity. By refuting the long-standing universality hypothesis, our data highlight the powerful influence of culture on shaping basic behaviors once considered biologically hardwired. Consequently, our data open a unique nature-nurture debate across broad fields from evolutionary psychology and social neuroscience to social networking via digital avatars.

  4. MR imaging of the intraparotid facial nerve

    International Nuclear Information System (INIS)

    Kurihara, Hiroaki; Iwasawa, Tae; Yoshida, Tetsuo; Furukawa, Masaki

    1996-01-01

    Using a 1.5 T MR imaging system, seven normal volunteers and six patients with parotid tumors were studied and their intraparotid facial nerves were imaged directly. The findings were evaluated on T1-weighted axial, sagittal, and oblique images. The facial nerve appeared relatively hypointense within the high-signal parotid parenchyma, and the main trunks of the facial nerves were observed directly in all cases examined. Their main divisions were detected in all the volunteers and in 5 of 6 patients imaged obliquely. Because the course of the facial nerve varies between individuals, the oblique scan planes were determined individually to depict this course directly. To verify our observations, surgical findings of the facial nerve were compared with the MR images. (author)

  5. Shared Gaussian Process Latent Variable Model for Multi-view Facial Expression Recognition

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    Facial-expression data often appear in multiple views either due to head-movements or the camera position. Existing methods for multi-view facial expression recognition perform classification of the target expressions either by using classifiers learned separately for each view or by using a single

  6. Sad Facial Expressions Increase Choice Blindness

    Directory of Open Access Journals (Sweden)

    Yajie Wang

    2018-01-01

    Full Text Available Previous studies have discovered a fascinating phenomenon known as choice blindness—individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  7. Sad Facial Expressions Increase Choice Blindness.

    Science.gov (United States)

    Wang, Yajie; Zhao, Song; Zhang, Zhijie; Feng, Wenfeng

    2017-01-01

    Previous studies have discovered a fascinating phenomenon known as choice blindness-individuals fail to detect mismatches between the face they choose and the face replaced by the experimenter. Although previous studies have reported a couple of factors that can modulate the magnitude of choice blindness, the potential effect of facial expression on choice blindness has not yet been explored. Using faces with sad and neutral expressions (Experiment 1) and faces with happy and neutral expressions (Experiment 2) in the classic choice blindness paradigm, the present study investigated the effects of facial expressions on choice blindness. The results showed that the detection rate was significantly lower on sad faces than neutral faces, whereas no significant difference was observed between happy faces and neutral faces. The exploratory analysis of verbal reports found that participants who reported less facial features for sad (as compared to neutral) expressions also tended to show a lower detection rate of sad (as compared to neutral) faces. These findings indicated that sad facial expressions increased choice blindness, which might have resulted from inhibition of further processing of the detailed facial features by the less attractive sad expressions (as compared to neutral expressions).

  8. Facial expressions as a model to test the role of the sensorimotor system in the visual perception of the actions.

    Science.gov (United States)

    Mele, Sonia; Ghirardi, Valentina; Craighero, Laila

    2017-12-01

    A long-standing debate concerns whether the sensorimotor coding carried out during observation of transitive actions reflects the low-level movement implementation details or the movement goals. By contrast, phonemes and emotional facial expressions are intransitive actions that do not fall into this debate. The investigation of phoneme discrimination has proven to be a good model to demonstrate that the sensorimotor system plays a role in understanding actions presented acoustically. In the present study, we adapted the experimental paradigms already used in phoneme discrimination during face posture manipulation to the discrimination of emotional facial expressions. We submitted participants to a lower or an upper face posture manipulation while they performed a four-alternative labelling task on pictures randomly taken from four morphed continua between two emotional facial expressions. The results showed that the implementation of low-level movement details influences the discrimination of ambiguous facial expressions that differ in the specific involvement of those movement details. These findings indicate that facial expression discrimination is a good model to test the role of the sensorimotor system in the perception of visually presented actions.

  9. Fully Automatic Recognition of the Temporal Phases of Facial Actions

    NARCIS (Netherlands)

    Valstar, M.F.; Pantic, Maja

    Past work on automatic analysis of facial expressions has focused mostly on detecting prototypic expressions of basic emotions like happiness and anger. The method proposed here enables the detection of a much larger range of facial behavior by recognizing facial muscle actions [action units (AUs)

  10. Spontaneous and posed facial expression in Parkinson's disease.

    Science.gov (United States)

    Smith, M C; Smith, M K; Ellgring, H

    1996-09-01

    Spontaneous and posed emotional facial expressions in individuals with Parkinson's disease (PD, n = 12) were compared with those of healthy age-matched controls (n = 12). The intensity and amount of facial expression in PD patients were expected to be reduced for spontaneous but not posed expressions. Emotional stimuli were video clips selected from films, 2-5 min in duration, designed to elicit feelings of happiness, sadness, fear, disgust, or anger. Facial movements were coded using Ekman and Friesen's (1978) Facial Action Coding System (FACS). In addition, participants rated their emotional experience on 9-point Likert scales. The PD group showed significantly less overall facial reactivity than did controls when viewing the films. The predicted Group X Condition (spontaneous vs. posed) interaction effect on smile intensity was found when PD participants with more severe disease were compared with those with milder disease and with controls. In contrast, ratings of emotional experience were similar for both groups. Depression was positively associated with emotion rating but not with measures of facial activity. Spontaneous facial expression appears to be selectively affected in PD, whereas posed expression and emotional experience remain relatively intact.

  11. Fetal movement detection: comparison of the Toitu actograph with ultrasound from 20 weeks gestation.

    Science.gov (United States)

    DiPietro, J A; Costigan, K A; Pressman, E K

    1999-01-01

    This study evaluates the validity of Doppler-detected fetal movement by a commercially available monitor and investigates whether characteristics of maternal body habitus and the intrauterine environment affect its performance. Fetal movement was evaluated in normal pregnancies using both ultrasound visualization and a fetal actocardiograph (Toitu MT320; Tofa Medical Inc., Malvern, PA). Data were collected for 32 min on 34 fetuses stratified by gestational age (20-25 weeks; 28-32 weeks; 35-39 weeks). Fetal and maternal characteristics were recorded. Comparisons between ultrasound-detected trunk and limb movements and actograph records were conducted based both on 10-s time intervals and on detection of individual movements. Time-based comparisons indicated agreement between ultrasound and actograph 94.7% of the time; this association rose to 98% when movements of less than 1 s duration were excluded. Individual movements observed on ultrasound were detected by the actograph 91% of the time, and 97% of the time when brief, isolated movements were excluded. The overall kappa value for agreement was 0.88. The actograph was reliable in detecting periods of quiescence as well as activity. These findings did not vary by gestational age. The number of movements detected by the actograph, but not the single-transducer ultrasound, significantly increased over gestation. Maternal age, parity, weight, height, or body mass index were not consistently associated with actograph validity. Characteristics of the uterine environment, including placenta location, fetal presentation, and amniotic fluid volume also did not affect results. The Toitu actograph accurately detects fetal movement and quiescence from as early as 20 weeks gestation and has utility in both clinical and research settings. Actographs are most useful for providing objective and quantifiable measures of fetal activity level, including number and duration of movements, while visualization through ultrasound is

  12. Facial expression system on video using widrow hoff

    Science.gov (United States)

    Jannah, M.; Zarlis, M.; Mawengkang, H.

    2018-03-01

    Facial expression recognition is an interesting research area. It relates human feelings to computer applications such as human-computer interaction, data compression, facial animation, and face detection from video. The purpose of this research is to create a facial expression system that captures images from a video camera. The system uses the Widrow-Hoff learning method to train and test images with an Adaptive Linear Neuron (ADALINE) approach. System performance is evaluated by two parameters, detection rate and false positive rate. The system's accuracy depends on good technique and on the face positions used for training and testing.
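
    The Widrow-Hoff (least-mean-squares) update at the heart of ADALINE is simple enough to sketch. The snippet below trains a single linear unit on flattened, normalised face-image vectors; the learning rate, number of epochs, and the one-vs-rest arrangement suggested in the comments are assumptions made for illustration, not details taken from this record.

```python
import numpy as np

def train_adaline(X, targets, lr=1e-4, epochs=50):
    """Widrow-Hoff (LMS) training of a single linear unit.

    X:       (n_samples, n_features) flattened, normalised face images.
    targets: (n_samples,) desired outputs, e.g. +1 for the target expression, -1 otherwise.
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, d in zip(X, targets):
            y = w @ x + b              # linear activation (no threshold during training)
            err = d - y                # Widrow-Hoff error term
            w += lr * err * x          # delta rule update
            b += lr * err
    return w, b

# Hypothetical one-vs-rest use: train one ADALINE per expression; at test time the
# unit with the largest linear output wins, and a threshold on that output trades
# off detection rate against false positive rate.
```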

  13. Discovering cultural differences (and similarities) in facial expressions of emotion.

    Science.gov (United States)

    Chen, Chaona; Jack, Rachael E

    2017-10-01

    Understanding the cultural commonalities and specificities of facial expressions of emotion remains a central goal of Psychology. However, recent progress has been stayed by dichotomous debates (e.g. nature versus nurture) that have created silos of empirical and theoretical knowledge. Now, an emerging interdisciplinary scientific culture is broadening the focus of research to provide a more unified and refined account of facial expressions within and across cultures. Specifically, data-driven approaches allow a wider, more objective exploration of face movement patterns that provide detailed information ontologies of their cultural commonalities and specificities. Similarly, a wider exploration of the social messages perceived from face movements diversifies knowledge of their functional roles (e.g. the 'fear' face used as a threat display). Together, these new approaches promise to diversify, deepen, and refine knowledge of facial expressions, and deliver the next major milestones for a functional theory of human social communication that is transferable to social robotics. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.

  14. The improvement of movement and speech during rapid eye movement sleep behaviour disorder in multiple system atrophy.

    Science.gov (United States)

    De Cock, Valérie Cochen; Debs, Rachel; Oudiette, Delphine; Leu, Smaranda; Radji, Fatai; Tiberge, Michel; Yu, Huan; Bayard, Sophie; Roze, Emmanuel; Vidailhet, Marie; Dauvilliers, Yves; Rascol, Olivier; Arnulf, Isabelle

    2011-03-01

    Multiple system atrophy is an atypical parkinsonism characterized by severe motor disabilities that are poorly levodopa responsive. Most patients develop rapid eye movement sleep behaviour disorder. Because parkinsonism is absent during rapid eye movement sleep behaviour disorder in patients with Parkinson's disease, we studied the movements of patients with multiple system atrophy during rapid eye movement sleep. Forty-nine non-demented patients with multiple system atrophy and 49 patients with idiopathic Parkinson's disease were interviewed along with their 98 bed partners using a structured questionnaire. They rated the quality of movements, vocal and facial expressions during rapid eye movement sleep behaviour disorder as better than, equal to or worse than the same activities in an awake state. Sleep and movements were monitored using video-polysomnography in 22/49 patients with multiple system atrophy and in 19/49 patients with Parkinson's disease. These recordings were analysed for the presence of parkinsonism and cerebellar syndrome during rapid eye movement sleep movements. Clinical rapid eye movement sleep behaviour disorder was observed in 43/49 (88%) patients with multiple system atrophy. Reports from the 31/43 bed partners who were able to evaluate movements during sleep indicate that 81% of the patients showed some form of improvement during rapid eye movement sleep behaviour disorder. These included improved movement (73% of patients: faster, 67%; stronger, 52%; and smoother, 26%), improved speech (59% of patients: louder, 55%; more intelligible, 17%; and better articulated, 36%) and normalized facial expression (50% of patients). The rate of improvement was higher in Parkinson's disease than in multiple system atrophy, but no further difference was observed between the two forms of multiple system atrophy (predominant parkinsonism versus cerebellar syndrome). Video-monitored movements during rapid eye movement sleep in patients with multiple system

  15. Detection of movement intention using EEG in a human-robot interaction environment

    Directory of Open Access Journals (Sweden)

    Ernesto Pablo Lana

    Full Text Available Introduction: This paper presents a detection method for upper limb movement intention as part of a brain-machine interface using EEG signals, whose final goal is to assist disabled or vulnerable people with activities of daily living. Methods: EEG signals were recorded from six naïve healthy volunteers while they performed a motor task. Every volunteer remained in an acoustically isolated recording room. The robot was placed in front of the volunteers so that it seemed to be a mirror of their right arm, emulating a brain-machine interface environment. The volunteers were seated in an armchair throughout the experiment, outside the reaching area of the robot, to guarantee safety. Three conditions were studied: observation, execution, and imagery of right-arm flexion and extension movements paced by an anthropomorphic manipulator robot. The detector of movement intention uses the spectral F test to discriminate between conditions, using as features the desynchronization patterns found in the volunteers. Using a detector provides an objective method to acknowledge the occurrence of movement intention. Results: When four realizations of the task were used, detection rates ranging from 53 to 97% were found in five of the volunteers when the movement was executed, in three of them when the movement was imagined, and in two of them when the movement was observed. Conclusions: The detection rates for movement observation raise the question of how visual feedback may affect the performance of a working brain-machine interface, posing another challenge for the upcoming interface implementation. Future developments will focus on improving feature extraction and detection accuracy for movement intention using EEG data.
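
    One common way to frame a spectral F test for discriminating two conditions (for example, movement-related epochs versus rest) is as a ratio of averaged power spectra compared against an F distribution whose degrees of freedom depend on the numbers of epochs. The sketch below follows that generic formulation; it is an illustrative reconstruction rather than the authors' exact detector, and the epoch counts, frequency band, and significance level are placeholders.

```python
import numpy as np
from scipy.stats import f as f_dist

def spectral_f_test(epochs_a, epochs_b, fs, alpha=0.05):
    """Compare the average spectral power of two sets of EEG epochs per frequency bin.

    epochs_a: (Ma, N) epochs from condition A (e.g., movement intention).
    epochs_b: (Mb, N) epochs from condition B (e.g., rest).
    Returns the frequency axis, the F statistic, and a boolean significance mask.
    """
    Ma, N = epochs_a.shape
    Mb, _ = epochs_b.shape
    Pa = (np.abs(np.fft.rfft(epochs_a, axis=1)) ** 2).mean(axis=0)
    Pb = (np.abs(np.fft.rfft(epochs_b, axis=1)) ** 2).mean(axis=0)
    stat = Pa / Pb                                   # ratio of mean powers per bin
    lo = f_dist.ppf(alpha / 2, 2 * Ma, 2 * Mb)       # two-sided thresholds, so a power
    hi = f_dist.ppf(1 - alpha / 2, 2 * Ma, 2 * Mb)   # decrease (desynchronization) also flags
    freqs = np.fft.rfftfreq(N, d=1.0 / fs)
    return freqs, stat, (stat < lo) | (stat > hi)
```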

  16. Greater perceptual sensitivity to happy facial expression.

    Science.gov (United States)

    Maher, Stephen; Ekstrom, Tor; Chen, Yue

    2014-01-01

    Perception of subtle facial expressions is essential for social functioning; yet it is unclear if human perceptual sensitivities differ in detecting varying types of facial emotions. Evidence diverges as to whether salient negative versus positive emotions (such as sadness versus happiness) are preferentially processed. Here, we measured perceptual thresholds for the detection of four types of emotion in faces--happiness, fear, anger, and sadness--using psychophysical methods. We also evaluated the association of the perceptual performances with facial morphological changes between neutral and respective emotion types. Human observers were highly sensitive to happiness compared with the other emotional expressions. Further, this heightened perceptual sensitivity to happy expressions can be attributed largely to the emotion-induced morphological change of a particular facial feature (end-lip raise).

  17. Pick on someone your own size: the detection of threatening facial expressions posed by both child and adult models.

    Science.gov (United States)

    LoBue, Vanessa; Matthews, Kaleigh; Harvey, Teresa; Thrasher, Cat

    2014-02-01

    For decades, researchers have documented a bias for the rapid detection of angry faces in adult, child, and even infant participants. However, despite the age of the participant, the facial stimuli used in all of these experiments were schematic drawings or photographs of adult faces. The current research is the first to examine the detection of both child and adult emotional facial expressions. In our study, 3- to 5-year-old children and adults detected angry, sad, and happy faces among neutral distracters. The depicted faces were of adults or of other children. As in previous work, children detected angry faces more quickly than happy and neutral faces overall, and they tended to detect the faces of other children more quickly than the faces of adults. Adults also detected angry faces more quickly than happy and sad faces even when the faces depicted child models. The results are discussed in terms of theoretical implications for the development of a bias for threat in detection. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. The review and results of different methods for facial recognition

    Science.gov (United States)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement because it can operate without the cooperation of the people being detected. Hence, facial recognition can be applied to defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method that achieves more accurate localization on specific databases; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and images with large head poses; (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  19. Detection of movement artifact in recorded pulse oximeter saturation.

    Science.gov (United States)

    Poets, C F; Stebbens, V A

    1997-10-01

    Movement artifact (MA) must be detected when analysing recordings of pulse oximeter saturation (SpO2). Visual analysis of individual pulse waveforms is the safest, but also the most tedious, method for this purpose. We wanted to test the reliability of a computer algorithm (Edentec Motion Annotation System), based on a comparison between pulse and heart rate, for MA detection. Ten 12-h recordings of SpO2, pulse waveforms and heart rate from ten preterm infants were analysed for the presence of MA on the pulse waveform signal. These data were used to determine the sensitivity and specificity of the computer algorithm, and of the oximeter itself, in detecting MA. Recordings were divided into segments of 2.5 s duration to compare the movement identification methods. Of the segments, 31% +/- 6% (mean +/- SD) contained MA. The computer algorithm identified 95% +/- 3% of these segments, the pulse oximeter only 18% +/- 11%. Specificity was 85% +/- 4% and 99% +/- 0%, respectively. For a substantial proportion of the time during which SpO2 appeared low, the pulse waveform signal showed MA, leaving a significant potential for erroneous identification of hypoxaemia. Recordings of SpO2 do not allow a reliable identification of MA. Without additional information about movement artifact, a significant proportion of pulse oximeter recording time may be regarded as demonstrating hypoxaemia which, in fact, simply reflects poor measurement conditions. The computer algorithm used in this study identified periods of movement artifact reliably.
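
    The principle evaluated here, flagging a segment as movement artifact when the oximeter-derived pulse rate disagrees with the ECG-derived heart rate, can be sketched in a few lines. The 2.5-s segmentation follows the study, but the disagreement tolerance below is an arbitrary illustrative value, not the threshold used by the commercial system.

```python
import numpy as np

def flag_movement_artifact(pulse_rate, heart_rate, tolerance_bpm=10.0):
    """Flag 2.5-s segments where the pulse rate and the ECG heart rate disagree.

    pulse_rate, heart_rate: arrays of per-segment rates in beats per minute.
    Returns a boolean array, True where the segment is treated as artifact.
    """
    pulse_rate = np.asarray(pulse_rate, dtype=float)
    heart_rate = np.asarray(heart_rate, dtype=float)
    return np.abs(pulse_rate - heart_rate) > tolerance_bpm

# Hypothetical usage: exclude flagged segments before scoring desaturations.
# artifact = flag_movement_artifact(pulse_rate_segments, heart_rate_segments)
# valid_spo2 = spo2_segments[~artifact]
```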

  20. Association of impaired facial affect recognition with basic facial and visual processing deficits in schizophrenia.

    Science.gov (United States)

    Norton, Daniel; McBain, Ryan; Holt, Daphne J; Ongur, Dost; Chen, Yue

    2009-06-15

    Impaired emotion recognition has been reported in schizophrenia, yet the nature of this impairment is not completely understood. Recognition of facial emotion depends on processing affective and nonaffective facial signals, as well as basic visual attributes. We examined whether and how poor facial emotion recognition in schizophrenia is related to basic visual processing and nonaffective face recognition. Schizophrenia patients (n = 32) and healthy control subjects (n = 29) performed emotion discrimination, identity discrimination, and visual contrast detection tasks, where the emotionality, distinctiveness of identity, or visual contrast was systematically manipulated. Subjects determined which of two presentations in a trial contained the target: the emotional face for emotion discrimination, a specific individual for identity discrimination, and a sinusoidal grating for contrast detection. Patients had significantly higher thresholds (worse performance) than control subjects for discriminating both fearful and happy faces. Furthermore, patients' poor performance in fear discrimination was predicted by performance in visual detection and face identity discrimination. Schizophrenia patients require greater emotional signal strength to discriminate fearful or happy face images from neutral ones. Deficient emotion recognition in schizophrenia does not appear to be determined solely by affective processing but is also linked to the processing of basic visual and facial information.

  1. High-intensity facial nerve lesions on T2-weighted images in chronic persistent facial nerve palsy

    Energy Technology Data Exchange (ETDEWEB)

    Kinoshita, T. [Dept. of Radiology, Sendai City Hospital, Sendai (Japan); Dept. of Radiology, Tottori Univ. (Japan); Ishii, K. [Dept. of Radiology, Sendai City Hospital, Sendai (Japan); Okitsu, T. [Dept. of Otolaryngology, Sendai City Hospital (Japan); Ogawa, T. [Dept. of Radiology, Tottori Univ. (Japan); Okudera, T. [Dept. of Radiology, Research Inst. of Brain and Blood Vessels-Akita, Akita (Japan)

    2001-05-01

    Our aim was to assess the value of MRI in detecting irreversibly paralysed facial nerves. We examined 95 consecutive patients with a facial nerve palsy (14 with a persistent palsy, and 81 with good recovery), using a 1.0 T unit, with T2-weighted and contrast-enhanced T1-weighted images. The geniculate ganglion and tympanic segment gave high signal on T2-weighted images in the chronic stage of persistent palsy, but not in acute palsy. The enhancement pattern of the facial nerve in chronic persistent facial nerve palsy was similar to that in acute palsy with good recovery. These findings suggest that T2-weighted MRI can be used to show severely damaged facial nerves. (orig.)

  2. Facial reanimation with masseteric nerve: babysitter or permanent procedure? Preliminary results.

    Science.gov (United States)

    Faria, Jose Carlos Marques; Scopel, Gean Paulo; Ferreira, Marcus Castro

    2010-01-01

    The authors are presenting a series of 10 cases of complete unilateral facial paralysis submitted to (I) end-to-end microsurgical coaptation of the masseteric branch of the trigeminal nerve and distal branches of the paralyzed facial nerve, and (II) cross-face sural nerve graft. The ages of the patients ranged from 5 to 63 years (mean: 44.1 years), and 8 (80%) of the patients were females. The duration of paralysis was no longer than 18 months (mean: 9.7 months). Follow-up varied from 6 to 18 months (mean: 12.6 months). Initial voluntary facial movements were observed between 3 and 6 months postoperatively (mean: 4.3 months). All patients were able to produce the appearance of a smile when asked to clench their teeth. Comparing the definition of the nasolabial fold and the degree of movement of the modiolus on both sides of the face, the voluntary smile was considered symmetrical in 8 cases. Recovery of the capacity to blink spontaneously was not observed. However, 8 patients were able to reduce or suspend the application of artificial tears. The authors suggest consideration of masseteric-facial nerve coaptation, whether temporary (baby-sitter) or permanent, as the principal alternative for reconstruction of facial paralysis due to irreversible nerve lesion with less than 18 months of duration.

  3. Dissociation between facial and bodily expressions in emotion recognition: A case study.

    Science.gov (United States)

    Leiva, Samanta; Margulis, Laura; Micciulli, Andrea; Ferreres, Aldo

    2017-12-21

    Existing single-case studies have reported deficit in recognizing basic emotions through facial expression and unaffected performance with body expressions, but not the opposite pattern. The aim of this paper is to present a case study with impaired emotion recognition through body expressions and intact performance with facial expressions. In this single-case study we assessed a 30-year-old patient with autism spectrum disorder, without intellectual disability, and a healthy control group (n = 30) with four tasks of basic and complex emotion recognition through face and body movements, and two non-emotional control tasks. To analyze the dissociation between facial and body expressions, we used Crawford and Garthwaite's operational criteria, and we compared the patient and the control group performance with a modified one-tailed t-test designed specifically for single-case studies. There were no statistically significant differences between the patient's and the control group's performances on the non-emotional body movement task or the facial perception task. For both kinds of emotions (basic and complex) when the patient's performance was compared to the control group's, statistically significant differences were only observed for the recognition of body expressions. There were no significant differences between the patient's and the control group's correct answers for emotional facial stimuli. Our results showed a profile of impaired emotion recognition through body expressions and intact performance with facial expressions. This is the first case study that describes the existence of this kind of dissociation pattern between facial and body expressions of basic and complex emotions.

  4. MASS MOVEMENTS' DETECTION IN HIRISE IMAGES OF THE NORTH POLE OF MARS

    Directory of Open Access Journals (Sweden)

    L. Fanara

    2016-06-01

    We are investigating change detection techniques to automatically detect mass movements at the steep north polar scarps of Mars, in order to improve our understanding of these dynamic processes. Here we focus on movements of blocks specifically. The precise detection of such small changes requires an accurate co-registration of the images, which is achieved by ortho-rectifying them using High Resolution Imaging Science Experiment (HiRISE) Digital Terrain Models (DTMs). Moreover, we deal with the challenge of deriving the true shape of the moved blocks. In a next step, these results are combined with findings based on HiRISE DTMs from different points in time in order to estimate the volume of mass movements.
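
    The abstract outlines the pipeline (ortho-rectification for co-registration, then change detection on the co-registered images) without giving the detection step. The Python sketch below shows only the simplest possible differencing step under that assumption; the threshold and normalization are hypothetical, and the actual method presumably adds radiometric correction and shape analysis of the moved blocks.

        import numpy as np

        def detect_changes(ortho_before, ortho_after, threshold=0.2):
            """Naive pixel-wise change detection between two co-registered,
            ortho-rectified images with values normalized to [0, 1].
            Returns a boolean mask of candidate block movements."""
            diff = np.abs(ortho_after.astype(float) - ortho_before.astype(float))
            return diff > threshold

        # Toy example: two 4x4 "images" differing in a single pixel.
        before = np.zeros((4, 4)); after = before.copy(); after[2, 1] = 0.8
        print(detect_changes(before, after).sum())  # 1 changed pixel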

  5. Oral contraceptives may alter the detection of emotions in facial expressions.

    Science.gov (United States)

    Hamstra, Danielle A; De Rover, Mischa; De Rijk, Roel H; Van der Does, Willem

    2014-11-01

    A possible effect of oral contraceptives on emotion recognition was observed in the context of a clinical trial with a corticosteroid. Users of oral contraceptives detected significantly fewer facial expressions of sadness, anger and disgust than non-users. This was true for trial participants overall as well as for those randomized to placebo. Although it is uncertain whether this is an effect of oral contraceptives or a pre-existing difference, future studies on the effect of interventions should control for the effects of oral contraceptives on emotional and cognitive outcomes. Copyright © 2014 Elsevier B.V. and ECNP. All rights reserved.

  6. Facial Expression Emotion Detection for Real-Time Embedded Systems

    Directory of Open Access Journals (Sweden)

    Saeed Turabzadeh

    2018-01-01

    Recently, real-time facial expression recognition has attracted more and more research. In this study, an automatic real-time facial expression recognition system was built and tested. Firstly, the system and model were designed and tested in a MATLAB environment and then in a MATLAB Simulink environment, recognizing continuous facial expressions in real time at a rate of 1 frame per second on a desktop PC. They were evaluated on a public dataset, and the experimental results were promising. The dataset and labels used in this study were made from videos recorded twice from five participants while they watched a video. Secondly, in order to run in real time at a faster frame rate, the facial expression recognition system was built on a field-programmable gate array (FPGA). The camera sensor used in this work was a Digilent VmodCAM stereo camera module. The model was built on the Atlys™ Spartan-6 FPGA development board and can continuously perform emotional state recognition in real time at 30 frames per second. A graphical user interface was designed to display the participant's video and the two-dimensional predicted emotion labels at the same time.

  7. Quantitative analysis of the TMJ movement with a new mandibular movement tracking and simulation system

    International Nuclear Information System (INIS)

    Kim, Dae Seung; Hwang, Soon Jung; Choi, Soon Chul; Lee, Sam Sun; Heo, Min Suk; Heo, Kyung Hoe; Yi, Won Jin

    2008-01-01

    The purpose of this study was to develop a system for the measurement and simulation of TMJ movement and to analyze mandibular movement quantitatively. We devised patient-specific splints and a registration body for TMJ movement tracking. The mandibular movements of 12 subjects with facial deformity and 3 controls were obtained using an optical tracking system and the patient-specific splints. The mandibular part was manually segmented from the CT volume data of each patient. Three-dimensional surface models of the maxilla and the mandible were constructed using the segmented data. The continuous movement of the mandible with respect to the maxilla could be simulated by applying the recorded positions sequentially. Trajectories of the selected reference points were calculated during simulation and analyzed. The selected points were the most superior point of each condyle, the lower incisor point, and the pogonion. There were significant differences (P<0.05) between the control group and the pre-surgical group in the maximum vertical displacement of the left superior condyle, the lower incisor, and the pogonion. Differences in the maximum lengths of the right and the left condyle were 0.59 ± 0.30 mm in the pre-surgical group and 2.69 ± 2.63 mm in the control group, a significant difference (P<0.005). The maximum difference between the right and left lengths calculated during one cycle also differed significantly between the two groups (P<0.05). The significant differences in mandibular movements between the groups imply that facial deformity has an effect on the movement asymmetry of the mandible.
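
    The trajectory analysis described above (applying the recorded mandibular poses and measuring how far reference points such as the condyle or pogonion travel) can be sketched in a few lines of Python. The data layout and function name below are illustrative, not the study's actual software.

        import numpy as np

        def max_displacement(point, rotations, translations):
            """Apply a recorded sequence of rigid-body poses (3x3 rotation matrices and
            translation vectors) of the mandible to one reference point and return its
            maximum displacement (in mm) from the position at the first pose."""
            positions = np.array([R @ point + t for R, t in zip(rotations, translations)])
            return np.max(np.linalg.norm(positions - positions[0], axis=1))

        # Example: identity rotations, the mandible shifted 1.5 mm along z at one frame.
        R = [np.eye(3)] * 3
        t = [np.zeros(3), np.array([0.0, 0.0, 1.5]), np.zeros(3)]
        print(max_displacement(np.array([10.0, 0.0, 0.0]), R, t))  # 1.5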

  8. Meta-Analysis of the First Facial Expression Recognition Challenge

    NARCIS (Netherlands)

    Valstar, M.F.; Mehu, M.; Jiang, Bihan; Pantic, Maja; Scherer, K.

    Automatic facial expression recognition has been an active topic in computer science for over two decades, in particular facial action coding system action unit (AU) detection and classification of a number of discrete emotion states from facial expressive imagery. Standardization and comparability

  9. Fetal movement detection based on QRS amplitude variations in abdominal ECG recordings.

    Science.gov (United States)

    Rooijakkers, M J; de Lau, H; Rabotti, C; Oei, S G; Bergmans, J W M; Mischi, M

    2014-01-01

    Evaluation of fetal motility can give insight into fetal health, as a strong decrease can be seen as a precursor to fetal death. Typically, the assessment of fetal health by fetal movement detection relies on the maternal perception of fetal activity. The percentage of detected movements is strongly subject dependent and, with the undivided attention of the mother, varies between 37% and 88%. Various methods to assist in fetal movement detection exist, based on a wide spectrum of measurement techniques. However, these are typically unsuitable for ambulatory or long-term observation. In this paper, a novel method for fetal motion detection is presented based on amplitude and shape changes in the abdominally recorded fetal ECG. The proposed method has a sensitivity and specificity of 0.67 and 0.90, respectively, outperforming alternative fetal ECG-based methods from the literature.
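
    The record only states that detection is based on amplitude and shape changes of the abdominally recorded fetal QRS complexes. A minimal Python sketch of an amplitude-based proxy is given below; the relative-change threshold, the window length and the use of a local median baseline are assumptions for illustration.

        import numpy as np

        def detect_fetal_movement(qrs_amplitudes, rel_change=0.25, window=10):
            """Flag beats whose fetal QRS amplitude deviates from a local median baseline
            by more than a relative threshold, as a crude stand-in for the amplitude/shape
            analysis described in the record. Returns one boolean per beat."""
            amps = np.asarray(qrs_amplitudes, dtype=float)
            flags = np.zeros(len(amps), dtype=bool)
            for i in range(len(amps)):
                baseline = np.median(amps[max(0, i - window):i + 1])
                flags[i] = abs(amps[i] - baseline) > rel_change * baseline
            return flags

        print(detect_fetal_movement([10, 10.5, 9.8, 14.2, 10.1]))  # only the 14.2 beat is flagged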

  10. ExpNet: Landmark-Free, Deep, 3D Facial Expressions

    OpenAIRE

    Chang, Feng-Ju; Tran, Anh Tuan; Hassner, Tal; Masi, Iacopo; Nevatia, Ram; Medioni, Gerard

    2018-01-01

    We describe a deep learning based method for estimating 3D facial expression coefficients. Unlike previous work, our process does not rely on facial landmark detection methods as a proxy step. Recent methods have shown that a CNN can be trained to regress accurate and discriminative 3D morphable model (3DMM) representations directly from image intensities. By foregoing facial landmark detection, these methods were able to estimate shapes for occluded faces appearing in unprecedented in-the-...

  11. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality.

    Science.gov (United States)

    Mehta, Dhwani; Siddiqui, Mohammad Faridul Haque; Javaid, Ahmad Y

    2018-02-01

    Extensive possibilities of applications have made emotion recognition ineluctable and challenging in the field of computer science. The use of non-verbal cues such as gestures, body movement, and facial expressions convey the feeling and the feedback to the user. This discipline of Human-Computer Interaction places reliance on the algorithmic robustness and the sensitivity of the sensor to ameliorate the recognition. Sensors play a significant role in accurate detection by providing a very high-quality input, hence increasing the efficiency and the reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence in the machines. This paper presents a brief study of the various approaches and the techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting the emotions by facial expressions. Later, mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam.

  12. Predicting facial characteristics from complex polygenic variations

    DEFF Research Database (Denmark)

    Fagertun, Jens; Wolffhechel, Karin Marie Brandt; Pers, Tune

    2015-01-01

    Research into the importance of the human genome in the context of facial appearance is receiving increasing attention and has led to the detection of several Single Nucleotide Polymorphisms (SNPs) of importance. In this work we attempt a holistic approach predicting facial characteristics from...... genetic principal components across a population of 1,266 individuals. For this we perform a genome-wide association analysis to select a large number of SNPs linked to specific facial traits, recode these to genetic principal components and then use these principal components as predictors for facial...

  13. Botulinum toxin treatment for facial palsy: A systematic review.

    Science.gov (United States)

    Cooper, Lilli; Lui, Michael; Nduka, Charles

    2017-06-01

    Facial palsy may be complicated by ipsilateral synkinesis or contralateral hyperkinesis. Botulinum toxin is increasingly used in the management of facial palsy; however, the optimum dose, treatment interval, adjunct therapy and performance as compared with alternative treatments have not been well established. This study aimed to systematically review the evidence for the use of botulinum toxin in facial palsy. The Cochrane central register of controlled trials (CENTRAL), MEDLINE(R) (1946 to September 2015) and Embase Classic + Embase (1947 to September 2015) were searched for randomised studies using botulinum toxin in facial palsy. Forty-seven studies were identified, and three included. Their physical and patient-reported outcomes are described, and observations and cautions are discussed. Facial asymmetry has a strong correlation to subjective domains such as impairment in social interaction and perception of self-image and appearance. Botulinum toxin injections represent a minimally invasive technique that is helpful in restoring facial symmetry at rest and during movement in chronic, and potentially acute, facial palsy. Botulinum toxin in combination with physical therapy may be particularly helpful. Currently, there is a paucity of data; areas for further research are suggested. A strong body of evidence may allow botulinum toxin treatment to be nationally standardised and recommended in the management of facial palsy. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  14. Laboratory Validation of Inertial Body Sensors to Detect Cigarette Smoking Arm Movements

    Directory of Open Access Journals (Sweden)

    Bethany R. Raiff

    2014-02-01

    Cigarette smoking remains the leading cause of preventable death in the United States. Traditional in-clinic cessation interventions may fail to intervene and interrupt the rapid progression to relapse that typically occurs following a quit attempt. The ability to detect actual smoking behavior in real-time is a measurement challenge for health behavior research and intervention. The successful detection of real-time smoking through mobile health (mHealth) methodology has substantial implications for developing highly efficacious treatment interventions. The current study was aimed at further developing and testing the ability of inertial sensors to detect cigarette smoking arm movements among smokers. The current study involved four smokers who smoked six cigarettes each in a laboratory-based assessment. Participants were outfitted with four inertial body movement sensors on the arms, which were used to detect smoking events at two levels: the puff level and the cigarette level. Two different algorithms (Support Vector Machines (SVM) and Edge-Detection based learning) were trained to detect the features of arm movement sequences transmitted by the sensors that corresponded with each level. The results showed that performance of the SVM algorithm at the cigarette level exceeded detection at the individual puff level, with low rates of false positive puff detection. The current study is the second in a line of programmatic research demonstrating the proof-of-concept for sensor-based tracking of smoking, based on movements of the arm and wrist. This study demonstrates efficacy in a real-world clinical inpatient setting and is the first to provide a detection rate against direct observation, enabling calculation of true and false positive rates. The study results indicate that the approach performs very well with some participants, whereas some challenges remain with participants who generate more frequent non-smoking movements near the face. Future
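
    As a rough illustration of the sensor-plus-classifier approach described above (windowed inertial data, hand-crafted features, an SVM trained to separate puff from non-puff movements), here is a short Python sketch using scikit-learn. The window length, the mean/std feature set and the random placeholder data are assumptions; they are not the features or data of the study.

        import numpy as np
        from sklearn.svm import SVC

        def window_features(acc_xyz, fs=50, win_s=2.0):
            """Split a (n_samples, 3) wrist-accelerometer stream into fixed-length windows
            and compute simple per-axis mean/std features (illustrative feature set)."""
            step = int(fs * win_s)
            wins = [acc_xyz[i:i + step] for i in range(0, len(acc_xyz) - step + 1, step)]
            return np.array([np.r_[w.mean(axis=0), w.std(axis=0)] for w in wins])

        # Placeholder training data standing in for labeled puff / non-puff windows.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(40, 6))
        y = np.r_[np.ones(20), np.zeros(20)]
        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.predict(X[:3]))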

  15. Multiple dental anomalies accompany unilateral disturbances in abducens and facial nerves: A case report

    Directory of Open Access Journals (Sweden)

    Elham Talatahari

    2016-01-01

    This article describes the oral rehabilitation of an 8-year-old girl with extensively affected primary and permanent dentition. This report is unique in that distinct dental anomalies including enamel hypoplasia, irregular dentin formation, taurodontism, hypodontia and dens in dente accompany a unilateral disturbance of the abducens and facial nerves, which control lateral eye movement and facial expression, respectively. Keywords: enamel hypoplasia; irregular dentin formation; taurodontism; hypodontia; dens in dente; abducens and facial nerves

  16. Facial emotion recognition in Parkinson's disease: A review and new hypotheses

    Science.gov (United States)

    Vérin, Marc; Sauleau, Paul; Grandjean, Didier

    2018-01-01

    Abstract Parkinson's disease is a neurodegenerative disorder classically characterized by motor symptoms. Among them, hypomimia affects facial expressiveness and social communication and has a highly negative impact on patients' and relatives' quality of life. Patients also frequently experience nonmotor symptoms, including emotional‐processing impairments, leading to difficulty in recognizing emotions from faces. Aside from its theoretical importance, understanding the disruption of facial emotion recognition in PD is crucial for improving quality of life for both patients and caregivers, as this impairment is associated with heightened interpersonal difficulties. However, studies assessing abilities in recognizing facial emotions in PD still report contradictory outcomes. The origins of this inconsistency are unclear, and several questions (regarding the role of dopamine replacement therapy or the possible consequences of hypomimia) remain unanswered. We therefore undertook a fresh review of relevant articles focusing on facial emotion recognition in PD to deepen current understanding of this nonmotor feature, exploring multiple significant potential confounding factors, both clinical and methodological, and discussing probable pathophysiological mechanisms. This led us to examine recent proposals about the role of basal ganglia‐based circuits in emotion and to consider the involvement of facial mimicry in this deficit from the perspective of embodied simulation theory. We believe our findings will inform clinical practice and increase fundamental knowledge, particularly in relation to potential embodied emotion impairment in PD. © 2018 The Authors. Movement Disorders published by Wiley Periodicals, Inc. on behalf of International Parkinson and Movement Disorder Society. PMID:29473661

  17. Magnetic resonance imaging of facial nerve schwannoma.

    Science.gov (United States)

    Thompson, Andrew L; Aviv, Richard I; Chen, Joseph M; Nedzelski, Julian M; Yuen, Heng-Wai; Fox, Allan J; Bharatha, Aditya; Bartlett, Eric S; Symons, Sean P

    2009-12-01

    This study characterizes the magnetic resonance (MR) appearances of facial nerve schwannoma (FNS). We hypothesize that the extent of FNS demonstrated on MR will be greater compared to prior computed tomography studies, that geniculate involvement will be most common, and that cerebellar pontine angle (CPA) and internal auditory canal (IAC) involvement will more frequently result in sensorineural hearing loss (SNHL). Retrospective study. Clinical, pathologic, and enhanced MR imaging records of 30 patients with FNS were analyzed. Morphologic characteristics and extent of segmental facial nerve involvement were documented. Median age at initial imaging was 51 years (range, 28-76 years). Pathologic confirmation was obtained in 14 patients (47%), and the diagnosis reached in the remainder by identification of a mass, thickening, and enhancement along the course of the facial nerve. All 30 lesions involved two or more contiguous segments of the facial nerve, with 28 (93%) involving three or more segments. The median segments involved per lesion was 4, mean of 3.83. Geniculate involvement was most common, in 29 patients (97%). CPA (P = .001) and IAC (P = .02) involvement was significantly related to SNHL. Seventeen patients (57%) presented with facial nerve dysfunction, manifesting in 12 patients as facial nerve weakness or paralysis, and/or in eight with involuntary movements of the facial musculature. This study highlights the morphologic heterogeneity and typical multisegment involvement of FNS. Enhanced MR is the imaging modality of choice for FNS. The neuroradiologist must accurately diagnose and characterize this lesion, and thus facilitate optimal preoperative planning and counseling.

  18. Handedness of children determines preferential facial and eye movements related to hemispheric specialization

    Directory of Open Access Journals (Sweden)

    Carmina Arteaga

    2008-09-01

    BACKGROUND: Despite repeated demonstrations of asymmetries in several brain functions, the biological bases of such asymmetries have remained obscure. OBJECTIVE: To investigate the development of lateralized facial and eye movements evoked by hemispheric stimulation in right-handed and left-handed children. METHOD: Fifty children were tested according to handedness by means of four tests: I. Monosyllabic non-sense words; II. Trisyllabic sense words; III. Occlusion of the visual field by a black wall, with geometric objects presented to each hand separately; IV. Occlusion of the left eye and the temporal half of the right visual field with special goggles, after which the children were asked to assemble a three-piece puzzle; the same tasks were performed contralaterally. RESULTS: Right-handed children showed a higher percentage of eye movements to the right side when stimulated by trisyllabic words, while left-handed children showed higher percentages of eye movements to the left side when stimulated by the same type of words. Left-handed children spent more time recognizing non-sense monosyllabic words. Hand laterality correlated with trisyllabic word recognition performance. Age contributed to laterality development in nearly all cases, except in the second test. CONCLUSION: Eye and facial movements were found to be related to left- and right-hand preference and specialization for language development, as well as visual and haptic perception and recognition, in an age-dependent fashion in a complex process.

  19. Intraparotid facial nerve schwannoma: Report of two cases

    Directory of Open Access Journals (Sweden)

    Seyyed Basir Hashemi

    2008-07-01

    Introduction: Intraparotid facial nerve schwannoma is a rare tumor. Case report: In this article we present two cases of intraparotid facial nerve schwannoma. In both cases the tumor presented as an asymptomatic parotid mass mimicking pleomorphic adenoma. No preoperative facial nerve dysfunction was detected in either case. The diagnostic findings and surgical management are discussed in this paper.

  20. Unobtrusive multimodal emotion detection in adaptive interfaces: speech and facial expressions

    NARCIS (Netherlands)

    Truong, K.P.; Leeuwen, D.A. van; Neerincx, M.A.

    2007-01-01

    Two unobtrusive modalities for automatic emotion recognition are discussed: speech and facial expressions. First, an overview is given of emotion recognition studies based on a combination of speech and facial expressions. We will identify difficulties concerning data collection, data fusion, system

  1. Detection of patient movement during CBCT examination using video observation compared with an accelerometer-gyroscope tracking system.

    Science.gov (United States)

    Spin-Neto, Rubens; Matzen, Louise H; Schropp, Lars; Gotfredsen, Erik; Wenzel, Ann

    2017-02-01

    To compare video observation (VO) with a novel three-dimensional registration method, based on an accelerometer-gyroscope (AG) system, to detect patient movement during CBCT examination. The movements were further analyzed according to complexity and patient age. In 181 patients (118 females/63 males; average age 30 years, range 9-84 years), 206 CBCT examinations were performed and video-recorded during examination. At the same time, an AG was attached to the patient's head to track head position in three dimensions. Three observers scored patient movement (yes/no) by VO. The AG provided movement data on the x-, y- and z-axes. Thresholds for AG-based registration were defined at 0.5, 1, 2, 3 and 4 mm (movement distance). Movement detected by VO was compared with that registered by the AG, according to movement complexity (uniplanar vs multiplanar, as defined by the AG) and patient age (≤15, 16-30 and ≥31 years). According to the AG, movement ≥0.5 mm was present in 160 (77.7%) examinations. According to VO, movement was present in 46 (22.3%) examinations. One VO-detected movement was not registered by the AG. Overall, VO did not detect 71.9% of the movements registered by the AG at the 0.5-mm threshold. At a movement distance ≥4 mm, 20% of the AG-registered movements were not detected by VO. Multiplanar movements such as lateral head rotation (72.1%) and nodding/swallowing (52.6%) were more often detected by VO than uniplanar movements, such as head lifting (33.6%) and anteroposterior translation (35.6%), at the 0.5-mm threshold. The prevalence of patients who moved was highest in patients younger than 16 years (64.3% for VO and 92.3% for AG-based registration at the 0.5-mm threshold). AG-based movement registration resulted in a higher prevalence of patient movement during CBCT examination than VO-based registration. Also, AG-registered multiplanar movements were more frequently detected by VO than uniplanar movements.
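
    A compact Python sketch of the AG-style registration logic (head displacement from a reference position checked against the 0.5-4 mm thresholds, with a crude uniplanar/multiplanar split) is shown below. The axis-contribution rule used to call a movement multiplanar is an assumption for illustration; the study's exact criterion is not given in this record.

        import numpy as np

        def classify_movement(positions_mm, thresholds=(0.5, 1, 2, 3, 4)):
            """positions_mm: (n, 3) head positions in mm from the AG track.
            Returns (dict of threshold -> movement detected, multiplanar flag)."""
            disp = positions_mm - positions_mm[0]
            dist = np.linalg.norm(disp, axis=1)
            peak = disp[np.argmax(dist)]
            multiplanar = int(np.sum(np.abs(peak) > 0.5)) > 1  # >1 axis contributing >0.5 mm
            return {t: bool(dist.max() >= t) for t in thresholds}, multiplanar

        track = np.array([[0, 0, 0], [0.3, 0.1, 0.0], [1.2, 0.9, 0.1]])
        print(classify_movement(track))  # exceeds the 0.5 and 1 mm thresholds; flagged multiplanar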

  2. Detection of directional eye movements based on the electrooculogram signals through an artificial neural network

    International Nuclear Information System (INIS)

    Erkaymaz, Hande; Ozer, Mahmut; Orak, İlhami Muharrem

    2015-01-01

    The electrooculogram signal is very important for extracting information about directional eye movements. Therefore, in this study, we propose a new intelligent detection model involving an artificial neural network for eye movements based on electrooculogram signals. In addition to conventional eye movements, our model also involves the detection of tics and blinking of an eye. We extract only two features from the electrooculogram signals and use them as inputs for a feed-forward artificial neural network. We develop a new approach to compute these two features, which we call the movement range. The results suggest that the proposed model has the potential to become a new tool to determine directional eye movements accurately.
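
    The record names only two input features (a "movement range" per channel) feeding a feed-forward network. The Python sketch below uses scikit-learn's MLPClassifier with a peak-to-peak amplitude per EOG channel as a stand-in for the paper's feature; the feature definition, labels and toy data are all assumptions.

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        def movement_range(eog_channel):
            """Peak-to-peak amplitude of one EOG trace, used here as a stand-in for the
            paper's 'movement range' feature (the exact definition is not given)."""
            x = np.asarray(eog_channel, dtype=float)
            return x.max() - x.min()

        # Toy training set: [horizontal range, vertical range] -> movement class.
        X = np.array([[0.9, 0.1], [0.1, 0.8], [0.85, 0.15], [0.12, 0.9]])
        y = ["horizontal", "vertical", "horizontal", "vertical"]
        net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
        print(net.predict([[0.8, 0.2]]))  # expected: ['horizontal']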

  3. Joint Facial Action Unit Detection and Feature Fusion: A Multi-Conditional Learning Approach

    NARCIS (Netherlands)

    Eleftheriadis, Stefanos; Rudovic, Ognjen; Pantic, Maja

    2016-01-01

    Automated analysis of facial expressions can benefit many domains, from marketing to clinical diagnosis of neurodevelopmental disorders. Facial expressions are typically encoded as a combination of facial muscle activations, i.e., action units. Depending on context, these action units co-occur in

  4. Speech-like orofacial oscillations in stump-tailed macaque (Macaca arctoides) facial and vocal signals.

    Science.gov (United States)

    Toyoda, Aru; Maruhashi, Tamaki; Malaivijitnond, Suchinda; Koda, Hiroki

    2017-10-01

    Speech is unique to humans and characterized by facial actions of ∼5 Hz oscillations of lip, mouth or jaw movements. Lip-smacking, a facial display of primates characterized by oscillatory actions involving the vertical opening and closing of the jaw and lips, exhibits stable 5-Hz oscillation patterns, matching that of speech, suggesting that lip-smacking is a precursor of speech. We tested if facial or vocal actions exhibiting the same rate of oscillation are found in wide forms of facial or vocal displays in various social contexts, exhibiting diversity among species. We observed facial and vocal actions of wild stump-tailed macaques (Macaca arctoides), and selected video clips including facial displays (teeth chattering; TC), panting calls, and feeding. Ten open-to-open mouth durations during TC and feeding and five amplitude peak-to-peak durations in panting were analyzed. Facial display (TC) and vocalization (panting) oscillated within 5.74 ± 1.19 and 6.71 ± 2.91 Hz, respectively, similar to the reported lip-smacking of long-tailed macaques and the speech of humans. These results indicated a common mechanism for the central pattern generator underlying orofacial movements, which would evolve to speech. Similar oscillations in panting, which evolved from different muscular control than the orofacial action, suggested the sensory foundations for perceptual saliency particular to 5-Hz rhythms in macaques. This supports the pre-adaptation hypothesis of speech evolution, which states a central pattern generator for 5-Hz facial oscillation and perceptual background tuned to 5-Hz actions existed in common ancestors of macaques and humans, before the emergence of speech. © 2017 Wiley Periodicals, Inc.

  5. Music and 25% glucose for preterm babies during the pre-procedure for arterial puncture: facial mimics emphasis

    Directory of Open Access Journals (Sweden)

    Maria Vera Lúcia Moreira Leitão Cardoso

    2016-06-01

    We aimed to describe and quantify facial mimic movements of preterm babies during music and 25% glucose interventions in the pre-procedure period for arterial puncture. This was a randomized controlled trial involving 48 videos of preterm infants attended in a public neonatal unit in Fortaleza – Ceará. Data were collected from footage analyses during the pre-procedure period. Babies in the experimental group heard a lullaby for 10 minutes; in the control group, 25% glucose was administered at the end of the eighth minute, completing a total of 10 minutes of observation. We assessed the frequency of facial expressions: crying, sneezing, yawning, frowning of the forehead, focused sight, vague sight, sleeping and head movement. A statistically significant variable was found for the control group: vague sight (p=0.001) in the last two minutes of observation. We concluded that there was no association between most facial movements and the studied interventions, except for vague sight in the control group.

  6. Overt foot movement detection in one single Laplacian EEG derivation.

    Science.gov (United States)

    Solis-Escalante, Teodoro; Müller-Putz, Gernot; Pfurtscheller, Gert

    2008-10-30

    In this work one single Laplacian derivation and a full description of band power values in a broad frequency band are used to detect brisk foot movement execution in the ongoing EEG. Two support vector machines (SVM) are trained to detect the event-related desynchronization (ERD) during motor execution and the following beta rebound (event-related synchronization, ERS) independently. Their performance is measured through the simulation of an asynchronous brain switch. ERS (true positive rate = 0.74 ± 0.21) after motor execution is shown to be more stable than ERD (true positive rate = 0.21 ± 0.12). A novel combination of ERD and post-movement ERS is introduced. The SVM outputs are combined with a product rule to merge ERD and ERS detection. For this novel approach the average information transfer rate obtained was 11.19 ± 3.61 bits/min.
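
    The product-rule fusion mentioned above can be illustrated in a few lines of Python; the probability-like detector scores, the pairing of a during-movement (ERD) score with a post-movement (ERS) score, and the 0.5 decision threshold are assumptions for the sketch, not values from the paper.

        def combine_detectors(p_erd, p_ers, threshold=0.5):
            """Product-rule fusion of two detector outputs (e.g., probability-like SVM
            scores for ERD during movement and for the post-movement beta ERS).
            A movement is reported only when the combined evidence is strong enough."""
            return (p_erd * p_ers) > threshold

        print(combine_detectors(0.9, 0.8))  # True: both detectors agree
        print(combine_detectors(0.9, 0.3))  # False: weak ERS evidence suppresses the output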

  7. Detection and correction of patient movement in prostate brachytherapy seed reconstruction

    Science.gov (United States)

    Lam, Steve T.; Cho, Paul S.; Marks, Robert J., II; Narayanan, Sreeram

    2005-05-01

    Intraoperative dosimetry of prostate brachytherapy can help optimize the dose distribution and potentially improve clinical outcome. Evaluation of dose distribution during the seed implant procedure requires the knowledge of 3D seed coordinates. Fluoroscopy-based seed localization is a viable option. From three x-ray projections obtained at different gantry angles, 3D seed positions can be determined. However, when local anaesthesia is used for prostate brachytherapy, the patient movement during fluoroscopy image capture becomes a practical problem. If uncorrected, the errors introduced by patient motion between image captures would cause seed mismatches. Subsequently, the seed reconstruction algorithm would either fail to reconstruct or yield erroneous results. We have developed an algorithm that permits detection and correction of patient movement that may occur between fluoroscopy image captures. The patient movement is decomposed into translational shifts along the tabletop and rotation about an axis perpendicular to the tabletop. The property of spatial invariance of the co-planar imaging geometry is used for lateral movement correction. Cranio-caudal movement is corrected by analysing the perspective invariance along the x-ray axis. Rotation is estimated by an iterative method. The method can detect and correct for the range of patient movement commonly seen in the clinical environment. The algorithm has been implemented for routine clinical use as the preprocessing step for seed reconstruction.

  8. Hypoglossal-facial nerve "side"-to-side neurorrhaphy for facial paralysis resulting from closed temporal bone fractures.

    Science.gov (United States)

    Su, Diya; Li, Dezhi; Wang, Shiwei; Qiao, Hui; Li, Ping; Wang, Binbin; Wan, Hong; Schumacher, Michael; Liu, Song

    2018-06-06

    Closed temporal bone fractures due to cranial trauma often result in facial nerve injury, frequently inducing incomplete facial paralysis. Conventional hypoglossal-facial nerve end-to-end neurorrhaphy may not be suitable for these injuries because sacrifice of the lesioned facial nerve for neurorrhaphy destroys the remnant axons and/or potential spontaneous innervation. We therefore modified the classical method, performing hypoglossal-facial nerve "side"-to-side neurorrhaphy with an interpositional predegenerated nerve graft to treat these injuries. Five patients who experienced facial paralysis resulting from closed temporal bone fractures due to cranial trauma were treated with the "side"-to-side neurorrhaphy. An additional 4 patients did not receive the neurorrhaphy and served as controls. Before treatment, all patients had suffered House-Brackmann (H-B) grade V or VI facial paralysis for a mean of 5 months. During the 12- to 30-month follow-up period, no further detectable deficits were observed, but an improvement in facial nerve function was evident over time in the 5 neurorrhaphy-treated patients. At the end of follow-up, the improved facial function reached H-B grade II in 3, grade III in 1 and grade IV in 1 of the 5 patients, consistent with the electrophysiological examinations. In the control group, two patients showed slight spontaneous innervation, with facial function improving from H-B grade VI to V, and the other patients remained unchanged at H-B grade V or VI. We concluded that hypoglossal-facial nerve "side"-to-side neurorrhaphy can preserve the injured facial nerve and is suitable for treating significant incomplete facial paralysis resulting from closed temporal bone fractures, providing an evident beneficial effect. Moreover, this treatment may be performed earlier after the onset of facial paralysis in order to reduce the unfavorable changes to the injured facial nerve and atrophy of its target muscles due to long-term denervation and allow axonal

  9. Central nervous system abnormalities on midline facial defects with hypertelorism detected by magnetic resonance image and computed tomography

    International Nuclear Information System (INIS)

    Lopes, Vera Lucia Gil da Silva; Giffoni, Silvio David Araujo

    2006-01-01

    The aim of this study was to describe and compare structural central nervous system (CNS) anomalies detected by magnetic resonance imaging (MRI) and computed tomography (CT) in individuals affected by midline facial defects with hypertelorism (MFDH), either isolated or associated with multiple congenital anomalies (MCA). The investigation protocol included dysmorphological examination, skull and facial X-rays, and brain CT and/or MRI. We studied 24 individuals, 12 of whom had the isolated form (Group I) and the others MCA of unknown etiology (Group II). There was no significant difference between Groups I and II, and the results are therefore presented together. In addition to the several CNS anomalies previously described, MRI (n=18) was useful for the detection of neuronal migration errors. These data suggest that structural CNS anomalies and MFDH have an intrinsic embryological relationship, which should be taken into account during clinical follow-up. (author)

  10. Quantitative Magnetic Resonance Imaging Volumetry of Facial Muscles in Healthy Patients with Facial Palsy

    Science.gov (United States)

    Volk, Gerd F.; Karamyan, Inna; Klingner, Carsten M.; Reichenbach, Jürgen R.

    2014-01-01

    Background: Magnetic resonance imaging (MRI) has not yet been established systematically to detect structural muscular changes after facial nerve lesion. The purpose of this pilot study was to investigate quantitative assessment of MRI muscle volume data for facial muscles. Methods: Ten healthy subjects and 5 patients with facial palsy were recruited. Using manual or semiautomatic segmentation of 3T MRI, volume measurements were performed for the frontal, procerus, risorius, corrugator supercilii, orbicularis oculi, nasalis, zygomaticus major, zygomaticus minor, levator labii superioris, orbicularis oris, depressor anguli oris, depressor labii inferioris, and mentalis, as well as for the masseter and temporalis as masticatory muscles for control. Results: All muscles except the frontal (identification in 4/10 volunteers), procerus (4/10), risorius (6/10), and zygomaticus minor (8/10) were identified in all volunteers. Sex or age effects were not seen (all P > 0.05). There was no facial asymmetry with exception of the zygomaticus major (larger on the left side; P = 0.012). The exploratory examination of 5 patients revealed considerably smaller muscle volumes on the palsy side 2 months after facial injury. One patient with chronic palsy showed substantial muscle volume decrease, which also occurred in another patient with incomplete chronic palsy restricted to the involved facial area. Facial nerve reconstruction led to mixed results of decreased but also increased muscle volumes on the palsy side compared with the healthy side. Conclusions: First systematic quantitative MRI volume measures of 5 different clinical presentations of facial paralysis are provided. PMID:25289366

  11. Dimensional Information-Theoretic Measurement of Facial Emotion Expressions in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Jihun Hamm

    2014-01-01

    Altered facial expressions of emotions are characteristic impairments in schizophrenia. Ratings of affect have traditionally been limited to clinical rating scales and facial muscle movement analysis, which require extensive training and have limitations based on methodology and ecological validity. To improve reliable assessment of dynamic facial expression changes, we have developed automated measurements of facial emotion expressions based on information-theoretic measures of expressivity: the ambiguity and distinctiveness of facial expressions. These measures were examined in matched groups of persons with schizophrenia (n=28) and healthy controls (n=26) who underwent video acquisition to assess the expressivity of basic emotions (happiness, sadness, anger, fear, and disgust) in evoked conditions. Persons with schizophrenia scored higher on ambiguity, the measure of conditional entropy within the expression of a single emotion, and they scored lower on distinctiveness, the measure of mutual information across expressions of different emotions. The automated measures compared favorably with observer-based ratings. This method can be applied for delineating dynamic emotional expressivity in healthy and clinical populations.
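
    To make the two named quantities concrete, here is a small Python sketch that estimates an ambiguity-like conditional entropy and a distinctiveness-like mutual information from a joint count table of intended emotion versus expressed facial-expression category. It mirrors the measures named in the record, not the authors' exact estimator, and the toy counts are invented.

        import numpy as np

        def ambiguity_and_distinctiveness(counts):
            """counts[i, j]: frames in which intended emotion i was expressed as
            facial-expression category j. Ambiguity ~ H(expression | emotion);
            distinctiveness ~ I(emotion; expression), both in bits."""
            p = counts / counts.sum()
            p_e = p.sum(axis=1, keepdims=True)   # P(emotion)
            p_x = p.sum(axis=0, keepdims=True)   # P(expression)
            with np.errstate(divide="ignore", invalid="ignore"):
                cond = np.where(p > 0, p * np.log2(p / p_e), 0.0)
                mi = np.where(p > 0, p * np.log2(p / (p_e * p_x)), 0.0)
            return -cond.sum(), mi.sum()

        counts = np.array([[18, 2], [3, 17]])  # toy 2-emotion x 2-category table
        print(ambiguity_and_distinctiveness(counts))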

  12. More emotional facial expressions during episodic than during semantic autobiographical retrieval.

    Science.gov (United States)

    El Haj, Mohamad; Antoine, Pascal; Nandrino, Jean Louis

    2016-04-01

    There is a substantial body of research on the relationship between emotion and autobiographical memory. Using facial analysis software, our study addressed this relationship by investigating basic emotional facial expressions that may be detected during autobiographical recall. Participants were asked to retrieve 3 autobiographical memories, each of which was triggered by one of the following cue words: happy, sad, and city. The autobiographical recall was analyzed by a software for facial analysis that detects and classifies basic emotional expressions. Analyses showed that emotional cues triggered the corresponding basic facial expressions (i.e., happy facial expression for memories cued by happy). Furthermore, we dissociated episodic and semantic retrieval, observing more emotional facial expressions during episodic than during semantic retrieval, regardless of the emotional valence of cues. Our study provides insight into facial expressions that are associated with emotional autobiographical memory. It also highlights an ecological tool to reveal physiological changes that are associated with emotion and memory.

  13. Quality-Aware Estimation of Facial Landmarks in Video Sequences

    DEFF Research Database (Denmark)

    Haque, Mohammad Ahsanul; Nasrollahi, Kamal; Moeslund, Thomas B.

    2015-01-01

    Face alignment in video is a primitive step for facial image analysis. The accuracy of the alignment greatly depends on the quality of the face image in the video frames and low quality faces are proven to cause erroneous alignment. Thus, this paper proposes a system for quality aware face...... for facial landmark detection. If the face quality is low the proposed system corrects the facial landmarks that are detected by SDM. Depending upon the face velocity in consecutive video frames and face quality measure, two algorithms are proposed for correction of landmarks in low quality faces by using...

  14. Face processing regions are sensitive to distinct aspects of temporal sequence in facial dynamics.

    Science.gov (United States)

    Reinl, Maren; Bartels, Andreas

    2014-11-15

    Facial movement conveys important information for social interactions, yet its neural processing is poorly understood. Computational models propose that shape- and temporal sequence sensitive mechanisms interact in processing dynamic faces. While face processing regions are known to respond to facial movement, their sensitivity to particular temporal sequences has barely been studied. Here we used fMRI to examine the sensitivity of human face-processing regions to two aspects of directionality in facial movement trajectories. We presented genuine movie recordings of increasing and decreasing fear expressions, each of which were played in natural or reversed frame order. This two-by-two factorial design matched low-level visual properties, static content and motion energy within each factor, emotion-direction (increasing or decreasing emotion) and timeline (natural versus artificial). The results showed sensitivity for emotion-direction in FFA, which was timeline-dependent as it only occurred within the natural frame order, and sensitivity to timeline in the STS, which was emotion-direction-dependent as it only occurred for decreased fear. The occipital face area (OFA) was sensitive to the factor timeline. These findings reveal interacting temporal sequence sensitive mechanisms that are responsive to both ecological meaning and to prototypical unfolding of facial dynamics. These mechanisms are temporally directional, provide socially relevant information regarding emotional state or naturalness of behavior, and agree with predictions from modeling and predictive coding theory. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Management of synkinesis and asymmetry in facial nerve palsy: a review article.

    Science.gov (United States)

    Pourmomeny, Abbas Ali; Asadi, Sahar

    2014-10-01

    The important sequelae of facial nerve palsy are synkinesis, asymmetry, hypertonia and contracture, all of which have psychosocial effects on patients. Synkinesis, due to mal-regeneration, causes involuntary movements during a voluntary movement. Previous studies have advocated treatment using physiotherapy modalities alone or with exercise therapy, but no consensus exists on the optimal approach. Thus, this review summarizes clinical controlled studies on the management of synkinesis and asymmetry in facial nerve palsy. Case-controlled clinical studies of patients at the acute stage of injury were selected for this review article. Data were obtained from English-language databases from 1980 until mid-2013. Among 124 articles initially captured, six randomized controlled trials involving 269 patients were identified with appropriate inclusion criteria. The results of all these studies emphasized the benefit of exercise therapy. Four studies considered electromyogram (EMG) biofeedback to be effective through neuromuscular re-education. Synkinesis and inconsistency of facial muscles could be treated with educational exercise therapy. EMG biofeedback is a suitable tool for this exercise therapy.

  16. Management of Synkinesis and Asymmetry in Facial Nerve Palsy: A Review Article

    Directory of Open Access Journals (Sweden)

    Abbasali Pourmomeny

    2014-10-01

    Introduction: The important sequelae of facial nerve palsy are synkinesis, asymmetry, hypertonia and contracture, all of which have psychosocial effects on patients. Synkinesis, due to mal-regeneration, causes involuntary movements during a voluntary movement. Previous studies have advocated treatment using physiotherapy modalities alone or with exercise therapy, but no consensus exists on the optimal approach. Thus, this review summarizes clinical controlled studies on the management of synkinesis and asymmetry in facial nerve palsy. Materials and Methods: Case-controlled clinical studies of patients at the acute stage of injury were selected for this review article. Data were obtained from English-language databases from 1980 until mid-2013. Results: Among 124 articles initially captured, six randomized controlled trials involving 269 patients were identified with appropriate inclusion criteria. The results of all these studies emphasized the benefit of exercise therapy. Four studies considered electromyogram (EMG) biofeedback to be effective through neuromuscular re-education. Conclusion: Synkinesis and inconsistency of facial muscles could be treated with educational exercise therapy. EMG biofeedback is a suitable tool for this exercise therapy.

  17. Development of a detection system for head movement robust to illumination change at radiotherapy

    International Nuclear Information System (INIS)

    Yamakawa, Takuya; Ogawa, Koichi; Iyatomi, Hitoshi; Kunieda, Etsuo

    2010-01-01

    This study reports the development of a system for detecting head movement during stereotactic radiotherapy of head tumors. The system applies a pattern-matching algorithm as follows. Regions of interest such as the nose and the right and left ears, the objects whose movement is to be traced, are selected via a GUI (graphical user interface) from pictures taken by three USB cameras (DC-NCR20U, Hanwha, Japan) mounted on supportive arms around the head, to create templates of the standard position; the frame pictures (5 fps) acquired for real-time monitoring are then matched against the templates so that movement can be detected from the distance between the template and the matched points, with precision improved by calculating mean square errors. An alarm is raised when the movement exceeds the permissible range. In the actual clinical setting, false detection of movement occurs because of illumination changes caused by gantry motion, so infrared images are used instead of images taken under ordinary room lighting. This reduces position errors from 16.7, 9.5 and 8.1 mm (room-light condition) to 0.6, 0.3 and 0.2 mm (infrared) for the nose, right and left ears, respectively. Thus, a head movement detection system robust to illumination change (error <1 mm) was established for radiotherapy. (T.T.)
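
    The matching step described above (landmark templates located in each new frame by minimizing the mean square error, with an alarm when the matched position drifts too far) can be sketched as follows in Python. The exhaustive search, the mm-per-pixel scaling and the 1 mm tolerance are illustrative assumptions rather than the implemented system.

        import numpy as np

        def locate_template(frame, template):
            """Exhaustive template matching: return the (row, col) offset where the
            mean square error between the template patch and the frame is smallest."""
            th, tw = template.shape
            best, best_pos = np.inf, (0, 0)
            for r in range(frame.shape[0] - th + 1):
                for c in range(frame.shape[1] - tw + 1):
                    mse = np.mean((frame[r:r + th, c:c + tw] - template) ** 2)
                    if mse < best:
                        best, best_pos = mse, (r, c)
            return best_pos

        def movement_alarm(ref_pos, cur_pos, mm_per_px, tolerance_mm=1.0):
            """Raise an alarm when the matched landmark drifts beyond the permissible range."""
            d_px = np.hypot(cur_pos[0] - ref_pos[0], cur_pos[1] - ref_pos[1])
            return d_px * mm_per_px > tolerance_mm

        # Example: a 1-pixel drift at 0.4 mm/pixel stays within the 1 mm tolerance.
        print(movement_alarm((10, 10), (10, 11), mm_per_px=0.4))  # False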

  18. Facial Emotion Recognition: A Survey and Real-World User Experiences in Mixed Reality

    Directory of Open Access Journals (Sweden)

    Dhwani Mehta

    2018-02-01

    Extensive possibilities of applications have made emotion recognition ineluctable and challenging in the field of computer science. The use of non-verbal cues such as gestures, body movement, and facial expressions convey the feeling and the feedback to the user. This discipline of Human–Computer Interaction places reliance on the algorithmic robustness and the sensitivity of the sensor to ameliorate the recognition. Sensors play a significant role in accurate detection by providing a very high-quality input, hence increasing the efficiency and the reliability of the system. Automatic recognition of human emotions would help in teaching social intelligence in the machines. This paper presents a brief study of the various approaches and the techniques of emotion recognition. The survey covers a succinct review of the databases that are considered as data sets for algorithms detecting the emotions by facial expressions. Later, the mixed reality device Microsoft HoloLens (MHL) is introduced for observing emotion recognition in Augmented Reality (AR). A brief introduction of its sensors, their application in emotion recognition and some preliminary results of emotion recognition using MHL are presented. The paper then concludes by comparing results of emotion recognition by the MHL and a regular webcam.

  19. Can We Distinguish Emotions from Faces? Investigation of Implicit and Explicit Processes of Peak Facial Expressions.

    Science.gov (United States)

    Xiao, Ruiqi; Li, Xianchun; Li, Lin; Wang, Yanmei

    2016-01-01

    Most previous studies on facial expression recognition have focused on the moderate emotions; to date, few studies have been conducted to investigate the explicit and implicit processes of peak emotions. In the current study, we used transiently peak intense expression images of athletes at the winning or losing point in competition as materials, and investigated the diagnosability of peak facial expressions at both implicit and explicit levels. In Experiment 1, participants were instructed to evaluate isolated faces, isolated bodies, and the face-body compounds, and eye-tracking movement was recorded. The results revealed that the isolated body and face-body congruent images were better recognized than isolated face and face-body incongruent images, indicating that the emotional information conveyed by facial cues was ambiguous, and the body cues influenced facial emotion recognition. Furthermore, eye movement records showed that the participants displayed distinct gaze patterns for the congruent and incongruent compounds. In Experiment 2A, the subliminal affective priming task was used, with faces as primes and bodies as targets, to investigate the unconscious emotion perception of peak facial expressions. The results showed that winning face prime facilitated reaction to winning body target, whereas losing face prime inhibited reaction to winning body target, suggesting that peak facial expressions could be perceived at the implicit level. In general, the results indicate that peak facial expressions cannot be consciously recognized but can be perceived at the unconscious level. In Experiment 2B, revised subliminal affective priming task and a strict awareness test were used to examine the validity of unconscious perception of peak facial expressions found in Experiment 2A. Results of Experiment 2B showed that reaction time to both winning body targets and losing body targets was influenced by the invisibly peak facial expression primes, which indicated the

  20. Detección facial y reconocimiento anímico mediante las expresiones faciales (Facial detection and mood recognition through facial expressions)

    OpenAIRE

    BARTUAL GONZÁLEZ, RAQUEL

    2017-01-01

    The project is based on software developed with the LabView program. The program aims to detect people's faces in a video, as well as their gender and mood state. Bartual González, R. (2017). Detección facial y reconocimiento anímico mediante las expresiones faciales...

  1. 3D Facial Pattern Analysis for Autism

    Science.gov (United States)

    2010-07-01

    [Fragmentary excerpts from the report] ... et al. (2001) proposed a two-level Gabor wavelet network (GWN) to detect eight facial features; in Bhuiyan et al. (2003) six facial features are ... Reference fragment: Toyama, K., Krüger, V., 2001. Hierarchical Wavelet Networks for Facial Feature Localization. ICCV'01 Workshop on Recognition, Analysis and ... Figure-caption fragment: pathological (red) and normal structure (blue); (b) signed distance map (negative distance indicates the pathological shape is inside); (c) raw ...

  2. Involuntary movement during mastication in patients with long-term facial paralysis reanimated with a partial gracilis free neuromuscular flap innervated by the masseteric nerve.

    Science.gov (United States)

    Rozen, Shai; Harrison, Bridget

    2013-07-01

    Midface reanimation in patients with chronic facial paralysis is not always possible with an ipsilateral or contralateral facial nerve innervating a free neuromuscular tissue transfer. Alternate use of nonfacial nerves is occasionally indicated but may potentially result in inadvertent motions. The goal of this study was to objectively review videos of patients who underwent one-stage reanimation with a gracilis muscle transfer innervated by the masseteric nerve for (1) inadvertent motion during eating, (2) characterization of masticatory patterns, and (3) social hindrance perceived by the patients during meals. Between the years 2009 and 2012, 18 patients underwent midfacial reanimation with partial gracilis muscle transfer coapted to the masseter nerve for treatment of midfacial paralysis. Sixteen patients were videotaped in detail while eating. Involuntary midface movement on the reconstructed side and mastication patterns were assessed. In addition, 16 patients were surveyed as to whether involuntary motion constituted a problem in their daily lives. All 16 patients videotaped during mastication demonstrated involuntary motion on the reconstructed side while eating. Several unique masticatory patterns were noted as well. Only one of the 16 patients reported involuntary motion as a minor disturbance in daily life during meals. All patients with chronic facial paralysis who plan to undergo midface reanimation with a free tissue transfer innervated by the ipsilateral masseter nerve should be told that they would universally have involuntary animation during mastication. Most patients do not consider this a major drawback in their daily lives. Therapeutic, IV.

  3. Disección anatómica de la musculatura mímica facial: revisión iconográfica de apoyo a los tratamientos complementarios en rejuvenecimiento facial Anatomical dissection of the mimic facial musculature: iconographic review as a support to the complementary treatments in facial rejuvenation

    Directory of Open Access Journals (Sweden)

    C. Casado Sánchez

    2011-03-01

    Full Text Available When assessing the many techniques used in facial rejuvenation, and focusing in particular on those minimally invasive procedures that complement the usual interventions in Plastic and Aesthetic Surgery, exhaustive knowledge of the muscular structures involved in facial mimicry becomes especially relevant. To this end, an anatomical study was performed on fresh cadavers, in which the principal structures referred to above were dissected. We present an iconographic summary of the facial muscles involved, emphasizing their descriptive and functional anatomy, as well as a review of the main areas that are problematic because of some special circumstance (presence of a sensory or motor nerve).

  4. Evaluation of a physiotherapeutic treatment intervention in "Bell's" facial palsy.

    Science.gov (United States)

    Cederwall, Elisabet; Olsén, Monika Fagevik; Hanner, Per; Fogdestam, Ingemar

    2006-01-01

    The aim of this study was to evaluate a physiotherapeutic treatment intervention in Bell's palsy. A consecutive series of nine patients with Bell's palsy participated in the study. The subjects were enrolled 4-21 weeks after the onset of facial paralysis. The study had a single subject experimental design with a baseline period of 2-6 weeks and a treatment period of 26-42 weeks. The patients were evaluated using a facial grading score, a paresis index and a written questionnaire created for this study. Every patient was taught to perform an exercise program twice daily, including movements of the muscles surrounding the mouth, nose, eyes and forehead. All the patients improved in terms of symmetry at rest, movement and function. In conclusion, patients with remaining symptoms of Bell's palsy appear to experience positive effects from a specific training program. A larger study, however, is needed to fully evaluate the treatment.

  5. Are there differential deficits in facial emotion recognition between paranoid and non-paranoid schizophrenia? A signal detection analysis.

    Science.gov (United States)

    Huang, Charles Lung-Cheng; Hsiao, Sigmund; Hwu, Hai-Gwo; Howng, Shen-Long

    2013-10-30

    This study assessed facial emotion recognition abilities in subjects with paranoid and non-paranoid schizophrenia using signal detection theory. We explored the differential deficits in facial emotion recognition in 44 paranoid patients with schizophrenia (PS) and 30 non-paranoid patients with schizophrenia (NPS), compared with 80 healthy controls. We used morphed faces with different intensities of emotion and computed the sensitivity index (d') for each emotion. The results showed that performance differed between the schizophrenia and healthy control groups in the recognition of both negative and positive affects. The PS group performed worse than the healthy control group but better than the NPS group in overall performance. Performance differed between the NPS and healthy control groups in the recognition of all basic emotions and neutral faces; between the PS and healthy control groups in the recognition of angry faces; and between the PS and NPS groups in the recognition of happiness, anger, sadness, disgust, and neutral affects. The facial emotion recognition impairment in schizophrenia may reflect a generalized deficit rather than a negative-emotion specific deficit. The PS group performed worse than the control group, but better than the NPS group in facial expression recognition, with differential deficits between PS and NPS patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
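    For readers unfamiliar with the sensitivity index used above, d' is the difference between the z-transformed hit and false-alarm rates. A minimal sketch of that computation follows; the response counts and the log-linear correction are illustrative assumptions, not data or procedures from the study.

```python
# Minimal sketch of a d' (sensitivity index) computation as used in signal
# detection analyses of emotion recognition; the counts below are illustrative.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Return d' = z(hit rate) - z(false-alarm rate), with a standard
    log-linear correction to avoid infinite z-scores at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: one participant's responses to "angry" faces vs. non-angry foils.
print(round(d_prime(hits=38, misses=12, false_alarms=9, correct_rejections=41), 2))
```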

  6. Four not six: Revealing culturally common facial expressions of emotion.

    Science.gov (United States)

    Jack, Rachael E; Sun, Wei; Delis, Ioannis; Garrod, Oliver G B; Schyns, Philippe G

    2016-06-01

    As a highly social species, humans generate complex facial expressions to communicate a diverse range of emotions. Since Darwin's work, identifying among these complex patterns which are common across cultures and which are culture-specific has remained a central question in psychology, anthropology, philosophy, and more recently machine vision and social robotics. Classic approaches to addressing this question typically tested the cross-cultural recognition of theoretically motivated facial expressions representing 6 emotions, and reported universality. Yet, variable recognition accuracy across cultures suggests a narrower cross-cultural communication supported by sets of simpler expressive patterns embedded in more complex facial expressions. We explore this hypothesis by modeling the facial expressions of over 60 emotions across 2 cultures, and segregating out the latent expressive patterns. Using a multidisciplinary approach, we first map the conceptual organization of a broad spectrum of emotion words by building semantic networks in 2 cultures. For each emotion word in each culture, we then model and validate its corresponding dynamic facial expression, producing over 60 culturally valid facial expression models. We then apply to the pooled models a multivariate data reduction technique, revealing 4 latent and culturally common facial expression patterns that each communicates specific combinations of valence, arousal, and dominance. We then reveal the face movements that accentuate each latent expressive pattern to create complex facial expressions. Our data questions the widely held view that 6 facial expression patterns are universal, instead suggesting 4 latent expressive patterns with direct implications for emotion communication, social psychology, cognitive neuroscience, and social robotics. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Visual Working Memory Capacity for Emotional Facial Expressions

    Directory of Open Access Journals (Sweden)

    Domagoj Švegar

    2011-12-01

    Full Text Available The capacity of visual working memory is limited to no more than four items. At the same time, it is limited not only by the number of objects, but also by the total amount of information that needs to be memorized, and the relation between the information load per object and the number of objects that can be stored into visual working memory is inverse. The objective of the present experiment was to compute visual working memory capacity for emotional facial expressions, and in order to do so, change detection tasks were applied. Pictures of human emotional facial expressions were presented to 24 participants in 1008 experimental trials, each of which began with a presentation of a fixation mark, which was followed by a short simultaneous presentation of six emotional facial expressions. After that, a blank screen was presented, and after such inter-stimulus interval, one facial expression was presented at one of the previously occupied locations. Participants had to answer whether the facial expression presented at test was different from or identical to the expression presented at that same location before the retention interval. Memory capacity was estimated through accuracy of responding, by the formula constructed by Pashler (1988), adopted from signal detection theory. It was found that visual working memory capacity for emotional facial expressions equals 3.07, which is high compared to capacity for facial identities and other visual stimuli. The obtained results were explained within the framework of evolutionary psychology.
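    The capacity estimate referenced above is commonly written as K = S(H - F)/(1 - F), where S is the memory set size, H the hit rate, and F the false-alarm rate. A minimal sketch assuming that form of Pashler's formula (the response rates below are made up for illustration):

```python
# Illustrative sketch of Pashler's (1988) capacity estimate K = S*(H - F)/(1 - F);
# S = number of items in the memory array, H = hit rate, F = false-alarm rate.
def pashler_k(set_size, hit_rate, false_alarm_rate):
    return set_size * (hit_rate - false_alarm_rate) / (1.0 - false_alarm_rate)

# Example with the study's set size of six facial expressions and made-up rates.
print(round(pashler_k(set_size=6, hit_rate=0.78, false_alarm_rate=0.22), 2))  # ~4.31
```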

  8. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2012-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions
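    As a rough illustration of the kind of Haar-cascade face localization on a live video stream mentioned above (this is a generic OpenCV sketch, not the program described in the paper; the camera index and drawing step are assumptions):

```python
# Hedged sketch of Haar-cascade face detection on a live video stream with OpenCV.
# The cascade file ships with OpenCV; camera index 0 is an assumption.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
capture = cv2.VideoCapture(0)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns (x, y, w, h) boxes for each detected face.
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```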

  9. Automatic Facial Expression Recognition and Operator Functional State

    Science.gov (United States)

    Blanson, Nina

    2011-01-01

    The prevalence of human error in safety-critical occupations remains a major challenge to mission success despite increasing automation in control processes. Although various methods have been proposed to prevent incidences of human error, none of these have been developed to employ the detection and regulation of Operator Functional State (OFS), or the optimal condition of the operator while performing a task, in work environments due to drawbacks such as obtrusiveness and impracticality. A video-based system with the ability to infer an individual's emotional state from facial feature patterning mitigates some of the problems associated with other methods of detecting OFS, like obtrusiveness and impracticality in integration with the mission environment. This paper explores the utility of facial expression recognition as a technology for inferring OFS by first expounding on the intricacies of OFS and the scientific background behind emotion and its relationship with an individual's state. Then, descriptions of the feedback loop and the emotion protocols proposed for the facial recognition program are explained. A basic version of the facial expression recognition program uses Haar classifiers and OpenCV libraries to automatically locate key facial landmarks during a live video stream. Various methods of creating facial expression recognition software are reviewed to guide future extensions of the program. The paper concludes with an examination of the steps necessary in the research of emotion and recommendations for the creation of an automatic facial expression recognition program for use in real-time, safety-critical missions.

  10. The localization of facial motor impairment in sporadic Möbius syndrome.

    Science.gov (United States)

    Cattaneo, L; Chierici, E; Bianchi, B; Sesenna, E; Pavesi, G

    2006-06-27

    To investigate the neurophysiologic aspects of facial motor control in patients with sporadic Möbius syndrome defined as nonprogressive congenital facial and abducens palsy. The authors assessed 24 patients with sporadic Möbius syndrome by performing a complete clinical examination and neurophysiologic tests including facial nerve conduction studies, needle electromyography examination of facial muscles, and recording of the blink reflex and of the trigeminofacial inhibitory reflex. Two distinct groups of patients were identified according to neurophysiologic testing. The first group was characterized by increased facial distal motor latencies (DMLs) and poor recruitment of small and polyphasic motor unit action potentials (MUAPs). The second group was characterized by normal facial DMLs and neuropathic MUAPs. It is hypothesized that in the first group, the disorder is due to a rhombencephalic maldevelopment with selective sparing of small-size MUs, and in the second group, the disorder is related to an acquired nervous injury during intrauterine life, with subsequent neurogenic remodeling of MUs. The trigeminofacial reflexes showed that in most subjects of both groups, the functional impairment of facial movements was caused by a nuclear or peripheral site of lesion, with little evidence of brainstem interneuronal involvement. Two different neurophysiologically defined phenotypes can be distinguished in sporadic Möbius syndrome, with different pathogenetic implications.

  11. Recognizing Uncommon Presentations of Psychogenic (Functional) Movement Disorders

    Directory of Open Access Journals (Sweden)

    José Fidel Baizabal-Carvallo

    2015-01-01

    Full Text Available Background: Psychogenic or functional movement disorders (PMDs) pose a challenge in clinical diagnosis. There are several clues, including sudden onset, incongruous symptoms, distractibility, suggestibility, entrainment of symptoms, and lack of response to otherwise effective pharmacological therapies, that help identify the most common psychogenic movements such as tremor, dystonia, and myoclonus. Methods: In this manuscript, we review the frequency, distinct clinical features, functional imaging, and neurophysiological tests that can help in the diagnosis of uncommon presentations of PMDs, such as psychogenic parkinsonism, tics, and chorea; facial, palatal, and ocular movements are also reviewed. In addition, we discuss PMDs at the extremes of age and mass psychogenic illness. Results: Psychogenic parkinsonism (PP) is observed in less than 10% of the case series about PMDs, with a female–male ratio of roughly 1:1. Lack of amplitude decrement in repetitive movements and of cogwheel rigidity help to differentiate PP from true parkinsonism. Dopamine transporter imaging with photon emission tomography can also help in the diagnostic process. Psychogenic movements resembling tics are reported in about 5% of PMD patients. Lack of transient suppressibility of abnormal movements helps to differentiate them from organic tics. Psychogenic facial movements can present with hemifacial spasm, blepharospasm, and other movements. Some patients with essential palatal tremor have been shown to be psychogenic. Convergence ocular spasm has demonstrated a high specificity for psychogenic movements. PMDs can also present in the context of mass psychogenic illness or at the extremes of age. Discussion: Clinical features and ancillary studies are helpful in the diagnosis of patients with uncommon presentations of psychogenic movement disorders.

  12. Spontaneous facial expressions of emotion of congenitally and noncongenitally blind individuals.

    Science.gov (United States)

    Matsumoto, David; Willingham, Bob

    2009-01-01

    The study of the spontaneous expressions of blind individuals offers a unique opportunity to understand basic processes concerning the emergence and source of facial expressions of emotion. In this study, the authors compared the expressions of congenitally and noncongenitally blind athletes in the 2004 Paralympic Games with each other and with those produced by sighted athletes in the 2004 Olympic Games. The authors also examined how expressions change from 1 context to another. There were no differences between congenitally blind, noncongenitally blind, and sighted athletes, either on the level of individual facial actions or in facial emotion configurations. Blind athletes did produce more overall facial activity, but these were isolated to head and eye movements. The blind athletes' expressions differentiated whether they had won or lost a medal match at 3 different points in time, and there were no cultural differences in expression. These findings provide compelling evidence that the production of spontaneous facial expressions of emotion is not dependent on observational learning but simultaneously demonstrates a learned component to the social management of expressions, even among blind individuals.

  13. Paraneoplastic autoimmune movement disorders.

    Science.gov (United States)

    Lim, Thien Thien

    2017-11-01

    To provide an overview of paraneoplastic autoimmune disorders presenting with various movement disorders. The spectrum of paraneoplastic autoimmune disorders has been expanding with the discovery of new antibodies against cell surface and intracellular antigens. Many of these paraneoplastic autoimmune disorders manifest as a form of movement disorder. With the discovery of new neuronal antibodies, an increasing number of idiopathic or neurodegenerative movement disorders are now being reclassified as immune-mediated movement disorders. These include anti-N-methyl-d-aspartate receptor (NMDAR) encephalitis which may present with orolingual facial dyskinesia and stereotyped movements, CRMP-5 IgG presenting with chorea, anti-Yo paraneoplastic cerebellar degeneration presenting with ataxia, anti-VGKC complex (Caspr2 antibodies) neuromyotonia, opsoclonus-myoclonus-ataxia syndrome, and muscle rigidity and episodic spasms (amphiphysin, glutamic acid decarboxylase, glycine receptor, GABA(A)-receptor associated protein antibodies) in stiff-person syndrome. Movement disorders may be a presentation for paraneoplastic autoimmune disorders. Recognition of these disorders and their common phenomenology is important because it may lead to the discovery of an occult malignancy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Eye tracking reveals a crucial role for facial motion in recognition of faces by infants.

    Science.gov (United States)

    Xiao, Naiqi G; Quinn, Paul C; Liu, Shaoying; Ge, Liezhong; Pascalis, Olivier; Lee, Kang

    2015-06-01

    Current knowledge about face processing in infancy comes largely from studies using static face stimuli, but faces that infants see in the real world are mostly moving ones. To bridge this gap, 3-, 6-, and 9-month-old Asian infants (N = 118) were familiarized with either moving or static Asian female faces, and then their face recognition was tested with static face images. Eye-tracking methodology was used to record eye movements during the familiarization and test phases. The results showed a developmental change in eye movement patterns, but only for the moving faces. In addition, the more infants shifted their fixations across facial regions, the better their face recognition was, but only for the moving faces. The results suggest that facial movement influences the way faces are encoded from early in development. (c) 2015 APA, all rights reserved).

  15. Detection of cortical activities on eye movement using functional magnetic resonance imaging

    International Nuclear Information System (INIS)

    Yoshida, Masaki; Kawai, Kazushige; Kitahara, Kenji; Soulie, D.; Cordoliani, Y.S.; Iba-Zizen, M.T.; Cabanis, E.A.

    1997-01-01

    Cortical activity during eye movement was examined with functional magnetic resonance imaging. Horizontal saccadic eye movements and smooth pursuit eye movements were elicited in normal subjects. Activity in the frontal eye field was found during both saccadic and smooth pursuit eye movements at the posterior margin of the middle frontal gyrus and in parts of the precentral sulcus and precentral gyrus bordering the middle frontal gyrus (Brodmann's areas 8, 6, and 9). In addition, activity in the parietal eye field was found in the deep, upper margin of the angular gyrus and of the supramarginal gyrus (Brodmann's areas 39 and 40) during saccadic eye movement. Activity of V5 was found at the intersection of the ascending limb of the inferior temporal sulcus and the lateral occipital sulcus during smooth pursuit eye movement. Our results suggest that functional magnetic resonance imaging is useful for detecting cortical activity during eye movement. (author)

  16. The identification of unfolding facial expressions.

    Science.gov (United States)

    Fiorentini, Chiara; Schmidt, Susanna; Viviani, Paolo

    2012-01-01

    We asked whether the identification of emotional facial expressions (FEs) involves the simultaneous perception of the facial configuration or the detection of emotion-specific diagnostic cues. We recorded at high speed (500 frames per second) the unfolding of the FE in five actors, each expressing six emotions (anger, surprise, happiness, disgust, fear, sadness). Recordings were coded every 10 frames (20 ms of real time) with the Facial Action Coding System (FACS, Ekman et al 2002, Salt Lake City, UT: Research Nexus eBook) to identify the facial actions contributing to each expression, and their intensity changes over time. Recordings were shown in slow motion (1/20 of recording speed) to one hundred observers in a forced-choice identification task. Participants were asked to identify the emotion during the presentation as soon as they felt confident to do so. Responses were recorded along with the associated response times (RTs). The RT probability density functions for both correct and incorrect responses were correlated with the facial activity during the presentation. There were systematic correlations between facial activities, response probabilities, and RT peaks, and significant differences in RT distributions for correct and incorrect answers. The results show that a reliable response is possible long before the full FE configuration is reached. This suggests that identification is reached by integrating in time individual diagnostic facial actions, and does not require perceiving the full apex configuration.

  17. Space-by-time manifold representation of dynamic facial expressions for emotion categorization

    Science.gov (United States)

    Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.

    2016-01-01

    Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521
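    The space-by-time idea above treats each dynamic stimulus as an Action Unit x time matrix approximated by a few spatial and temporal components. The following much-simplified sketch illustrates such a separable low-rank approximation with a plain truncated SVD; it is not the decomposition used in the paper, and all shapes and names are assumptions.

```python
# Simplified sketch: approximate an Action-Unit x time movement matrix by a few
# separable space-by-time components via truncated SVD. This only illustrates
# the idea of a low-dimensional, space/time-separable representation.
import numpy as np

rng = np.random.default_rng(0)
n_action_units, n_time_bins, n_components = 42, 30, 4

movement = rng.random((n_action_units, n_time_bins))      # one stimulus
u, s, vt = np.linalg.svd(movement, full_matrices=False)

spatial_modules = u[:, :n_components]                      # AU weightings
temporal_modules = vt[:n_components, :]                    # time courses
coefficients = np.diag(s[:n_components])                   # component strengths

approximation = spatial_modules @ coefficients @ temporal_modules
print("reconstruction error:", np.linalg.norm(movement - approximation))
```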

  18. Sound-induced facial synkinesis following facial nerve paralysis

    NARCIS (Netherlands)

    Ma, Ming-San; van der Hoeven, Johannes H.; Nicolai, Jean-Philippe A.; Meek, Marcel F.

    Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two

  19. Extra Facial Landmark Localization via Global Shape Reconstruction

    Directory of Open Access Journals (Sweden)

    Shuqiu Tan

    2017-01-01

    Full Text Available Localizing facial landmarks is a popular topic in the field of face analysis. However, practical problems such as handling pose variations and partial occlusions while maintaining a moderate training model size and computational efficiency still challenge current solutions. In this paper, we present a global shape reconstruction method for locating extra facial landmarks compared with the facial landmarks used in the training phase. In the proposed method, the reduced configuration of facial landmarks is first decomposed into corresponding sparse coefficients. Then explicit face shape correlations are exploited to regress between sparse coefficients of different facial landmark configurations. Finally, extra facial landmarks are reconstructed by combining the pretrained shape dictionary and the approximation of sparse coefficients. By applying the proposed method, both the training time and the model size of a class of methods that stack local evidence as an appearance descriptor can be scaled down with only a minor compromise in detection accuracy. Extensive experiments prove that the proposed method is feasible and is able to reconstruct extra facial landmarks even under very asymmetrical face poses.

  20. Detection of cortical activities on eye movement using functional magnetic resonance imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yoshida, Masaki; Kawai, Kazushige; Kitahara, Kenji [Jikei Univ., Tokyo (Japan). School of Medicine; Soulie, D.; Cordoliani, Y.S.; Iba-Zizen, M.T.; Cabanis, E.A.

    1997-11-01

    Cortical activity during eye movement was examined with functional magnetic resonance imaging. Horizontal saccadic eye movements and smooth pursuit eye movements were elicited in normal subjects. Activity in the frontal eye field was found during both saccadic and smooth pursuit eye movements at the posterior margin of the middle frontal gyrus and in parts of the precentral sulcus and precentral gyrus bordering the middle frontal gyrus (Brodmann's areas 8, 6, and 9). In addition, activity in the parietal eye field was found in the deep, upper margin of the angular gyrus and of the supramarginal gyrus (Brodmann's areas 39 and 40) during saccadic eye movement. Activity of V5 was found at the intersection of the ascending limb of the inferior temporal sulcus and the lateral occipital sulcus during smooth pursuit eye movement. Our results suggest that functional magnetic resonance imaging is useful for detecting cortical activity during eye movement. (author)

  1. [Facial palsy].

    Science.gov (United States)

    Cavoy, R

    2013-09-01

    Facial palsy is a daily challenge for clinicians. Determining whether facial nerve palsy is peripheral or central is a key step in the diagnosis. Central nervous system lesions can give a facial palsy that may be easily differentiated from a peripheral palsy. The next question is whether the peripheral facial paralysis is idiopathic or symptomatic. A good knowledge of the anatomy of the facial nerve is helpful. A structured approach is given to identify additional features that distinguish symptomatic facial palsy from the idiopathic form. The main cause of peripheral facial palsy is the idiopathic one, or Bell's palsy, which remains a diagnosis of exclusion. The most common cause of symptomatic peripheral facial palsy is Ramsay-Hunt syndrome. Early identification of symptomatic facial palsy is important because of the often worse outcome and different management. The prognosis of Bell's palsy is on the whole favorable and is improved with a prompt tapering course of prednisone. In Ramsay-Hunt syndrome, antiviral therapy is added along with prednisone. We also discuss current treatment recommendations and review the short- and long-term complications of peripheral facial palsy.

  2. Sound-induced facial synkinesis following facial nerve paralysis.

    Science.gov (United States)

    Ma, Ming-San; van der Hoeven, Johannes H; Nicolai, Jean-Philippe A; Meek, Marcel F

    2009-08-01

    Facial synkinesis (or synkinesia) (FS) occurs frequently after paresis or paralysis of the facial nerve and is in most cases due to aberrant regeneration of (branches of) the facial nerve. Patients suffer from inappropriate and involuntary synchronous facial muscle contractions. Here we describe two cases of sound-induced facial synkinesis (SFS) after facial nerve injury. As far as we know, this phenomenon has not been described in the English literature before. Patient A presented with right hemifacial palsy after lesion of the facial nerve due to skull base fracture. He reported involuntary muscle activity at the right corner of the mouth, specifically on hearing ringing keys. Patient B suffered from left hemifacial palsy following otitis media and developed involuntary muscle contraction in the facial musculature specifically on hearing clapping hands or a trumpet sound. Both patients were evaluated by means of video, audio and EMG analysis. Possible mechanisms in the pathophysiology of SFS are postulated and therapeutic options are discussed.

  3. Decoding facial expressions based on face-selective and motion-sensitive areas.

    Science.gov (United States)

    Liang, Yin; Liu, Baolin; Xu, Junhai; Zhang, Gaoyan; Li, Xianglin; Wang, Peiyuan; Wang, Bin

    2017-06-01

    Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
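    As a rough sketch of the kind of multi-voxel pattern analysis (MVPA) decoding described above, a linear classifier can be trained on voxel patterns and evaluated with cross-validation; the arrays below are random placeholders, not fMRI data, and the pipeline is generic rather than the authors' exact analysis.

```python
# Hedged sketch of MVPA-style decoding: a linear SVM classifying expression
# labels from multi-voxel patterns with cross-validation. Random arrays stand
# in for response estimates from a face-selective or motion-sensitive ROI.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_trials, n_voxels, n_expressions = 120, 200, 6

patterns = rng.standard_normal((n_trials, n_voxels))       # trial x voxel
labels = rng.integers(0, n_expressions, size=n_trials)     # expression labels

decoder = make_pipeline(StandardScaler(), LinearSVC(dual=False))
accuracy = cross_val_score(decoder, patterns, labels, cv=5).mean()
print(f"mean decoding accuracy: {accuracy:.2f} (chance ~ {1 / n_expressions:.2f})")
```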

  4. Facilitation of facial nerve regeneration using chitosan-β-glycerophosphate-nerve growth factor hydrogel.

    Science.gov (United States)

    Chao, Xiuhua; Xu, Lei; Li, Jianfeng; Han, Yuechen; Li, Xiaofei; Mao, YanYan; Shang, Haiqiong; Fan, Zhaomin; Wang, Haibo

    2016-06-01

    Conclusion C/GP hydrogel was demonstrated to be an ideal drug delivery vehicle and scaffold in the vein conduit. Combined use of an autologous vein and NGF continuously delivered by C/GP-NGF hydrogel can improve the recovery of facial nerve defects. Objective This study investigated the effects of chitosan-β-glycerophosphate-nerve growth factor (C/GP-NGF) hydrogel combined with autologous vein conduit on the recovery of damaged facial nerve in a rat model. Methods A 5 mm gap in the buccal branch of a rat facial nerve was reconstructed with an autologous vein. Next, C/GP-NGF hydrogel was injected into the vein conduit. In negative control groups, NGF solution or phosphate-buffered saline (PBS) was injected into the vein conduits, respectively. Autologous implantation was used as a positive control group. Vibrissae movement, electrophysiological assessment, and morphological analysis of regenerated nerves were performed to assess nerve regeneration. Results NGF was continuously released from the C/GP-NGF hydrogel in vitro. The recovery rate of vibrissae movement and the compound muscle action potentials of the regenerated facial nerve in the C/GP-NGF group were similar to those in the Auto group, and significantly better than those in the NGF group. Furthermore, larger regenerated axons and thicker myelin sheaths were obtained in the C/GP-NGF group than in the NGF group.

  5. Peripheral facial weakness (Bell's palsy).

    Science.gov (United States)

    Basić-Kes, Vanja; Dobrota, Vesna Dermanović; Cesarik, Marijan; Matovina, Lucija Zadro; Madzar, Zrinko; Zavoreo, Iris; Demarin, Vida

    2013-06-01

    Peripheral facial weakness is damage to the facial nerve that results in muscle weakness on one side of the face. It may be idiopathic (Bell's palsy) or may have a detectable cause. Almost 80% of peripheral facial weakness cases are primary and the rest of them are secondary. The most frequent causes of secondary peripheral facial weakness are systemic viral infections, trauma, surgery, diabetes, local infections, tumor, immune disorders, drugs, degenerative diseases of the central nervous system, etc. The diagnosis relies upon the presence of typical signs and symptoms, blood chemistry tests, cerebrospinal fluid investigations, nerve conduction studies and neuroimaging methods (cerebral MRI, x-ray of the skull and mastoid). Treatment of secondary peripheral facial weakness is based on therapy for the underlying disorder, unlike the treatment of Bell's palsy that is controversial due to the lack of large, randomized, controlled, prospective studies. There are some indications that steroids or antiviral agents are beneficial but there are also studies that show no beneficial effect. Additional treatments include eye protection, physiotherapy, acupuncture, botulinum toxin, or surgery. Bell's palsy has a benign prognosis with complete recovery in about 80% of patients, 15% experience some mode of permanent nerve damage and severe consequences remain in 5% of patients.

  6. Facial dynamics and emotional expressions in facial aging treatments.

    Science.gov (United States)

    Michaud, Thierry; Gassia, Véronique; Belhaouari, Lakhdar

    2015-03-01

    Facial expressions convey emotions that form the foundation of interpersonal relationships, and many of these emotions promote and regulate our social linkages. Hence, the facial aging symptomatological analysis and the treatment plan must of necessity include knowledge of the facial dynamics and the emotional expressions of the face. This approach aims to more closely meet patients' expectations of natural-looking results, by correcting age-related negative expressions while observing the emotional language of the face. This article will successively describe patients' expectations, the role of facial expressions in relational dynamics, the relationship between facial structures and facial expressions, and the way facial aging mimics negative expressions. Eventually, therapeutic implications for facial aging treatment will be addressed. © 2015 Wiley Periodicals, Inc.

  7. An overview of the cosmetic treatment of facial muscles with a new botulinum toxin.

    Science.gov (United States)

    Wiest, Luitgard G

    2009-01-01

    Botulinum toxin (BTX) is nowadays used in a much more differentiated way, with a much more individualized approach to the cosmetic treatment of patients. In addition to the well-known areas of the upper face, new indications in the mid and lower face have been added. Microinjection techniques are increasingly used besides the classic intramuscular injection technique. BTX injections of the mid and lower face require small and very small dosages. The perioral muscles act in concert to achieve the extraordinarily complex movements that control facial expressions, eating, and speech. As the mouth has horizontal as well as vertical movements, paralysis of these perioral muscles has a greater effect on facial function and appearance than does paralysis of muscles of the upper face, which move primarily in the vertical direction. It is essential that BTX injections achieve the desired cosmetic result with the minimum dose and without any functional discomfort. In this paper, the three-year clinical experience with average dosages for an optimal outcome in the treatment of facial muscles with a newly developed botulinum toxin type A (Xeomin) free from complexing proteins is presented.

  8. Tactile Stimulation of the Face and the Production of Facial Expressions Activate Neurons in the Primate Amygdala.

    Science.gov (United States)

    Mosher, Clayton P; Zimmerman, Prisca E; Fuglevand, Andrew J; Gothard, Katalin M

    2016-01-01

    The majority of neurophysiological studies that have explored the role of the primate amygdala in the evaluation of social signals have relied on visual stimuli such as images of facial expressions. Vision, however, is not the only sensory modality that carries social signals. Both humans and nonhuman primates exchange emotionally meaningful social signals through touch. Indeed, social grooming in nonhuman primates and caressing touch in humans is critical for building lasting and reassuring social bonds. To determine the role of the amygdala in processing touch, we recorded the responses of single neurons in the macaque amygdala while we applied tactile stimuli to the face. We found that one-third of the recorded neurons responded to tactile stimulation. Although we recorded exclusively from the right amygdala, the receptive fields of 98% of the neurons were bilateral. A fraction of these tactile neurons were monitored during the production of facial expressions and during facial movements elicited occasionally by touch stimuli. Firing rates arising during the production of facial expressions were similar to those elicited by tactile stimulation. In a subset of cells, combining tactile stimulation with facial movement further augmented the firing rates. This suggests that tactile neurons in the amygdala receive input from skin mechanoceptors that are activated by touch and by compressions and stretches of the facial skin during the contraction of the underlying muscles. Tactile neurons in the amygdala may play a role in extracting the valence of touch stimuli and/or monitoring the facial expressions of self during social interactions.

  9. Facial nerve paralysis associated with temporal bone masses.

    Science.gov (United States)

    Nishijima, Hironobu; Kondo, Kenji; Kagoya, Ryoji; Iwamura, Hitoshi; Yasuhara, Kazuo; Yamasoba, Tatsuya

    2017-10-01

    To investigate the clinical and electrophysiological features of facial nerve paralysis (FNP) due to benign temporal bone masses (TBMs) and elucidate its differences as compared with Bell's palsy. FNP assessed by the House-Brackmann (HB) grading system and by electroneurography (ENoG) were compared retrospectively. We reviewed 914 patient records and identified 31 patients with FNP due to benign TBMs. Moderate FNP (HB Grades II-IV) was dominant for facial nerve schwannoma (FNS) (n=15), whereas severe FNP (Grades V and VI) was dominant for cholesteatomas (n=8) and hemangiomas (n=3). The average ENoG value was 19.8% for FNS, 15.6% for cholesteatoma, and 0% for hemangioma. Analysis of the correlation between HB grade and ENoG value for FNP due to TBMs and Bell's palsy revealed that given the same ENoG value, the corresponding HB grade was better for FNS, followed by cholesteatoma, and worst in Bell's palsy. Facial nerve damage caused by benign TBMs could depend on the underlying pathology. Facial movement and ENoG values did not correlate when comparing TBMs and Bell's palsy. When the HB grade is found to be unexpectedly better than the ENoG value, TBMs should be included in the differential diagnosis. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Rapid eye movement sleep behavior disorder as an outlier detection problem

    DEFF Research Database (Denmark)

    Kempfner, Jacob; Sørensen, Gertrud Laura; Nikolic, M.

    2014-01-01

    OBJECTIVE: Idiopathic rapid eye movement (REM) sleep behavior disorder is a strong early marker of Parkinson's disease and is characterized by REM sleep without atonia and/or dream enactment. Because these measures are subject to individual interpretation, there is consequently a need for quantitative methods to establish objective criteria. This study proposes a semiautomatic algorithm for the early detection of Parkinson's disease. This is achieved by distinguishing between normal REM sleep and REM sleep without atonia by considering muscle activity as an outlier detection problem. METHODS: Sixteen healthy control subjects, 16 subjects with idiopathic REM sleep behavior disorder, and 16 subjects with periodic limb movement disorder were enrolled. Different combinations of five surface electromyographic channels, including the EOG, were tested. A muscle activity score was automatically...
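    The outlier-detection framing above can be illustrated by scoring EMG feature vectors from REM mini-epochs against a model of quiet (atonic) activity and flagging epochs that fall outside it. The sketch below uses a generic covariance-based detector on synthetic features; it is not the study's algorithm.

```python
# Hedged sketch of treating REM-sleep muscle activity as an outlier-detection
# problem: EMG feature vectors from REM mini-epochs are scored against a model
# of quiet (atonic) activity, and epochs flagged as outliers are counted.
# Feature values are synthetic placeholders.
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.default_rng(2)
quiet_epochs = rng.normal(0.0, 1.0, size=(500, 3))          # atonic baseline features
test_epochs = np.vstack([rng.normal(0.0, 1.0, size=(95, 3)),
                         rng.normal(6.0, 1.0, size=(5, 3))])  # a few active epochs

detector = EllipticEnvelope(contamination=0.01).fit(quiet_epochs)
is_outlier = detector.predict(test_epochs) == -1             # -1 marks outliers
print(f"epochs flagged as REM without atonia: {is_outlier.sum()} / {len(test_epochs)}")
```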

  11. Multiracial Facial Golden Ratio and Evaluation of Facial Appearance.

    Directory of Open Access Journals (Sweden)

    Mohammad Khursheed Alam

    Full Text Available This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among Malaysian population. This was a cross-sectional study with 286 randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with the mean age of 21.54 ± 1.56 (Age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05) but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction with mean score of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean score of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression; 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts. 1) Only 17.1% of Malaysian facial proportion conformed to the golden ratio, with majority of the population having short face (54.5%); 2) Facial index did not depend significantly on races; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between golden ratio and facial evaluation score among Malaysian population.
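    The short/ideal/long classification above compares the facial index (face height divided by face width) with the golden ratio of about 1.618. A minimal sketch of such a classification follows; the tolerance band and example measurements are illustrative assumptions, not the study's cut-offs.

```python
# Minimal sketch: classify a facial index (face height / face width) as short,
# ideal, or long relative to the golden ratio. The +/-5% tolerance band is an
# illustrative assumption, not the cut-off used in the study.
GOLDEN_RATIO = 1.618

def classify_facial_index(height_mm, width_mm, tolerance=0.05):
    index = height_mm / width_mm
    if index < GOLDEN_RATIO * (1 - tolerance):
        return index, "short"
    if index > GOLDEN_RATIO * (1 + tolerance):
        return index, "long"
    return index, "ideal"

print(classify_facial_index(height_mm=178.0, width_mm=118.0))  # ~1.51 -> "short"
```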

  12. Facial Expression Recognition using Multiclass Ensemble Least-Square Support Vector Machine

    Science.gov (United States)

    Lawi, Armin; Sya'Rani Machrizzandi, M.

    2018-03-01

    Facial expression is one of the behavioral characteristics of human beings. The use of a biometric technology system based on facial expression characteristics makes it possible to recognize a person’s mood or emotion. The basic components of a facial expression analysis system are face detection, face image extraction, facial classification, and facial expression recognition. This paper uses the Principal Component Analysis (PCA) algorithm to extract facial features with expression parameters, i.e., happy, sad, neutral, angry, fearful, and disgusted. Then a Multiclass Ensemble Least-Squares Support Vector Machine (MELS-SVM) is used for the facial expression classification process. The results of the MELS-SVM model, obtained from 185 different expression images of 10 persons, showed a high accuracy level of 99.998% using the RBF kernel.
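    A rough sketch of a PCA-plus-SVM expression classification pipeline of the kind described above, using an RBF kernel; random arrays stand in for flattened face images, and the ensemble least-squares SVM variant used in the paper is not reproduced here.

```python
# Hedged sketch of PCA feature extraction followed by an RBF-kernel SVM
# expression classifier. Random arrays stand in for flattened face images.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_images, image_pixels, n_expressions = 185, 64 * 64, 6

images = rng.random((n_images, image_pixels))             # flattened face images
labels = rng.integers(0, n_expressions, size=n_images)    # expression labels

x_train, x_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.25, random_state=0)

model = make_pipeline(PCA(n_components=50), SVC(kernel="rbf"))
model.fit(x_train, y_train)
print(f"test accuracy: {model.score(x_test, y_test):.3f}")
```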

  13. Caracterização funcional da mímica facial na paralisia facial em trauma de face: relato de caso clínico Functional characterization of facial mimicry in facial paralysis of face trauma: a clinical case report

    Directory of Open Access Journals (Sweden)

    Leila Bonfim de Jesus

    2012-10-01

    analysis was carried out through the anamnesis and the House-Brackmann facial paralysis grading scale. RESULTS: in the evaluation of the facial paralysis at rest, we found on the right side (the injured one): deviation of the lip commissure, deviation of the philtrum, a more elevated nostril and a more open eye. In movement, also on the side of the injury, we observed: effacement of the frontal wrinkles, incompetence in ocular closure, including complete closure, absence of elevation of the nostril, a more pronounced nasolabial fold, lip protrusion deviated to this side, little lip retraction, destruction of the inferior lip, an elevated lip commissure, deviation of the philtrum, and a reduced capacity to inflate the cheeks. The patient presented synkinesis of the eyes and lips and contraction with hypertonia of the frontalis, procerus, levator of the nasal ala, risorius, zygomaticus major, zygomaticus minor, levator of the superior lip, depressor of the inferior lip, and mentalis on the side of the lesion; the fracture occurred on the right condyle and the patient reported orofacial pain when sleeping and chewing on the injured side. CONCLUSION: the facial nerve lesion associated with the facial trauma caused the alteration of the facial mimicry on the right side and generated disfigurement and disturbances in the act of chewing.

  14. Targeted presurgical decompensation in patients with yaw-dependent facial asymmetry.

    Science.gov (United States)

    Kim, Kyung-A; Lee, Ji-Won; Park, Jeong-Ho; Kim, Byoung-Ho; Ahn, Hyo-Won; Kim, Su-Jung

    2017-05-01

    Facial asymmetry can be classified into the rolling-dominant type (R-type), translation-dominant type (T-type), yawing-dominant type (Y-type), and atypical type (A-type) based on the distorted skeletal components that cause canting, translation, and yawing of the maxilla and/or mandible. Each facial asymmetry type represents dentoalveolar compensations in three dimensions that correspond to the main skeletal discrepancies. To obtain sufficient surgical correction, it is necessary to analyze the main skeletal discrepancies contributing to the facial asymmetry and then the skeletal-dental relationships in the maxilla and mandible separately. Particularly in cases of facial asymmetry accompanied by mandibular yawing, it is not simple to establish pre-surgical goals of tooth movement since chin deviation and posterior gonial prominence can be either aggravated or compromised according to the direction of mandibular yawing. Thus, strategic dentoalveolar decompensations targeting the real basal skeletal discrepancies should be performed during presurgical orthodontic treatment to allow for sufficient skeletal correction with stability. In this report, we document targeted decompensation of two asymmetry patients focusing on more complicated yaw-dependent types than others: Y-type and A-type. This may suggest a clinical guideline on the targeted decompensation in patient with different types of facial asymmetries.

  15. Affective Body Movements (for Robots) Across Cultures

    DEFF Research Database (Denmark)

    Rehm, Matthias

    2018-01-01

    Humans are very good in expressing and interpreting emotions from a variety of different sources like voice, facial expression, or body movements. In this article, we concentrate on body movements and show that those are not only a source of affective information but might also have a different interpretation in different cultures. To cope with these multiple viewpoints in generating and interpreting body movements in robots, we suggest a methodological approach that takes the cultural background of the developer and the user into account during the development process. We exemplify this approach with a study on creating an affective knocking movement for a humanoid robot and give details about a co-creation experiment for collecting a cross-cultural database on affective body movements and about the probabilistic model derived from this data.

  16. Multiracial Facial Golden Ratio and Evaluation of Facial Appearance.

    Science.gov (United States)

    Alam, Mohammad Khursheed; Mohd Noor, Nor Farid; Basri, Rehana; Yew, Tan Fo; Wen, Tay Hui

    2015-01-01

    This study aimed to investigate the association of facial proportion and its relation to the golden ratio with the evaluation of facial appearance among Malaysian population. This was a cross-sectional study with 286 randomly selected from Universiti Sains Malaysia (USM) Health Campus students (150 females and 136 males; 100 Malaysian Chinese, 100 Malaysian Malay and 86 Malaysian Indian), with the mean age of 21.54 ± 1.56 (Age range, 18-25). Facial indices obtained from direct facial measurements were used for the classification of facial shape into short, ideal and long. A validated structured questionnaire was used to assess subjects' evaluation of their own facial appearance. The mean facial indices of Malaysian Indian (MI), Malaysian Chinese (MC) and Malaysian Malay (MM) were 1.59 ± 0.19, 1.57 ± 0.25 and 1.54 ± 0.23 respectively. Only MC showed significant sexual dimorphism in facial index (P = 0.047; P<0.05) but no significant difference was found between races. Out of the 286 subjects, 49 (17.1%) were of ideal facial shape, 156 (54.5%) short and 81 (28.3%) long. The facial evaluation questionnaire showed that MC had the lowest satisfaction with mean score of 2.18 ± 0.97 for overall impression and 2.15 ± 1.04 for facial parts, compared to MM and MI, with mean score of 1.80 ± 0.97 and 1.64 ± 0.74 respectively for overall impression; 1.75 ± 0.95 and 1.70 ± 0.83 respectively for facial parts. 1) Only 17.1% of Malaysian facial proportion conformed to the golden ratio, with majority of the population having short face (54.5%); 2) Facial index did not depend significantly on races; 3) Significant sexual dimorphism was shown among Malaysian Chinese; 4) All three races are generally satisfied with their own facial appearance; 5) No significant association was found between golden ratio and facial evaluation score among Malaysian population.

  17. Spatiotemporal Analysis of RGB-D-T Facial Images for Multimodal Pain Level Recognition

    DEFF Research Database (Denmark)

    Irani, Ramin; Nasrollahi, Kamal; Oliu Simon, Marc

    2015-01-01

    facial images for pain detection and pain intensity level recognition. For this purpose, we extract energies released by facial pixels using a spatiotemporal filter. Experiments on a group of 12 elderly people applying the multimodal approach show that the proposed method successfully detects pain...

  18. A successful double-layer facial nerve repair: A case presentation

    Directory of Open Access Journals (Sweden)

    Mehmet Dadaci

    2015-04-01

    Full Text Available The best method to repair the facial nerve is to perform the primary repair soon after the injury, without any tension in the nerve ends. We present the case of a patient who had a full-thickness facial nerve cut at two different levels. The patient underwent primary repair, recovered almost completely in the fourth postoperative month, and had full movement in the mimic muscles. Despite lower success rates in double-level cuts, performing appropriate primary repair at an appropriate time can reverse functional losses at early stages and lead to recovery without any complications. [Hand Microsurg 2015; 4(1): 24-27]

  19. Real-time movement detection and analysis for video surveillance applications

    Science.gov (United States)

    Hueber, Nicolas; Hennequin, Christophe; Raymond, Pierre; Moeglin, Jean-Pierre

    2014-06-01

    Pedestrian movement along critical infrastructures like pipes, railways or highways is of major interest in surveillance applications, as is pedestrian behavior in urban environments. The goal is to anticipate illicit or dangerous human activities. For this purpose, we propose an all-in-one small autonomous system that delivers high-level statistics and reports alerts in specific cases. This situational awareness project leads us to manage the scene efficiently by performing movement analysis. A dynamic background extraction algorithm is developed to reach the required degree of robustness against natural and urban environment perturbations and also to match the embedded implementation constraints. When changes are detected in the scene, specific patterns are applied to detect and highlight relevant movements. Depending on the application, specific descriptors can be extracted and fused in order to reach a high level of interpretation. In this paper, our approach is applied to two operational use cases: pedestrian urban statistics and railway surveillance. In the first case, a grid of prototypes is deployed over a city centre to collect pedestrian movement statistics up to a macroscopic level of analysis. The results demonstrate the relevance of the delivered information; in particular, the flow density map highlights pedestrian preferential paths along the streets. In the second case, one prototype is set next to high-speed train tracks to secure the area. The results exhibit a low false alarm rate and assess our approach of a large sensor network for delivering a precise operational picture without overwhelming a supervisor.
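    A hedged sketch of movement detection by dynamic background subtraction, in the spirit of the system described above but using OpenCV's generic MOG2 model rather than the authors' algorithm; the video path and area threshold are assumptions.

```python
# Hedged sketch of movement detection via background subtraction with OpenCV's
# MOG2 model, followed by contour extraction to flag moving regions.
import cv2

capture = cv2.VideoCapture("surveillance.mp4")   # assumed input video path
background = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=True)

while True:
    ok, frame = capture.read()
    if not ok:
        break
    mask = background.apply(frame)               # foreground/motion mask
    mask = cv2.medianBlur(mask, 5)               # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    movers = [c for c in contours if cv2.contourArea(c) > 500]  # assumed area threshold
    if movers:
        print(f"frame with {len(movers)} moving region(s)")

capture.release()
```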

  20. A Robust Shape Reconstruction Method for Facial Feature Point Detection

    Directory of Open Access Journals (Sweden)

    Shuqiu Tan

    2017-01-01

    Full Text Available Facial feature point detection has been receiving great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for the face alignment problem. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.
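    The sparse-reconstruction idea above can be illustrated by learning a shape dictionary, sparse-coding a new shape over it, and rebuilding the shape from a few atoms. The sketch below uses generic scikit-learn dictionary learning on random placeholder shapes; it is not the coupled-dictionary method of the paper.

```python
# Hedged sketch of reconstructing a face shape from sparse coefficients over a
# learned shape dictionary. Training shapes are random placeholders standing in
# for stacked (x, y) landmark coordinates.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(4)
n_shapes, n_landmarks = 300, 68
training_shapes = rng.random((n_shapes, 2 * n_landmarks))    # (x, y) per landmark

learner = DictionaryLearning(n_components=40, alpha=1.0, random_state=0)
learner.fit(training_shapes)
dictionary = learner.components_                              # 40 shape atoms

# Sparse-code a new shape and rebuild it from a few dictionary atoms.
new_shape = rng.random((1, 2 * n_landmarks))
sparse_code = sparse_encode(new_shape, dictionary, algorithm="lasso_lars", alpha=0.1)
reconstruction = sparse_code @ dictionary
print("reconstruction error:", float(np.linalg.norm(new_shape - reconstruction)))
```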

  1. Restoration of orbicularis oculi muscle function in rabbits with peripheral facial paralysis via an implantable artificial facial nerve system.

    Science.gov (United States)

    Sun, Yajing; Jin, Cheng; Li, Keyong; Zhang, Qunfeng; Geng, Liang; Liu, Xundao; Zhang, Yi

    2017-12-01

    The purpose of the present study was to restore orbicularis oculi muscle function using the implantable artificial facial nerve system (IAFNS). The in vivo part of the IAFNS was implanted into 12 rabbits that were facially paralyzed on the right side of the face to restore the function of the orbicularis oculi muscle, which was indicated by closure of the paralyzed eye when the contralateral side was closed. Wireless communication links were established between the in vivo part (the processing chip and microelectrode) and the external part (System Controller program) of the system, which were used to set the working parameters and indicate the working state of the processing chip and microelectrode implanted in the body. A disturbance field strength test of the IAFNS processing chip was performed in a magnetic field dark room to test its electromagnetic radiation safety. Test distances investigated were 0, 1, 3 and 10 m, and levels of radiation intensity were evaluated in the horizontal and vertical planes. Anti-interference experiments were performed to test the stability of the processing chip under the interference of electromagnetic radiation. The fully implanted IAFNS was run for 5 h per day for 30 consecutive days to evaluate the accuracy and precision as well as the long-term stability and effectiveness of wireless communication. The stimulus intensity (range, 0-8 mA) was adjusted every 3 days to determine the minimum stimulation intensity that could elicit movement on the paralyzed side. The effective stimulation rate was also tested by comparing the number of eye-closure movements on both sides. The results of the present study indicated that the IAFNS could rebuild the reflex arc, inducing the experimental rabbits to close the eye of the paralyzed side. The System Controller program was able to reflect the in vivo part of the artificial facial nerve system in real-time and adjust the working pattern, stimulation intensity and frequency, range of wave

  2. "You Should Have Seen the Look on Your Face…": Self-awareness of Facial Expressions.

    Science.gov (United States)

    Qu, Fangbing; Yan, Wen-Jing; Chen, Yu-Hsin; Li, Kaiyun; Zhang, Hui; Fu, Xiaolan

    2017-01-01

    The awareness of facial expressions allows one to better understand, predict, and regulate his/her states to adapt to different social situations. The present research investigated individuals' awareness of their own facial expressions and the influence of the duration and intensity of expressions in two self-reference modalities, a real-time condition and a video-review condition. The participants were instructed to respond as soon as they became aware of any facial movements. The results revealed that awareness rates were 57.79% in the real-time condition and 75.92% in the video-review condition. The awareness rate was influenced by the intensity and (or) the duration. The intensity thresholds for individuals to become aware of their own facial expressions were calculated using logistic regression models. The results of Generalized Estimating Equations (GEE) revealed that video-review awareness was a significant predictor of real-time awareness. These findings extend understandings of human facial expression self-awareness in two modalities.
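
    As a rough illustration of how an intensity threshold can be read off a fitted logistic model, the sketch below fits awareness (yes/no) against expression intensity on synthetic data and reports the intensity at which predicted awareness reaches 50%; the data, variable names, and the 50% criterion are assumptions for demonstration, not the study's exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for per-expression observations: intensity of a facial
# movement (arbitrary units) and whether the participant reported noticing it.
rng = np.random.default_rng(42)
intensity = rng.uniform(0, 10, 300)
p_aware = 1 / (1 + np.exp(-(intensity - 4.0) * 1.2))     # true threshold ~ 4.0
aware = rng.binomial(1, p_aware)

model = LogisticRegression()
model.fit(intensity.reshape(-1, 1), aware)

# The 50%-awareness intensity is where the linear predictor is zero:
# b0 + b1 * x = 0  =>  x = -b0 / b1
b0, b1 = model.intercept_[0], model.coef_[0, 0]
print(f"estimated awareness threshold: {-b0 / b1:.2f}")
```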

  3. Traumatic facial nerve neuroma with facial palsy presenting in infancy.

    Science.gov (United States)

    Clark, James H; Burger, Peter C; Boahene, Derek Kofi; Niparko, John K

    2010-07-01

    To describe the management of traumatic neuroma of the facial nerve in a child and literature review. Sixteen-month-old male subject. Radiological imaging and surgery. Facial nerve function. The patient presented at 16 months with a right facial palsy and was found to have a right facial nerve traumatic neuroma. A transmastoid, middle fossa resection of the right facial nerve lesion was undertaken with a successful facial nerve-to-hypoglossal nerve anastomosis. The facial palsy improved postoperatively. A traumatic neuroma should be considered in an infant who presents with facial palsy, even in the absence of an obvious history of trauma. The treatment of such lesion is complex in any age group but especially in young children. Symptoms, age, lesion size, growth rate, and facial nerve function determine the appropriate management.

  4. Dynamic facial expressions of emotion transmit an evolving hierarchy of signals over time.

    Science.gov (United States)

    Jack, Rachael E; Garrod, Oliver G B; Schyns, Philippe G

    2014-01-20

    Designed by biological and social evolutionary pressures, facial expressions of emotion comprise specific facial movements to support a near-optimal system of signaling and decoding. Although highly dynamical, little is known about the form and function of facial expression temporal dynamics. Do facial expressions transmit diagnostic signals simultaneously to optimize categorization of the six classic emotions, or sequentially to support a more complex communication system of successive categorizations over time? Our data support the latter. Using a combination of perceptual expectation modeling, information theory, and Bayesian classifiers, we show that dynamic facial expressions of emotion transmit an evolving hierarchy of "biologically basic to socially specific" information over time. Early in the signaling dynamics, facial expressions systematically transmit few, biologically rooted face signals supporting the categorization of fewer elementary categories (e.g., approach/avoidance). Later transmissions comprise more complex signals that support categorization of a larger number of socially specific categories (i.e., the six classic emotions). Here, we show that dynamic facial expressions of emotion provide a sophisticated signaling system, questioning the widely accepted notion that emotion communication is comprised of six basic (i.e., psychologically irreducible) categories, and instead suggesting four. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Perception of facial profile attractiveness by a Saudi sample

    International Nuclear Information System (INIS)

    Talic, Nabeel; Alshakhs, Mohammad S.

    2008-01-01

    Previous studies have reported different levels of perception of attractiveness among different ethnicities and among groups with varying education levels on facial profile rating. To study the perception of facial profile attractiveness among Saudi dentists and lay-individuals, digital facial profile images with altered degrees of prognathism and retrognathism were presented to a sample of 60 Saudi dentists and 60 lay-persons with equal gender distribution. High reliability of repeated assessment of profile images was detected (ICC=0.982). A significant difference in perception of facial profile was found between genders (P<0.05) and among the groups with different education backgrounds (P<0.001). General agreement was established in both sample groups on the average facial profile being the most attractive and on the most retrognathic profile being the least attractive. (author)

  6. iFER: facial expression recognition using automatically selected geometric eye and eyebrow features

    Science.gov (United States)

    Oztel, Ismail; Yolcu, Gozde; Oz, Cemil; Kazan, Serap; Bunyak, Filiz

    2018-03-01

    Facial expressions have an important role in interpersonal communications and estimation of emotional states or intentions. Automatic recognition of facial expressions has led to many practical applications and has become one of the important topics in computer vision. We present a facial expression recognition system that relies on geometry-based features extracted from the eye and eyebrow regions of the face. The proposed system detects keypoints on frontal face images and forms a feature set using geometric relationships among groups of detected keypoints. The obtained feature set is refined and reduced using the sequential forward selection (SFS) algorithm and fed to a support vector machine classifier to recognize five facial expression classes. The proposed system, iFER (eye-eyebrow only facial expression recognition), is robust to lower face occlusions that may be caused by beards, mustaches, scarves, etc. and to lower face motion during speech production. Preliminary experiments on benchmark datasets produced promising results, outperforming previous facial expression recognition studies using partial face features and yielding results comparable to studies using whole face information, only slightly lower (by ~2.5%) than the best whole-face system while using only ~1/3 of the facial region.
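
    The pipeline of geometric features, sequential forward selection, and an SVM can be sketched as follows; the landmark-distance descriptor, the synthetic data, and the choice of ten retained features are assumptions for illustration rather than the paper's configuration.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def eye_eyebrow_features(landmarks):
    """Toy geometric descriptors from eye/eyebrow keypoints: all pairwise
    distances, normalised by an assumed inter-ocular distance (points 0 and 1)."""
    pts = landmarks.reshape(-1, 2)
    iod = np.linalg.norm(pts[0] - pts[1]) + 1e-6
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    iu = np.triu_indices(len(pts), k=1)
    return d[iu] / iod

# Synthetic stand-in data: 200 samples, 8 keypoints, 5 expression classes.
rng = np.random.default_rng(0)
X = np.stack([eye_eyebrow_features(rng.normal(0, 1, 16)) for _ in range(200)])
y = rng.integers(0, 5, 200)

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
selector = SequentialFeatureSelector(svm, n_features_to_select=10,
                                     direction="forward", cv=3)
selector.fit(X, y)                     # greedy forward feature selection
svm.fit(selector.transform(X), y)      # final classifier on selected features
print("selected feature indices:", np.flatnonzero(selector.get_support()))
```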

  7. Nonexplicit change detection in complex dynamic settings: what eye movements reveal.

    Science.gov (United States)

    Vachon, François; Vallières, Benoît R; Jones, Dylan M; Tremblay, Sébastien

    2012-12-01

    We employed a computer-controlled command-and-control (C2) simulation and recorded eye movements to examine the extent and nature of the inability to detect critical changes in dynamic displays when change detection is implicit (i.e., requires no explicit report) to the operator's task. Change blindness (the failure to notice significant changes to a visual scene) may have dire consequences on performance in C2 and surveillance operations. Participants performed a radar-based risk-assessment task involving multiple subtasks. Although participants were not required to explicitly report critical changes to the operational display, change detection was critical in informing decision making. Participants' eye movements were used as an index of visual attention across the display. Nonfixated (i.e., unattended) changes were more likely to be missed than were fixated (i.e., attended) changes, supporting the idea that focused attention is necessary for conscious change detection. The finding of significant pupil dilation for changes undetected but fixated suggests that attended changes can nonetheless be missed because of a failure of attentional processes. Change blindness in complex dynamic displays takes the form of failures in establishing task-appropriate patterns of attentional allocation. These findings have implications for the design of change-detection support tools for dynamic displays and work procedures in C2 and surveillance.

  8. Nuclear magnetic resonance imaging in a case of facial myokymia with multiple sclerosis

    International Nuclear Information System (INIS)

    Kojima, Shigeyuki; Yagishita, Toshiyuki; Kita, Kohei; Hirayama, Keizo; Ikehira, Hiroo; Fukuda, Nobuo; Tateno, Yukio.

    1985-01-01

    A 59-year-old female with facial myokymia and multiple sclerosis is reported. In this case, facial myokymia appeared at the same time as the first attack of multiple sclerosis, in association with paroxysmal pain and dysesthesia of the neck, painful tonic seizures of the right upper and lower extremities, and cervical transverse myelopathy. The facial myokymia consisted of grossly visible, continuous, fine and worm-like movement, which often began in the area of the left orbicularis oculi and spread to the other facial muscles on one side. Electromyographic studies revealed grouping of motor units and continuous spontaneous rhythmic discharges in the left orbicularis oris suggesting facial myokymia, but there were no abnormalities on voluntary contraction. Sometimes doublet or multiplet patterns occurred, while at other times the bursts were of single motor potentials. The respective frequencies were 3-4/sec and 40-50/sec. There was no evidence of fibrillation. The facial myokymia disappeared after 4-8 weeks of administration of prednisolone and did not recur. In the remission stage after disappearance of the facial myokymia, nuclear magnetic resonance (NMR) imaging by the inversion recovery method demonstrated a low-intensity demyelinated plaque in the left lateral tegmentum of the inferior pons, which was responsible for the facial myokymia, but X-ray computed tomography revealed no pathological findings. The demyelinated plaque demonstrated by NMR imaging appeared to be located in the infranuclear area of the facial nerve nucleus and to involve the intramedullary root. (J.P.N.)

  9. Nuclear magnetic resonance imaging in a case of facial myokymia with multiple sclerosis

    Energy Technology Data Exchange (ETDEWEB)

    Kojima, Shigeyuki; Yagishita, Toshiyuki; Kita, Kohei; Hirayama, Keizo; Ikehira, Hiroo; Fukuda, Nobuo; Tateno, Yukio

    1985-06-01

    A 59-year-old female with facial myokymia and multiple sclerosis is reported. In this case, facial myokymia appeared at the same time as the first attack of multiple sclerosis, in association with paroxysmal pain and dysesthesia of the neck, painful tonic seizures of the right upper and lower extremities, and cervical transverse myelopathy. The facial myokymia consisted of grossly visible, continuous, fine and worm-like movement, which often began in the area of the left orbicularis oculi and spread to the other facial muscles on one side. Electromyographic studies revealed grouping of motor units and continuous spontaneous rhythmic discharges in the left orbicularis oris suggesting facial myokymia, but there were no abnormalities on voluntary contraction. Sometimes doublet or multiplet patterns occurred, while at other times the bursts were of single motor potentials. The respective frequencies were 3-4/sec and 40-50/sec. There was no evidence of fibrillation. The facial myokymia disappeared after 4-8 weeks of administration of prednisolone and did not recur. In the remission stage after disappearance of the facial myokymia, nuclear magnetic resonance (NMR) imaging by the inversion recovery method demonstrated a low-intensity demyelinated plaque in the left lateral tegmentum of the inferior pons, which was responsible for the facial myokymia, but X-ray computed tomography revealed no pathological findings. The demyelinated plaque demonstrated by NMR imaging appeared to be located in the infranuclear area of the facial nerve nucleus and to involve the intramedullary root.

  10. Colesteatoma causando paralisia facial Cholesteatoma causing facial paralysis

    Directory of Open Access Journals (Sweden)

    José Ricardo Gurgel Testa

    2003-10-01

    Full Text Available Facial paralysis caused by cholesteatoma is uncommon. The nerve segments most frequently involved are the tympanic portion and the region of the second genu. When the cholesteatomatous lesion spreads to the anterior epitympanum, the geniculate ganglion is the segment of the facial nerve most subject to injury. The pathogenesis may be related to compression of the nerve by the cholesteatoma followed by reduction of its vascular supply, as well as to the possible action of neurotoxic substances produced by the tumor matrix or by the bacteria contained within it. AIM: To evaluate the incidence, clinical characteristics, and treatment of facial paralysis caused by cholesteatomatous lesions. STUDY DESIGN: Retrospective clinical study. MATERIAL AND METHOD: Retrospective study of ten cases of facial paralysis due to cholesteatoma, selected from a review of 206 facial nerve decompressions of different etiologies performed at UNIFESP-EPM during the last ten years. RESULTS: The incidence of facial paralysis due to cholesteatoma in this study was 4.85%, with a female predominance (60%). The mean age of the patients was 39 years. The duration and initial degree of the paralysis, together with the extent of the lesion, were important for the functional recovery of the facial nerve. CONCLUSION: Early surgical treatment is essential to achieve a more adequate functional result. In cases of rupture or intense fibrosis of the nervous tissue, nerve grafting (great auricular/sural) and/or hypoglossal-facial anastomosis may be suggested.

  11. Are event-related potentials to dynamic facial expressions of emotion related to individual differences in the accuracy of processing facial expressions and identity?

    Science.gov (United States)

    Recio, Guillermo; Wilhelm, Oliver; Sommer, Werner; Hildebrandt, Andrea

    2017-04-01

    Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

  12. Importance of the brow in facial expressiveness during human communication.

    Science.gov (United States)

    Neely, John Gail; Lisker, Paul; Drapekin, Jesse

    2014-03-01

    The objective of this study was to evaluate laterality and upper/lower face dominance of expressiveness during prescribed speech using a unique validated image subtraction system capable of sensitive and reliable measurement of facial surface deformation. Observations and experiments of central control of facial expressions during speech and social utterances in humans and animals suggest that the right mouth moves more than the left during nonemotional speech. However, proficient lip readers seem to attend to the whole face to interpret meaning from expressed facial cues, also implicating a horizontal (upper face-lower face) axis. Prospective experimental design. Experimental maneuver: recited speech; outcome measure: image-subtraction strength-duration curve amplitude. Thirty normal human adults were evaluated during memorized nonemotional recitation of 2 short sentences. Facial movements were assessed using a video-image subtraction system capable of simultaneously measuring upper and lower specific areas of each hemiface. The results demonstrate that both axes influence facial expressiveness in human communication; however, the horizontal axis (upper versus lower face) would appear dominant, especially during what would appear to be spontaneous breakthrough unplanned expressiveness. These data are congruent with the concept that the left cerebral hemisphere has control over nonemotionally stimulated speech; however, the multisynaptic brainstem extrapyramidal pathways may override hemiface laterality and preferentially take control of the upper face. Additionally, these data demonstrate the importance of the often-ignored brow in facial expressiveness. Experimental study. EBM levels not applicable.

  13. Structural and temporal requirements of Wnt/PCP protein Vangl2 function for convergence and extension movements and facial branchiomotor neuron migration in zebrafish.

    Science.gov (United States)

    Pan, Xiufang; Sittaramane, Vinoth; Gurung, Suman; Chandrasekhar, Anand

    2014-02-01

    Van gogh-like 2 (Vangl2), a core component of the Wnt/planar cell polarity (PCP) signaling pathway, is a four-pass transmembrane protein with N-terminal and C-terminal domains located in the cytosol, and is structurally conserved from flies to mammals. In vertebrates, Vangl2 plays an essential role in convergence and extension (CE) movements during gastrulation and in facial branchiomotor (FBM) neuron migration in the hindbrain. However, the roles of specific Vangl2 domains, of membrane association, and of specific extracellular and intracellular motifs have not been examined, especially in the context of FBM neuron migration. Through heat shock-inducible expression of various Vangl2 transgenes, we found that membrane associated functions of the N-terminal and C-terminal domains of Vangl2 are involved in regulating FBM neuron migration. Importantly, through temperature shift experiments, we found that the critical period for Vangl2 function coincides with the initial stages of FBM neuron migration out of rhombomere 4. Intriguingly, we have also uncovered a putative nuclear localization motif in the C-terminal domain that may play a role in regulating CE movements. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  14. What does magnetic resonance imaging add to the prenatal ultrasound diagnosis of facial clefts?

    Science.gov (United States)

    Mailáth-Pokorny, M; Worda, C; Krampl-Bettelheim, E; Watzinger, F; Brugger, P C; Prayer, D

    2010-10-01

    Ultrasound is the modality of choice for prenatal detection of cleft lip and palate. Because its accuracy in detecting facial clefts, especially isolated clefts of the secondary palate, can be limited, magnetic resonance imaging (MRI) is used as an additional method for assessing the fetus. The aim of this study was to investigate the role of fetal MRI in the prenatal diagnosis of facial clefts. Thirty-four pregnant women with a mean gestational age of 26 (range, 19-34) weeks underwent in utero MRI, after ultrasound examination had identified either a facial cleft (n = 29) or another suspected malformation (micrognathia (n = 1), cardiac defect (n = 1), brain anomaly (n = 2) or diaphragmatic hernia (n = 1)). The facial cleft was classified postnatally and the diagnoses were compared with the previous ultrasound findings. There were 11 (32.4%) cases with cleft of the primary palate alone, 20 (58.8%) clefts of the primary and secondary palate and three (8.8%) isolated clefts of the secondary palate. In all cases the primary and secondary palate were visualized successfully with MRI. Ultrasound imaging could not detect five (14.7%) facial clefts and misclassified 15 (44.1%) facial clefts. The MRI classification correlated with the postnatal/postmortem diagnosis. In our hands MRI allows detailed prenatal evaluation of the primary and secondary palate. By demonstrating involvement of the palate, MRI provides better detection and classification of facial clefts than does ultrasound alone. Copyright © 2010 ISUOG. Published by John Wiley & Sons, Ltd.

  15. Effects of Face and Background Color on Facial Expression Perception

    Directory of Open Access Journals (Sweden)

    Tetsuto Minami

    2018-06-01

    Full Text Available Detecting others’ emotional states from their faces is an essential component of successful social interaction. However, the ability to perceive emotional expressions is reported to be modulated by a number of factors. We have previously found that facial color modulates the judgment of facial expression, while another study has shown that background color plays a modulatory role. Therefore, in this study, we directly compared the effects of face and background color on facial expression judgment within a single experiment. Fear-to-anger morphed faces were presented in face and background color conditions. Our results showed that judgments of facial expressions were influenced by both face and background color. However, facial color effects were significantly greater than background color effects, although the color saturation of faces was lower compared to background colors. These results suggest that facial color is intimately related to the judgment of facial expression, over and above the influence of simple color.

  16. When the bell tolls on Bell's palsy: finding occult malignancy in acute-onset facial paralysis.

    Science.gov (United States)

    Quesnel, Alicia M; Lindsay, Robin W; Hadlock, Tessa A

    2010-01-01

    This study reports 4 cases of occult parotid malignancy presenting with sudden-onset facial paralysis to demonstrate that failure to regain tone 6 months after onset distinguishes these patients from Bell's palsy patients with delayed recovery and to propose a diagnostic algorithm for this subset of patients. A case series of 4 patients with occult parotid malignancies presenting with acute-onset unilateral facial paralysis is reported. Initial imaging on all 4 patients did not demonstrate a parotid mass. Diagnostic delays ranged from 7 to 36 months from time of onset of facial paralysis to time of diagnosis of parotid malignancy. Additional physical examination findings, especially failure to regain tone, as well as properly protocolled radiologic studies reviewed with dedicated head and neck radiologists, were helpful in arriving at the diagnosis. An algorithm to minimize diagnostic delays in this subset of acute facial paralysis patients is presented. Careful attention to facial tone, in addition to movement, is important in the diagnostic evaluation of acute-onset facial paralysis. Copyright 2010 Elsevier Inc. All rights reserved.

  17. Movement and respiration detection using statistical properties of the FMCW radar signal

    KAUST Repository

    Kiuru, Tero

    2016-07-26

    This paper presents a 24 GHz FMCW radar system for detection of movement and respiration using changes in the statistical properties of the received radar signal, both amplitude and phase. We present the hardware and software segments of the radar system as well as algorithms with measurement results for two distinct use-cases: 1. FMCW radar as a respiration monitor and 2. a dual-use of the same radar system for smart lighting and intrusion detection. By using changes in the statistical properties of the signal for detection, several system parameters can be relaxed, including, for example, pulse repetition rate, power consumption, computational load, processor speed, and memory space. We also demonstrate that the capability to switch between received signal strength and phase difference enables dual-use cases, with one requiring extreme sensitivity to movement and the other robustness against small sources of interference. © 2016 IEEE.
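
    A toy version of statistics-based detection can be sketched as follows: compare the spread of amplitude and phase in each window of complex baseband samples against a baseline window. The baseline choice, the thresholds, and the synthetic signal are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

def detect_activity(iq, win=64, amp_k=4.0, phase_k=4.0):
    """Flag windows whose amplitude or phase statistics deviate from a baseline.

    iq: complex baseband samples from one range bin.
    Returns indices of windows judged to contain movement or respiration.
    """
    amp = np.abs(iq)
    phase = np.unwrap(np.angle(iq))
    n = len(iq) // win
    # Baseline spread estimated from the first window (assumed empty scene).
    amp_ref = amp[:win].std() + 1e-9
    ph_ref = np.diff(phase[:win]).std() + 1e-9
    hits = []
    for k in range(1, n):
        a = amp[k * win:(k + 1) * win]
        p = np.diff(phase[k * win:(k + 1) * win])
        if a.std() > amp_k * amp_ref or p.std() > phase_k * ph_ref:
            hits.append(k)
    return hits

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t = np.arange(4096) / 100.0                      # 100 Hz sampling
    static = np.exp(1j * 0.2) + rng.normal(0, 0.01, 4096) * np.exp(1j * rng.uniform(0, 2 * np.pi, 4096))
    breathing = static + 0.2 * np.exp(1j * 2 * np.pi * 0.3 * t)   # 0.3 Hz modulation
    print(detect_activity(np.where(t < 20, static, breathing)))
```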

  18. Acromegaly determination using discriminant analysis of the three-dimensional facial classification in Taiwanese.

    Science.gov (United States)

    Wang, Ming-Hsu; Lin, Jen-Der; Chang, Chen-Nen; Chiou, Wen-Ko

    2017-08-01

    The aim of this study was to assess the size, angles and positional characteristics of facial anthropometry between "acromegalic" patients and control subjects. We also identify possible facial soft tissue measurements for generating discriminant functions toward acromegaly determination in males and females for acromegaly early self-awareness. This is a cross-sectional study. Subjects participating in this study included 70 patients diagnosed with acromegaly (35 females and 35 males) and 140 gender-matched control individuals. Three-dimensional facial images were collected via a camera system. Thirteen landmarks were selected. Eleven measurements from the three categories were selected and applied, including five frontal widths, three lateral depths and three lateral angular measurements. Descriptive analyses were conducted using means and standard deviations for each measurement. Univariate and multivariate discriminant function analyses were applied in order to calculate the accuracy of acromegaly detection. Patients with acromegaly exhibit soft-tissue facial enlargement and hypertrophy. Frontal widths as well as lateral depth and angle of facial changes were evident. The average accuracies of all functions for female patient detection ranged from 80.0-91.40%. The average accuracies of all functions for male patient detection were from 81.0-94.30%. The greatest anomaly observed was evidenced in the lateral angles, with greater enlargement of "nasofrontal" angles for females and greater "mentolabial" angles for males. Additionally, shapes of the lateral angles showed changes. The majority of the facial measurements proved dynamic for acromegaly patients; however, it is problematic to detect the disease with progressive body anthropometric changes. The discriminant functions of detection developed in this study could help patients, their families, medical practitioners and others to identify and track progressive facial change patterns before the possible patients
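
    The discriminant-function step described above can be illustrated with a linear discriminant analysis on the eleven measurements; the synthetic data below (and the class separation built into it) are assumptions for demonstration, not the study's measurements.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 11 facial measurements (5 frontal widths,
# 3 lateral depths, 3 lateral angles); real values would come from the
# 3-D facial scans described in the abstract.
rng = np.random.default_rng(7)
controls = rng.normal(loc=0.0, scale=1.0, size=(140, 11))
patients = rng.normal(loc=0.6, scale=1.0, size=(70, 11))    # enlarged measurements
X = np.vstack([controls, patients])
y = np.array([0] * 140 + [1] * 70)

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5, scoring="accuracy")
print(f"cross-validated detection accuracy: {acc.mean():.2%}")

lda.fit(X, y)
print("discriminant weights per measurement:", np.round(lda.coef_[0], 2))
```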

  19. Automatic facial animation parameters extraction in MPEG-4 visual communication

    Science.gov (United States)

    Yang, Chenggen; Gong, Wanwei; Yu, Lu

    2002-01-01

    Facial Animation Parameters (FAPs) are defined in MPEG-4 to animate a facial object. The algorithm proposed in this paper to extract these FAPs is applied to very low bit-rate video communication, in which the scene is composed of a head-and-shoulder object with a complex background. This paper addresses an algorithm to automatically extract all FAPs needed to animate a generic facial model and to estimate the 3D motion of the head from corresponding points. The proposed algorithm extracts the human facial region by color segmentation and intra-frame and inter-frame edge detection. Facial structure and the edge distribution of facial features, such as vertical and horizontal gradient histograms, are used to locate the facial feature regions. Parabola and circle deformable templates are employed to fit facial features and extract a part of the FAPs. A special data structure is proposed to describe the deformable templates in order to reduce the time consumed in computing energy functions. Another part of the FAPs, the 3D rigid head motion vectors, is estimated by a corresponding-points method. A 3D head wire-frame model provides facial semantic information for selection of proper corresponding points, which helps to increase the accuracy of 3D rigid object motion estimation.
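
    The first stages of such a pipeline (skin-color segmentation followed by edge detection and gradient projections) can be sketched with OpenCV as below; the YCrCb skin bounds, kernel size, and placeholder frame are assumptions for illustration, not the paper's parameters.

```python
import cv2
import numpy as np

def facial_region_mask(bgr):
    """Rough facial-region extraction by skin-colour thresholding in YCrCb,
    the kind of first step the abstract describes before edge analysis."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    # Commonly used skin-colour bounds on Cr/Cb; tune per camera and lighting.
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask

def feature_edges(gray, mask):
    """Edges inside the skin mask; their row/column projections (gradient
    histograms) can then be used to localise eyes and mouth."""
    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.bitwise_and(edges, edges, mask=mask)
    return edges, edges.sum(axis=1), edges.sum(axis=0)

if __name__ == "__main__":
    frame = np.full((240, 320, 3), 255, np.uint8)      # placeholder frame
    mask = facial_region_mask(frame)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges, row_proj, col_proj = feature_edges(gray, mask)
    print(mask.shape, edges.shape)
```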

  20. The role of great auricular-facial nerve neurorrhaphy in facial nerve damage.

    Science.gov (United States)

    Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo

    2015-01-01

    The facial nerve is easily damaged, and there are many methods for facial nerve reconstruction, such as facial nerve end-to-end anastomosis, great auricular nerve grafting, sural nerve grafting, or hypoglossal-facial nerve anastomosis. However, there are still few studies on great auricular-facial nerve neurorrhaphy. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Rat models of facial nerve cut (FC), facial nerve end-to-end anastomosis (FF), facial-great auricular neurorrhaphy (FG), and control (Ctrl) were established. Observation of apex nasi amesiality, electrophysiology, and immunofluorescence assays were employed to investigate the function and mechanism. On observation, apex nasi amesiality in the FG group was found to be partly recovered. Additionally, electrophysiology and immunofluorescence assays revealed that facial-great auricular neurorrhaphy could transfer nerve impulses and express AChR, with results better than facial nerve cut and worse than facial nerve end-to-end anastomosis. The present study indicated that great auricular-facial nerve neurorrhaphy is a substantial solution for facial lesion repair, as it efficiently prevents facial muscle atrophy by generating a neurotransmitter like ACh.

  1. Orientations for the successful categorization of facial expressions and their link with facial features.

    Science.gov (United States)

    Duncan, Justin; Gosselin, Frédéric; Cobarro, Charlène; Dugas, Gabrielle; Blais, Caroline; Fiset, Daniel

    2017-12-01

    Horizontal information was recently suggested to be crucial for face identification. In the present paper, we expand on this finding and investigate the role of orientations for all the basic facial expressions and neutrality. To this end, we developed orientation bubbles to quantify utilization of the orientation spectrum by the visual system in a facial expression categorization task. We first validated the procedure in Experiment 1 with a simple plaid-detection task. In Experiment 2, we used orientation bubbles to reveal the diagnostic (i.e., task-relevant) orientations for the basic facial expressions and neutrality. Overall, we found that horizontal information was highly diagnostic for expressions, surprise excepted. We also found that utilization of horizontal information strongly predicted performance level in this task. Despite the recent surge of research on horizontals, the link with local features remains unexplored. We were thus also interested in investigating this link. In Experiment 3, location bubbles were used to reveal the diagnostic features for the basic facial expressions. Crucially, Experiments 2 and 3 were run in parallel on the same participants, in an interleaved fashion. This way, we were able to correlate individual orientation and local diagnostic profiles. Our results indicate that individual differences in horizontal tuning are best predicted by utilization of the eyes.

  2. An improved algorithm for automatic detection of saccades in eye movement data and for calculating saccade parameters.

    Science.gov (United States)

    Behrens, F; Mackeben, M; Schröder-Preikschat, W

    2010-08-01

    This method for the analysis of time series of eye movements is a saccade-detection algorithm based on an earlier algorithm. It achieves substantial improvements by using an adaptive-threshold model instead of fixed thresholds and by using the eye-movement acceleration signal. This has four advantages: (1) Adaptive thresholds are calculated automatically from the preceding acceleration data for detecting the beginning of a saccade, and thresholds are modified during the saccade. (2) The monotonicity of the position signal during the saccade, together with the acceleration with respect to the thresholds, is used to reliably determine the end of the saccade. (3) This allows differentiation between saccades following the main sequence and non-main-sequence saccades. (4) Artifacts of various kinds can be detected and eliminated. The algorithm is demonstrated by applying it to human eye movement data (obtained by EOG) recorded during driving a car. A second demonstration of the algorithm detects microsleep episodes in eye movement data.
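
    A much simplified detector in the same spirit (an acceleration threshold adapted from the preceding second of data, with the saccade end taken where velocity returns to baseline) is sketched below; the parameter values, the offset rule, and the synthetic saccade are assumptions for illustration, not the published algorithm.

```python
import numpy as np

def detect_saccades(position, fs, k=3.0, min_dur=0.01):
    """Adaptive-threshold saccade detection on a 1-D gaze trace.

    position: gaze position (deg), fs: sampling rate (Hz).
    Returns a list of (onset, offset) sample indices.
    """
    vel = np.gradient(position) * fs           # deg/s
    acc = np.gradient(vel) * fs                # deg/s^2
    win = int(fs)                              # 1 s of preceding samples
    events, i = [], win
    while i < len(acc):
        acc_thresh = k * acc[i - win:i].std() + 1e-9
        vel_thresh = k * vel[i - win:i].std() + 1e-9
        if abs(acc[i]) > acc_thresh:           # candidate saccade onset
            j = i
            while j < len(vel) - 1 and abs(vel[j]) > vel_thresh:
                j += 1                         # offset: velocity back at baseline
            if (j - i) / fs >= min_dur:        # reject very short artifacts
                events.append((i, j))
            i = j + 1
        else:
            i += 1
    return events

if __name__ == "__main__":
    fs = 500.0
    t = np.arange(0, 3, 1 / fs)
    # Synthetic 10-degree saccade at t = 1.5 s with a ~40 ms sigmoid profile.
    pos = 10 / (1 + np.exp(-(t - 1.5) / 0.005))
    pos += np.random.default_rng(0).normal(0, 0.02, t.size)
    print(detect_saccades(pos, fs))
```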

  3. Clinical predictors of facial nerve outcome after translabyrinthine resection of acoustic neuromas.

    Science.gov (United States)

    Shamji, Mohammed F; Schramm, David R; Benoit, Brien G

    2007-01-01

    The translabyrinthine approach to acoustic neuroma resection offers excellent exposure for facial nerve dissection, with 95% preservation of anatomic continuity. Acceptable outcome in facially asymptomatic patients is reported at 64-90%, but transient postoperative deterioration often occurs. The objective of this study was to identify preoperative clinical presentation and intraoperative surgical findings that predispose patients to facial nerve dysfunction after acoustic neuroma surgery. The charts of 128 consecutive translabyrinthine patients were examined retrospectively to identify new clinical and intraoperative predictors of facial nerve outcome. Postoperative evaluation of patients to normal function or mild asymmetry upon close inspection (House-Brackmann grades of I or II) was defined as an acceptable outcome, with obvious asymmetry to no movement (grades III to VI) defined as unacceptable. Intraoperative nerve stimulation was performed in all cases, and clinical grading was performed by a single neurosurgeon in all cases. Among patients with no preoperative facial nerve deficit, 87% had an acceptable result. Small tumour size and low intraoperative stimulation thresholds (in mA) were reaffirmed as predictive of functional nerve preservation (P < 0.01). Additionally, preoperative tinnitus (P = 0.03), short duration of hearing loss (P < 0.01), and lack of subjective tumour adherence to the facial nerve (P = 0.02) were independently correlated with positive outcome. Our experience with the translabyrinthine approach reveals the previously unestablished associations of facial nerve outcome with presence of tinnitus and duration of hypoacusis. Independent predictors of tumour size and nerve stimulation thresholds were reaffirmed, and the subjective description of tumour adherence to the facial nerve making dissection more difficult appears to be important.

  4. MRI of the facial nerve in idiopathic facial palsy

    International Nuclear Information System (INIS)

    Saatci, I.; Sahintuerk, F.; Sennaroglu, L.; Boyvat, F.; Guersel, B.; Besim, A.

    1996-01-01

    The purpose of this prospective study was to define the enhancement pattern of the facial nerve in idiopathic facial paralysis (Bell's palsy) on magnetic resonance (MR) imaging with routine doses of gadolinium-DTPA (0.1 mmol/kg). Using a 0.5 T imager, 24 patients were examined with a mean interval of 13.7 days between the onset of symptoms and the MR examination. Contralateral asymptomatic facial nerves constituted the control group, and five of the normal facial nerves (20.8%) showed enhancement confined to the geniculate ganglion. Hence, contrast enhancement limited to the geniculate ganglion in the abnormal facial nerve (3 of 24) was regarded as equivocal. Not encountered in any of the normal facial nerves, enhancement of other segments alone or associated with geniculate ganglion enhancement was considered to be abnormal and was noted in 70.8% of the symptomatic facial nerves. The most frequently enhancing segments were the geniculate ganglion and the distal intracanalicular segment. (orig.)

  5. MRI of the facial nerve in idiopathic facial palsy

    Energy Technology Data Exchange (ETDEWEB)

    Saatci, I. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Sahintuerk, F. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Sennaroglu, L. [Dept. of Otolaryngology, Head and Neck Surgery, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Boyvat, F. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Guersel, B. [Dept. of Otolaryngology, Head and Neck Surgery, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey); Besim, A. [Dept. of Radiology, Hacettepe Univ., Hospital Sihhiye, Ankara (Turkey)

    1996-10-01

    The purpose of this prospective study was to define the enhancement pattern of the facial nerve in idiopathic facial paralysis (Bell's palsy) on magnetic resonance (MR) imaging with routine doses of gadolinium-DTPA (0.1 mmol/kg). Using a 0.5 T imager, 24 patients were examined with a mean interval of 13.7 days between the onset of symptoms and the MR examination. Contralateral asymptomatic facial nerves constituted the control group, and five of the normal facial nerves (20.8%) showed enhancement confined to the geniculate ganglion. Hence, contrast enhancement limited to the geniculate ganglion in the abnormal facial nerve (3 of 24) was regarded as equivocal. Not encountered in any of the normal facial nerves, enhancement of other segments alone or associated with geniculate ganglion enhancement was considered to be abnormal and was noted in 70.8% of the symptomatic facial nerves. The most frequently enhancing segments were the geniculate ganglion and the distal intracanalicular segment. (orig.)

  6. Facial paralysis

    Science.gov (United States)

    ... otherwise healthy, facial paralysis is often due to Bell palsy. This is a condition in which the facial ... speech, or occupational therapist. If facial paralysis from Bell palsy lasts for more than 6 to 12 months, ...

  7. Facial Sports Injuries

    Science.gov (United States)

    ... should receive immediate medical attention. Prevention of Facial Sports Injuries: The best way to treat facial sports ...

  8. Facial Cosmetic Surgery

    Science.gov (United States)

    ... to find out more. Facial Cosmetic Surgery: Extensive education and training in surgical procedures ...

  9. Deep learning the dynamic appearance and shape of facial action units

    OpenAIRE

    Jaiswal, Shashank; Valstar, Michel F.

    2016-01-01

    Spontaneous facial expression recognition under uncontrolled conditions is a hard task. It depends on multiple factors including shape, appearance and dynamics of the facial features, all of which are adversely affected by environmental noise and low intensity signals typical of such conditions. In this work, we present a novel approach to Facial Action Unit detection using a combination of Convolutional and Bi-directional Long Short-Term Memory Neural Networks (CNN-BLSTM), which jointly lear...
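
    A toy PyTorch model in the CNN + bidirectional-LSTM family named above is sketched below; the layer sizes, number of action units, and random input clips are assumptions for illustration, not the architecture described in the paper.

```python
import torch
import torch.nn as nn

class CnnBlstmAU(nn.Module):
    """Toy CNN + bidirectional LSTM for per-frame action-unit detection."""
    def __init__(self, n_aus=12, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
        )
        self.blstm = nn.LSTM(input_size=32 * 4 * 4, hidden_size=hidden,
                             batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_aus)      # one logit per AU per frame

    def forward(self, clips):                          # clips: (B, T, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.blstm(feats)                     # temporal context both ways
        return self.head(seq)                          # (B, T, n_aus) logits

if __name__ == "__main__":
    model = CnnBlstmAU()
    clips = torch.randn(2, 10, 1, 48, 48)              # 2 clips of 10 face crops
    logits = model(clips)
    print(logits.shape)                                # torch.Size([2, 10, 12])
    # Multi-label AU training would use nn.BCEWithLogitsLoss on these logits.
```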

  10. Nerve growth factor reduces apoptotic cell death in rat facial motor neurons after facial nerve injury.

    Science.gov (United States)

    Hui, Lian; Yuan, Jing; Ren, Zhong; Jiang, Xuejun

    2015-01-01

    To assess the effects of nerve growth factor (NGF) on motor neurons after induction of a facial nerve lesion, and to compare the effects of different routes of NGF injection on motor neuron survival. This study was carried out in the Department of Otolaryngology Head & Neck Surgery, China Medical University, Liaoning, China from October 2012 to March 2013. Male Wistar rats (n = 65) were randomly assigned into 4 groups: A) healthy controls; B) facial nerve lesion model + normal saline injection; C) facial nerve lesion model + NGF injection through the stylomastoid foramen; D) facial nerve lesion model + intraperitoneal injection of NGF. Apoptotic cell death was detected using the terminal deoxynucleotidyl transferase dUTP nick end-labeling assay. Expression of caspase-3 and p53 up-regulated modulator of apoptosis (PUMA) was determined by immunohistochemistry. Injection of NGF significantly reduced cell apoptosis, and also greatly decreased caspase-3 and PUMA expression in injured motor neurons. Group C exhibited better efficacy in preventing cellular apoptosis and decreasing caspase-3 and PUMA expression compared with group D. These results indicate that NGF reduces apoptotic cell death in motor neurons after facial nerve injury in rats. The NGF injected through the stylomastoid foramen demonstrated better protective efficacy than when injected intraperitoneally.

  11. Comparison of hemihypoglossal-facial nerve transposition with a cross-facial nerve graft and muscle transplant for the rehabilitation of facial paralysis using the facial clima method.

    Science.gov (United States)

    Hontanilla, Bernardo; Vila, Antonio

    2012-02-01

    To compare quantitatively the results obtained after hemihypoglossal nerve transposition and microvascular gracilis transfer associated with a cross facial nerve graft (CFNG) for reanimation of a paralysed face, 66 patients underwent hemihypoglossal transposition (n = 25) or microvascular gracilis transfer and CFNG (n = 41). The commissural displacement (CD) and commissural contraction velocity (CCV) in the two groups were compared using the system known as Facial clima. There was no inter-group variability between the groups (p > 0.10) in either variable. However, intra-group variability was detected between the affected and healthy side in the transposition group (p = 0.036 and p = 0.017, respectively). The transfer group had greater symmetry in displacement of the commissure (CD) and commissural contraction velocity (CCV) than the transposition group and patients were more satisfied. However, the transposition group had correct symmetry at rest but more asymmetry of CCV and CD when smiling.

  12. Effect of a Facial Muscle Exercise Device on Facial Rejuvenation.

    Science.gov (United States)

    Hwang, Ui-Jae; Kwon, Oh-Yun; Jung, Sung-Hoon; Ahn, Sun-Hee; Gwak, Gyeong-Tae

    2018-01-20

    The efficacy of facial muscle exercises (FMEs) for facial rejuvenation is controversial. In the majority of previous studies, nonquantitative assessment tools were used to assess the benefits of FMEs. This study examined the effectiveness of FMEs using a Pao (MTG, Nagoya, Japan) device to quantify facial rejuvenation. Fifty females were asked to perform FMEs using a Pao device for 30 seconds twice a day for 8 weeks. Facial muscle thickness and cross-sectional area were measured sonographically. Facial surface distance, surface area, and volumes were determined using a laser scanning system before and after FME. Facial muscle thickness, cross-sectional area, midfacial surface distances, jawline surface distance, and lower facial surface area and volume were compared bilaterally before and after FME using a paired Student t test. The cross-sectional areas of the zygomaticus major and digastric muscles increased significantly, while the midfacial surface distances and the jawline surface distances (right: P = 0.004, left: P = 0.003) decreased significantly after FME using the Pao device. The lower facial surface areas (right: P = 0.005, left: P = 0.006) and volumes (right: P = 0.001, left: P = 0.002) were also significantly reduced after FME using the Pao device. FME using the Pao device can increase facial muscle thickness and cross-sectional area, thus contributing to facial rejuvenation. © 2018 The American Society for Aesthetic Plastic Surgery, Inc.

  13. A case of neurofibromatosis developing facial paralysis following treatment with a gamma knife

    Energy Technology Data Exchange (ETDEWEB)

    Hosomi, Yoshikazu [Kobe Rosai Hospital (Japan)

    2002-12-01

    Neurofibromatosis is generally classified into types I and II: the latter may be life-threatening when the acoustic nerve tumor becomes enlarged. The author reports on a patient with bilateral acoustic nerve tumors, as well as large tumors at the neck and sacral regions, who developed facial nerve paralysis following surgery in which a gamma knife was used. The patient, a 30-year-old woman with no family history of neurofibromatosis, had a prominent neurofibroma at the pharyngeal region surgically removed when she was about 23. The procedure left her with dysfunctions of the vocal cords and lingual movements. At the age of 30 (March 2001), a tumor originating at S1 of the sacral nerve plexus was removed, which caused her leg movements to be restricted. Later, an acoustic nerve tumor was found to have enlarged; and in July 2001, the left acoustic nerve tumor was extirpated by using a gamma knife. Starting in early 2002, her left facial movements appeared to be compromised but during the follow-up observation period, she regained the movements. Patients with neurofibromatosis are often plagued by the development of multiple tumors and surgical sequelae. One is reminded that it is necessary to plan treatment with sufficient consideration given to quality of life (QOL) (including the problem of an acoustic nerve tumor that may develop in future) as well as the wishes of individual patients. (author)

  14. “You Should Have Seen the Look on Your Face…”: Self-awareness of Facial Expressions

    Science.gov (United States)

    Qu, Fangbing; Yan, Wen-Jing; Chen, Yu-Hsin; Li, Kaiyun; Zhang, Hui; Fu, Xiaolan

    2017-01-01

    The awareness of facial expressions allows one to better understand, predict, and regulate his/her states to adapt to different social situations. The present research investigated individuals’ awareness of their own facial expressions and the influence of the duration and intensity of expressions in two self-reference modalities, a real-time condition and a video-review condition. The participants were instructed to respond as soon as they became aware of any facial movements. The results revealed that awareness rates were 57.79% in the real-time condition and 75.92% in the video-review condition. The awareness rate was influenced by the intensity and (or) the duration. The intensity thresholds for individuals to become aware of their own facial expressions were calculated using logistic regression models. The results of Generalized Estimating Equations (GEE) revealed that video-review awareness was a significant predictor of real-time awareness. These findings extend understandings of human facial expression self-awareness in two modalities. PMID:28611703

  15. Research on facial expression simulation based on depth image

    Science.gov (United States)

    Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao

    2017-11-01

    Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction, and many other fields. Facial expressions are captured with a Kinect camera. The AAM algorithm, based on statistical information, is employed to detect and track faces. A 2D regression algorithm is applied to align the feature points. Among them, facial feature points are detected automatically while the 3D cartoon model feature points are marked manually. The aligned feature points are mapped by keyframe techniques. In order to improve the animation effect, non-feature points are interpolated based on empirical models. The mapping and interpolation are completed under the constraint of Bézier curves. Thus the feature points on the cartoon face model can be driven as the facial expression varies. In this way, real-time cartoon facial expression simulation is achieved. The experimental results show that the method proposed in this text can accurately simulate facial expressions. Finally, our method is compared with the previous method. Actual data prove that the implementation efficiency is greatly improved by our method.
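
    The Bézier-constrained interpolation of non-feature points can be illustrated with a small sketch: a non-feature vertex is moved between its two keyframe positions along a cubic Bézier arc instead of a straight linear blend. The control-point construction, `influence` parameter, and example coordinates are assumptions for demonstration, not the paper's empirical model.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bézier curve at parameter(s) t in [0, 1]."""
    t = np.asarray(t)[..., None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def interpolate_nonfeature(start, end, influence=0.3, steps=10):
    """Move a non-feature vertex from `start` to `end` along a Bézier arc whose
    control points bow the path slightly, instead of a straight linear blend."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    normal = np.array([-(end - start)[1], (end - start)[0]])   # 2-D perpendicular
    c1 = start + (end - start) / 3 + influence * normal
    c2 = start + 2 * (end - start) / 3 + influence * normal
    return cubic_bezier(start, c1, c2, end, np.linspace(0, 1, steps))

if __name__ == "__main__":
    # A cheek vertex following a mouth-corner feature point between two keyframes.
    path = interpolate_nonfeature(start=(10.0, 5.0), end=(14.0, 8.0))
    print(np.round(path, 2))
```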

  16. Facial Fractures.

    Science.gov (United States)

    Ricketts, Sophie; Gill, Hameet S; Fialkov, Jeffery A; Matic, Damir B; Antonyshyn, Oleh M

    2016-02-01

    After reading this article, the participant should be able to: 1. Demonstrate an understanding of some of the changes in aspects of facial fracture management. 2. Assess a patient presenting with facial fractures. 3. Understand indications and timing of surgery. 4. Recognize exposures of the craniomaxillofacial skeleton. 5. Identify methods for repair of typical facial fracture patterns. 6. Discuss the common complications seen with facial fractures. Restoration of the facial skeleton and associated soft tissues after trauma involves accurate clinical and radiologic assessment to effectively plan a management approach for these injuries. When surgical intervention is necessary, timing, exposure, sequencing, and execution of repair are all integral to achieving the best long-term outcomes for these patients.

  17. Effect of platelet rich plasma and fibrin sealant on facial nerve regeneration in a rat model.

    Science.gov (United States)

    Farrag, Tarik Y; Lehar, Mohamed; Verhaegen, Pauline; Carson, Kathryn A; Byrne, Patrick J

    2007-01-01

    To investigate the effects of platelet rich plasma (PRP) and fibrin sealant (FS) on facial nerve regeneration. Prospective, randomized, and controlled animal study. Experiments involved the transection and repair of the facial nerve of 49 male adult rats. Seven groups were created depending on the method of repair: suture; PRP (with/without suture); platelet poor plasma (PPP) (with/without suture); and FS (with/without suture) groups. Each method of repair was applied immediately after the nerve transection. The outcomes measured were: 1) observation of gross recovery of vibrissae movements within an 8-week period after nerve transection and repair using a 5-point scale, comparing the left (test) side with the right (control) side; 2) comparisons of facial nerve motor action potentials (MAP) recorded before and 8 weeks after nerve transection and repair, including both the transected and control (untreated) nerves; 3) histologic evaluation of axon counts and the area of the axons. Vibrissae movement observation: the inclusion of suturing resulted in overall improved outcomes. This was found for comparisons of the suture group with the PRP group, the PRP with/without suture groups, and the PPP with/without suture groups. The movement recovery of the suture group was significantly better than that of the FS group (P = .014). The recovery of function of the PRP groups was better than that of the FS groups, although this did not reach statistical significance (P = .09). Electrophysiologic testing: there was a significantly better performance of the suture group when compared with the PRP and PPP without suture groups in nerve conduction velocity. The best outcomes in these facial nerve axotomy models occurred when the nerve ends were sutured together. At the same time, the data demonstrated a measurable neurotrophic effect when PRP was present, with the most favorable results seen with PRP added to suture. There was an improved functional outcome with the use of PRP in comparison with FS or no bioactive

  18. Mimicking emotions: how 3-12-month-old infants use the facial expressions and eyes of a model.

    Science.gov (United States)

    Soussignan, Robert; Dollion, Nicolas; Schaal, Benoist; Durand, Karine; Reissland, Nadja; Baudouin, Jean-Yves

    2018-06-01

    While there is an extensive literature on the tendency to mimic emotional expressions in adults, it is unclear how this skill emerges and develops over time. Specifically, it is unclear whether infants mimic discrete emotion-related facial actions, whether their facial displays are moderated by contextual cues and whether infants' emotional mimicry is constrained by developmental changes in the ability to discriminate emotions. We therefore investigate these questions using Baby-FACS to code infants' facial displays and eye-movement tracking to examine infants' looking times at facial expressions. Three-, 7-, and 12-month-old participants were exposed to dynamic facial expressions (joy, anger, fear, disgust, sadness) of a virtual model which either looked at the infant or had an averted gaze. Infants did not match emotion-specific facial actions shown by the model, but they produced valence-congruent facial responses to the distinct expressions. Furthermore, only the 7- and 12-month-olds displayed negative responses to the model's negative expressions and they looked more at areas of the face recruiting facial actions involved in specific expressions. Our results suggest that valence-congruent expressions emerge in infancy during a period where the decoding of facial expressions becomes increasingly sensitive to the social signal value of emotions.

  19. The Prevalence of Cosmetic Facial Plastic Procedures among Facial Plastic Surgeons.

    Science.gov (United States)

    Moayer, Roxana; Sand, Jordan P; Han, Albert; Nabili, Vishad; Keller, Gregory S

    2018-04-01

    This is the first study to report on the prevalence of cosmetic facial plastic surgery use among facial plastic surgeons. The aim of this study is to determine the frequency with which facial plastic surgeons have cosmetic procedures themselves. A secondary aim is to determine whether trends in usage of cosmetic facial procedures among facial plastic surgeons are similar to those of nonsurgeons. The study design was an anonymous, five-question, Internet survey distributed via email, set in a single academic institution. Board-certified members of the American Academy of Facial Plastic and Reconstructive Surgery (AAFPRS) were included in this study. Self-reported history of cosmetic facial plastic surgery or minimally invasive procedures was recorded. The survey also queried participants for demographic data. A total of 216 members of the AAFPRS responded to the questionnaire. Ninety percent of respondents were male (n = 192) and 10.3% were female (n = 22). Thirty-three percent of respondents were aged 31 to 40 years (n = 70), 25% were aged 41 to 50 years (n = 53), 21.4% were aged 51 to 60 years (n = 46), and 20.5% were older than 60 years (n = 44). Thirty-six percent of respondents had a surgical cosmetic facial procedure and 75% had at least one minimally invasive cosmetic facial procedure. Facial plastic surgeons are frequent users of cosmetic facial plastic surgery. This finding may be due to access, knowledge base, values, or attitudes. By better understanding surgeon attitudes toward facial plastic surgery, we can improve communication with patients and delivery of care. This study is a first step in understanding the use of facial plastic procedures among facial plastic surgeons. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.

  20. Automated Analysis of Facial Cues from Videos as a Potential Method for Differentiating Stress and Boredom of Players in Games

    Directory of Open Access Journals (Sweden)

    Fernando Bevilacqua

    2018-01-01

    Facial analysis is a promising approach to detect emotions of players unobtrusively; however, approaches are commonly evaluated in contexts not related to games, or facial cues are derived from models not designed for the analysis of emotions during interactions with games. We present a method for automated analysis of facial cues from videos as a potential tool for detecting stress and boredom of players behaving naturally while playing games. Computer vision is used to automatically and unobtrusively extract 7 facial features aimed at detecting the activity of a set of facial muscles. Features are mainly based on the Euclidean distance of facial landmarks and do not rely on predefined facial expressions, training of a model, or the use of facial standards. An empirical evaluation was conducted on video recordings of an experiment involving games as emotion elicitation sources. Results show statistically significant differences in the values of facial features during boring and stressful periods of gameplay for 5 of the 7 features. We believe our approach is more user-tailored, convenient, and better suited for contexts involving games.
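
    The abstract does not list the 7 features themselves, so the sketch below only illustrates the general idea: distance-based features computed from tracked facial landmarks and normalized so they are comparable across faces. The landmark indices (68-point convention) and the particular distances are illustrative assumptions, not the authors' feature set.

    import numpy as np

    def euclidean(a, b):
        """Euclidean distance between two (x, y) landmark points."""
        return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

    def facial_features(landmarks):
        """landmarks: array of shape (68, 2) for one video frame."""
        lm = np.asarray(landmarks, dtype=float)
        # Normalization term: distance between the outer eye corners.
        inter_ocular = euclidean(lm[36], lm[45])
        return {
            "mouth_opening": euclidean(lm[62], lm[66]) / inter_ocular,  # inner lips
            "mouth_width":   euclidean(lm[48], lm[54]) / inter_ocular,  # lip corners
            "brow_lowering": euclidean(lm[19], lm[37]) / inter_ocular,  # brow to eye
            "eye_aperture":  euclidean(lm[37], lm[41]) / inter_ocular,  # upper/lower lid
        }

    # Per-period comparison, as described in the abstract, would then average
    # each feature over "boring" and "stressful" gameplay segments and test
    # the difference between the two sets of averages.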

  1. The first facial expression recognition and analysis challenge

    NARCIS (Netherlands)

    Valstar, Michel F.; Jiang, Bihan; Mehu, Marc; Pantic, Maja; Scherer, Klaus

    Automatic Facial Expression Recognition and Analysis, in particular FACS Action Unit (AU) detection and discrete emotion detection, has been an active topic in computer science for over two decades. Standardisation and comparability has come some way; for instance, there exist a number of commonly

  2. Promising Technique for Facial Nerve Reconstruction in Extended Parotidectomy

    Directory of Open Access Journals (Sweden)

    Ithzel Maria Villarreal

    2015-11-01

    Introduction: Malignant tumors of the parotid gland account for scarcely 5% of all head and neck tumors. Most of these neoplasms have a high tendency for recurrence, local infiltration, perineural extension, and metastasis. Although uncommon, these malignant tumors require complex surgical treatment, sometimes involving a total parotidectomy including a complete facial nerve resection. Severe functional and aesthetic facial defects are the result of a complete sacrifice or injury to isolated branches, becoming an uncomfortable distress for patients and a major challenge for reconstructive surgeons. Case Report: A case of a 54-year-old, systemically healthy male patient with a 4-month complaint of pain and swelling on the right side of the face is presented. The patient reported a rapid increase in the size of the lesion over the past 2 months. Imaging tests and histopathological analysis reported an adenoid cystic carcinoma. A complete parotidectomy was carried out, with an intraoperative finding of facial nerve infiltration requiring a second intervention for nerve and defect reconstruction. A free ALT flap with vascularized nerve grafts was the surgical choice. A 6-month follow-up showed partial facial movement recovery and the facial defect mended. Conclusion: It is of critical importance to restore function to patients with facial nerve injury. Vascularized nerve grafts, in many clinical and experimental studies, have been shown to result in better nerve regeneration than conventional non-vascularized nerve grafts. Nevertheless, there are factors that may affect the degree, speed, and regeneration rate regarding the free fasciocutaneous flap. In complex head and neck defects following a total parotidectomy, the extended free fasciocutaneous ALT (anterior-lateral thigh) flap with a vascularized nerve graft is ideally suited for the reconstruction of the injured site. Donor-site morbidity is low and additional surgical time is minimal compared with the time of a single ALT flap transfer.

  3. Promising Technique for Facial Nerve Reconstruction in Extended Parotidectomy.

    Science.gov (United States)

    Villarreal, Ithzel Maria; Rodríguez-Valiente, Antonio; Castelló, Jose Ramon; Górriz, Carmen; Montero, Oscar Alvarez; García-Berrocal, Jose Ramon

    2015-11-01

    Malignant tumors of the parotid gland account for scarcely 5% of all head and neck tumors. Most of these neoplasms have a high tendency for recurrence, local infiltration, perineural extension, and metastasis. Although uncommon, these malignant tumors require complex surgical treatment, sometimes involving a total parotidectomy including a complete facial nerve resection. Severe functional and aesthetic facial defects are the result of a complete sacrifice or injury to isolated branches, becoming an uncomfortable distress for patients and a major challenge for reconstructive surgeons. A case of a 54-year-old, systemically healthy male patient with a 4-month complaint of pain and swelling on the right side of the face is presented. The patient reported a rapid increase in the size of the lesion over the past 2 months. Imaging tests and histopathological analysis reported an adenoid cystic carcinoma. A complete parotidectomy was carried out, with an intraoperative finding of facial nerve infiltration requiring a second intervention for nerve and defect reconstruction. A free ALT flap with vascularized nerve grafts was the surgical choice. A 6-month follow-up showed partial facial movement recovery and the facial defect mended. It is of critical importance to restore function to patients with facial nerve injury. Vascularized nerve grafts, in many clinical and experimental studies, have been shown to result in better nerve regeneration than conventional non-vascularized nerve grafts. Nevertheless, there are factors that may affect the degree, speed, and regeneration rate regarding the free fasciocutaneous flap. In complex head and neck defects following a total parotidectomy, the extended free fasciocutaneous ALT (anterior-lateral thigh) flap with a vascularized nerve graft is ideally suited for the reconstruction of the injured site. Donor-site morbidity is low and additional surgical time is minimal compared with the time of a single ALT flap transfer.

  4. Looking beyond the face: a training to improve perceivers' impressions of people with facial paralysis.

    Science.gov (United States)

    Bogart, Kathleen R; Tickle-Degnen, Linda

    2015-02-01

    Healthcare providers and lay people alike tend to form inaccurate first impressions of people with facial movement disorders such as facial paralysis (FP) because of the natural tendency to base impressions on the face. This study tested the effectiveness of the first interpersonal sensitivity training for FP. Undergraduate participants were randomly assigned to one of two training conditions or an untrained control. Education raised awareness about FP symptoms and experiences and instructed participants to form their impressions based on cues from the body and voice rather than the face. Education+feedback added feedback about the correctness of participants' judgments. Subsequently, participants watched 30s video clips of people with FP and rated their extraversion. Participants' bias and accuracy in the two training conditions did not significantly differ, but they were significantly less biased than controls. Training did not improve the more challenging task of accurately detecting individual differences in extraversion. Educating people improves bias, but not accuracy, of impressions of people with FP. Information from the education condition could be delivered in a pamphlet to those likely to interact with this population such as healthcare providers and educators. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  5. The influence of different facial components on facial aesthetics.

    NARCIS (Netherlands)

    Faure, J.C.; Rieffe, C.; Maltha, J.C.

    2002-01-01

    Facial aesthetics have an important influence on social behaviour and perception in our society. The purpose of the present study was to evaluate the effect of facial symmetry and inter-ocular distance on the assessment of facial aesthetics, factors that are often suggested as major contributors to

  6. Gender and the capacity to identify facial emotional expressions

    Directory of Open Access Journals (Sweden)

    Carolina Baptista Menezes

    Recognizing emotional expressions is enabled by a fundamental sociocognitive mechanism of human nature. This study compared 114 women and 104 men on the identification of basic emotions in a recognition task that used faces culturally adapted and validated for the Brazilian context. It was also investigated whether gender differences in emotion recognition would vary according to different exposure times. Women were generally better at detecting facial expressions, but an interaction suggested that the female superiority was particularly observed for anger, disgust, and surprise; results did not change according to age or exposure time. However, regardless of sex, total accuracy improved as presentation times increased, but only fear and anger significantly differed between the presentation times. Hence, in addition to supporting the evolutionary hypothesis of female superiority in detecting facial expressions of emotions, these results show that recognition of facial expressions also depends on the time available to correctly identify an expression.

  7. The role of great auricular-facial nerve neurorrhaphy in facial nerve damage

    OpenAIRE

    Sun, Yan; Liu, Limei; Han, Yuechen; Xu, Lei; Zhang, Daogong; Wang, Haibo

    2015-01-01

    Background: The facial nerve is easily damaged, and there are many methods for facial nerve reconstruction, such as facial nerve end-to-end anastomosis, the great auricular nerve graft, the sural nerve graft, or hypoglossal-facial nerve anastomosis. However, there is still little study of great auricular-facial nerve neurorrhaphy. The aim of the present study was to identify the role of great auricular-facial nerve neurorrhaphy and its mechanism. Methods: Rat models of facia...

  8. Facial soft tissue analysis among various vertical facial patterns

    International Nuclear Information System (INIS)

    Jeelani, W.; Fida, M.; Shaikh, A.

    2016-01-01

    Background: The emergence of the soft tissue paradigm in orthodontics has made various soft tissue parameters an integral part of the orthodontic problem list. The purpose of this study was to determine and compare various facial soft tissue parameters on lateral cephalograms among patients with short, average and long facial patterns. Methods: A cross-sectional study was conducted on the lateral cephalograms of 180 adult subjects divided into three equal groups, i.e., short, average and long face according to the vertical facial pattern. Incisal display at rest, nose height, upper and lower lip lengths, degree of lip procumbency and the nasolabial angle were measured for each individual. The gender differences for these soft tissue parameters were determined using the Mann-Whitney U test, while the comparison among different facial patterns was performed using the Kruskal-Wallis test. Results: Significant differences in the incisal display at rest, total nasal height, lip procumbency, the nasolabial angle and the upper and lower lip lengths were found among the three vertical facial patterns. A significant positive correlation of nose and lip dimensions was found with the underlying skeletal pattern. Similarly, the incisal display at rest, upper and lower lip procumbency and the nasolabial angle were significantly correlated with the lower anterior facial height. Conclusion: A short facial pattern is associated with minimal incisal display, recumbent upper and lower lips and an acute nasolabial angle, while a long facial pattern is associated with excessive incisal display, procumbent upper and lower lips and an obtuse nasolabial angle. (author)
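
    For readers unfamiliar with the two tests named above, the group comparison can be sketched with scipy.stats; the measurement values below are synthetic placeholders, not the study's data.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # Hypothetical incisal display at rest (mm) for the three vertical patterns.
    short_face   = rng.normal(1.5, 1.0, 60)
    average_face = rng.normal(3.0, 1.0, 60)
    long_face    = rng.normal(5.0, 1.5, 60)

    # Kruskal-Wallis: does the parameter differ across the three facial patterns?
    h_stat, p_kw = stats.kruskal(short_face, average_face, long_face)
    print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_kw:.4f}")

    # Mann-Whitney U: gender difference within one parameter (synthetic groups).
    males, females = rng.normal(3.2, 1.2, 90), rng.normal(2.9, 1.2, 90)
    u_stat, p_mw = stats.mannwhitneyu(males, females, alternative="two-sided")
    print(f"Mann-Whitney U={u_stat:.1f}, p={p_mw:.4f}")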

  9. Dynamic Facial Prosthetics for Sufferers of Facial Paralysis

    Directory of Open Access Journals (Sweden)

    Fergal Coulter

    2011-10-01

    Background: This paper discusses the various methods and materials for the fabrication of active artificial facial muscles. The primary use for these will be the reanimation of paralysed or atrophied muscles in sufferers of non-recoverable unilateral facial paralysis. Method: The prosthetic solution described in this paper is based on sensing muscle motion of the contralateral healthy muscles and replicating that motion across a patient's paralysed side of the face, via solid-state and thin-film actuators. The development of this facial prosthetic device focused on recreating a varying-intensity smile, with emphasis on timing, displacement and the appearance of the wrinkles and folds that commonly appear around the nose and eyes during the expression. An animatronic face was constructed with actuations being made to a silicone representation of the musculature, using multiple shape-memory alloy cascades. Alongside the artificial muscle physical prototype, a facial expression recognition software system was constructed. This forms the basis of an automated calibration and reconfiguration system for the artificial muscles following implantation, so as to suit the implantee's unique physiognomy. Results: An animatronic model face with silicone musculature was designed and built to evaluate the performance of shape-memory alloy artificial muscles, their power control circuitry and software control systems. A dual facial motion sensing system was designed to allow real-time control over the model: a piezoresistive flex sensor to measure physical motion, and a computer vision system to evaluate real-to-artificial muscle performance. Analysis of various facial expressions in real subjects was made, which gives useful data upon which to base the system's parameter limits. Conclusion: The system performed well, and the various strengths and shortcomings of the materials and methods are reviewed and considered for the next research phase, when new polymer-based artificial muscles are constructed.
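
    As a purely hypothetical illustration of the sense-and-mirror control idea described above (no real sensor or actuator drivers are implied), a minimal control loop might look like the following; every function name here is a stand-in.

    import time

    def read_flex_sensor():
        """Stand-in for a piezoresistive flex sensor read; returns 0.0-1.0."""
        return 0.4  # placeholder value

    def set_sma_activation(level, max_duty=0.8):
        """Stand-in for an SMA driver output; clamp to a safe duty-cycle ceiling."""
        duty = max(0.0, min(level, 1.0)) * max_duty
        print(f"SMA duty cycle set to {duty:.2f}")

    def mirror_loop(gain=1.0, period_s=0.05, steps=3):
        """Replicate sensed healthy-side motion on the paralysed side at ~20 Hz."""
        for _ in range(steps):
            intensity = read_flex_sensor()
            set_sma_activation(gain * intensity)
            time.sleep(period_s)

    mirror_loop()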

  10. Facial Fractures.

    Science.gov (United States)

    Ghosh, Rajarshi; Gopalkrishnan, Kulandaswamy

    2018-06-01

    The aim of this study is to retrospectively analyze the incidence of facial fractures along with age, gender predilection, etiology, commonest site, associated dental injuries, and any complications of patients operated in Craniofacial Unit of SDM College of Dental Sciences and Hospital. This retrospective study was conducted at the Department of OMFS, SDM College of Dental Sciences, Dharwad from January 2003 to December 2013. Data were recorded for the cause of injury, age and gender distribution, frequency and type of injury, localization and frequency of soft tissue injuries, dentoalveolar trauma, facial bone fractures, complications, concomitant injuries, and different treatment protocols.All the data were analyzed using statistical analysis that is chi-squared test. A total of 1146 patients reported at our unit with facial fractures during these 10 years. Males accounted for a higher frequency of facial fractures (88.8%). Mandible was the commonest bone to be fractured among all the facial bones (71.2%). Maxillary central incisors were the most common teeth to be injured (33.8%) and avulsion was the most common type of injury (44.6%). Commonest postoperative complication was plate infection (11%) leading to plate removal. Other injuries associated with facial fractures were rib fractures, head injuries, upper and lower limb fractures, etc., among these rib fractures were seen most frequently (21.6%). This study was performed to compare the different etiologic factors leading to diverse facial fracture patterns. By statistical analysis of this record the authors come to know about the relationship of facial fractures with gender, age, associated comorbidities, etc.

  11. Movement and respiration detection using statistical properties of the FMCW radar signal

    KAUST Repository

    Kiuru, Tero; Metso, Mikko; Jardak, Seifallah; Pursula, Pekka; Hakli, Janne; Hirvonen, Mervi; Sepponen, Raimo

    2016-01-01

    This paper presents a 24 GHz FMCW radar system for detection of movement and respiration using change in the statistical properties of the received radar signal, both amplitude and phase. We present the hardware and software segments of the radar
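
    The record gives only the principle (changes in the statistical properties of the received amplitude and phase), so the following is a minimal sketch of that idea on a simulated slow-time phase signal: a sliding-window standard deviation flags gross movement, and a spectral peak in the 0.1-0.5 Hz band estimates respiration rate. The thresholds, rates and simulated signal are assumptions, not the authors' implementation.

    import numpy as np

    fs = 20.0                                   # slow-time sampling rate (Hz)
    t = np.arange(0, 60, 1 / fs)
    phase = 0.3 * np.sin(2 * np.pi * 0.25 * t)  # breathing at 15 breaths/min
    phase += 0.02 * np.random.default_rng(1).standard_normal(t.size)

    def sliding_std(x, win):
        """Standard deviation over a sliding window of `win` samples."""
        return np.array([x[i:i + win].std() for i in range(len(x) - win + 1)])

    # Gross movement check: the threshold is a guess, and the simulated signal
    # contains breathing only, so no movement should be flagged here.
    movement = sliding_std(phase, win=int(2 * fs)) > 0.5
    print("movement detected:", bool(movement.any()))

    # Respiration rate from the dominant spectral peak in the breathing band.
    spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
    freqs = np.fft.rfftfreq(phase.size, d=1 / fs)
    band = (freqs >= 0.1) & (freqs <= 0.5)
    resp_hz = freqs[band][np.argmax(spectrum[band])]
    print(f"estimated respiration rate: {resp_hz * 60:.1f} breaths/min")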

  12. DETECTION OF SLOPE MOVEMENT BY COMPARING POINT CLOUDS CREATED BY SFM SOFTWARE

    Directory of Open Access Journals (Sweden)

    K. Oda

    2016-06-01

    This paper proposes a movement detection method between point clouds created by SfM software, without setting any onsite georeferenced points. SfM software packages, like Smart3DCapture, PhotoScan, and Pix4D, are convenient for non-professional operators of photogrammetry, because these systems simply require specification of a sequence of photos and output point clouds with a colour index that corresponds to the colour of the original image pixel where the point is projected. SfM software can execute aerial triangulation and create dense point clouds fully automatically. This is useful when monitoring motion of unstable slopes, or loose rocks on slopes along roads or railroads. Most existing methods, however, use a mesh-based DSM for comparing point clouds before/after movement, and this cannot be applied in cases where part of a slope forms overhangs. In some cases the movement is also smaller than the precision of the ground control points, and registering the two point clouds with GCPs is not appropriate. The change detection method in this paper adopts the CCICP (Classification and Combined ICP) algorithm for registering point clouds before/after movement. The CCICP algorithm is a type of ICP (Iterative Closest Points) which minimizes point-to-plane and point-to-point distances simultaneously, and also rejects incorrect correspondences based on point classification by PCA (Principal Component Analysis). A precision test shows that the CCICP method can register two point clouds to the order of 1 pixel in the original images. Ground control points set on site are useful for the initial alignment of the two point clouds. If there are no GCPs on the site of the slopes, initial alignment is achieved by measuring feature points as ground control points in the point clouds before movement, and creating the point clouds after movement with these ground control points. When the motion is a rigid transformation, as in the case of a loose rock moving on a slope, motion including rotation can be analysed by executing CCICP for a
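
    The full CCICP algorithm (PCA-based point classification plus combined point-to-plane and point-to-point terms) is not reproduced in the abstract; the sketch below is a deliberately simplified stand-in, a plain point-to-point ICP with SVD-based rigid alignment, to illustrate the registration step that precedes change detection.

    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid_transform(src, dst):
        """Least-squares rotation R and translation t mapping src onto dst."""
        c_src, c_dst = src.mean(0), dst.mean(0)
        H = (src - c_src).T @ (dst - c_dst)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:              # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        return R, c_dst - R @ c_src

    def icp(source, target, iters=30):
        """Align `source` (N,3) to `target` (M,3); returns the moved points."""
        tree = cKDTree(target)
        moved = source.copy()
        for _ in range(iters):
            _, idx = tree.query(moved)                     # nearest neighbours
            R, t = best_rigid_transform(moved, target[idx])
            moved = moved @ R.T + t
        return moved

    # Change detection idea: after registration, residual nearest-neighbour
    # distances clearly larger than the registration noise indicate movement.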

  13. Facial Expression Recognition By Using Fisherface Method With Backpropagation Neural Network

    Directory of Open Access Journals (Sweden)

    Zaenal Abidin

    2011-01-01

    In daily life, especially in interpersonal communication, the face is often used for expression. Facial expressions give information about the emotional state of the person. A facial expression is one of the behavioral characteristics. The components of a basic facial expression analysis system are face detection, face data extraction, and facial expression recognition. The Fisherface method with a backpropagation artificial neural network approach can be used for facial expression recognition. This method consists of a two-stage process, namely PCA and LDA. PCA is used to reduce the dimension, while LDA is used for feature extraction of facial expressions. The system was tested with two databases, namely the JAFFE database and the MUG database. The system correctly classified the expressions with an accuracy of 86.85% and 25 false positives for image type I of JAFFE, 89.20% and 15 false positives for image type II of JAFFE, and 87.79% and 16 false positives for type III of JAFFE. For the MUG images the accuracy was 98.09%, with 5 false positives. Keywords: facial expression, fisherface method, PCA, LDA, backpropagation neural network.
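
    A compact way to reproduce the two-stage Fisherface pipeline (PCA for dimensionality reduction, then LDA for feature extraction) feeding a backpropagation network is sketched below with scikit-learn; random arrays stand in for the JAFFE and MUG images, and all layer and component sizes are assumptions.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.random((210, 64 * 64))        # 210 flattened 64x64 face images
    y = rng.integers(0, 7, size=210)      # 7 expression classes

    model = make_pipeline(
        PCA(n_components=60),                        # reduce dimensionality
        LinearDiscriminantAnalysis(n_components=6),  # at most n_classes - 1
        MLPClassifier(hidden_layer_sizes=(30,), max_iter=500, random_state=0),
    )
    model.fit(X, y)
    print("training accuracy:", model.score(X, y))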

  14. Detecting deception in movement: the case of the side-step in rugby.

    Directory of Open Access Journals (Sweden)

    Sébastien Brault

    Although coordinated patterns of body movement can be used to communicate action intention, they can also be used to deceive. Often known as deceptive movements, these unpredictable patterns of body movement can give a competitive advantage to an attacker when trying to outwit a defender. In this particular study, we immersed novice and expert rugby players in an interactive virtual rugby environment to understand how the dynamics of deceptive body movement influence a defending player's decisions about how and when to act. When asked to judge final running direction, expert players, who were found to tune into prospective tau-based information specified in the dynamics of 'honest' movement signals (Centre of Mass), performed significantly better than novices, who tuned into the dynamics of 'deceptive' movement signals (upper trunk yaw and out-foot placement) (p<.001). These findings were further corroborated in a second experiment where players were able to move as if to intercept or 'tackle' the virtual attacker. An analysis of action responses showed that experts waited significantly longer before initiating movement (p<.001). By waiting longer and picking up more information that would inform about future running direction, these experts made significantly fewer errors (p<.05). In this paper we not only present a mathematical model that describes how deception in body-based movement is detected, but we also show how perceptual expertise is manifested in action expertise. We conclude that being able to tune into the 'honest' information specifying true running action intention gives a strong competitive advantage.

  15. Masseteric nerve for reanimation of the smile in short-term facial paralysis.

    Science.gov (United States)

    Hontanilla, Bernardo; Marre, Diego; Cabello, Alvaro

    2014-02-01

    Our aim was to describe our experience with the masseteric nerve in the reanimation of short-term facial paralysis. We present our outcomes using a quantitative measurement system and discuss its advantages and disadvantages. Between 2000 and 2012, 23 patients had their facial paralysis reanimated by masseteric-facial coaptation. All patients presented with complete unilateral paralysis. Their background, the aetiology of the paralysis, and the surgical details were recorded. A retrospective study of movement analysis was made using an automatic optical system (Facial Clima). Commissural excursion and commissural contraction velocity were also recorded. The mean age at reanimation was 43(8) years. The aetiology of the facial paralysis included acoustic neurinoma, fracture of the skull base, schwannoma of the facial nerve, resection of a cholesteatoma, and varicella zoster infection. The mean duration of facial paralysis was 16(5) months. Follow-up was more than 2 years in all patients except 1, in whom it was 12 months. The mean time to recovery of tone (as reported by the patient) was 67(11) days. Postoperative commissural excursion was 8(4) mm for the reanimated side and 8(3) mm for the healthy side (p=0.4). Likewise, commissural contraction velocity was 38(10) mm/s for the reanimated side and 43(12) mm/s for the healthy side (p=0.23). The mean percentage of recovery was 92(5)% for commissural excursion and 79(15)% for commissural contraction velocity. Masseteric nerve transposition is a reliable and reproducible option for the reanimation of short-term facial paralysis, with reduced donor site morbidity and good symmetry with the opposite healthy side. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
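
    Facial Clima is a proprietary optical system, but the two outcome measures reported above can be illustrated generically from a tracked commissure trajectory. The sketch below is not the Facial Clima implementation; the sampling rate and trajectory are invented for the example.

    import numpy as np

    def commissure_metrics(xy, fps):
        """xy: (n_frames, 2) tracked commissure positions in mm; fps in Hz."""
        xy = np.asarray(xy, dtype=float)
        displacement = np.linalg.norm(xy - xy[0], axis=1)       # distance from rest
        speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fps
        return displacement.max(), speed.max()                  # excursion, velocity

    # Example with a synthetic smile trajectory sampled at 50 Hz.
    t = np.linspace(0, 1, 50)
    track = np.c_[8 * np.sin(np.pi * t / 2), 3 * np.sin(np.pi * t / 2)]
    excursion_mm, velocity_mm_s = commissure_metrics(track, fps=50)
    print(f"excursion {excursion_mm:.1f} mm, peak velocity {velocity_mm_s:.1f} mm/s")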

  16. Facial expression: An under-utilised tool for the assessment of welfare in mammals.

    Science.gov (United States)

    Descovich, Kris A; Wathan, Jennifer; Leach, Matthew C; Buchanan-Smith, Hannah M; Flecknell, Paul; Farningham, David; Vick, Sarah-Jane

    2017-01-01

    Animal welfare is a key issue for industries that use or impact upon animals. The accurate identification of welfare states is particularly relevant to the field of bioscience, where the 3Rs framework encourages refinement of experimental procedures involving animal models. The assessment and improvement of welfare states in animals depends on reliable and valid measurement tools. Behavioral measures (activity, attention, posture and vocalization) are frequently used because they are immediate and non-invasive; however, no single indicator can yield a complete picture of the internal state of an animal. Facial expressions are extensively studied in humans as a measure of psychological and emotional experiences but are infrequently used in animal studies, with the exception of emerging research on pain behavior. In this review, we discuss current evidence for facial representations of underlying affective states, and how communicative or functional expressions can be useful within welfare assessments. Validated tools for measuring facial movement are outlined, and the potential of expressions as honest signals is discussed, alongside other challenges and limitations to facial expression measurement within the context of animal welfare. We conclude that facial expression determination in animals is a useful but underutilized measure that complements existing tools in the assessment of welfare.

  17. Guillain-Barré Syndrome: A Variant Consisting of Facial Diplegia and Paresthesia with Left Facial Hemiplegia Associated with Antibodies to Galactocerebroside and Phosphatidic Acid.

    Science.gov (United States)

    Nishiguchi, Sho; Branch, Joel; Tsuchiya, Tsubasa; Ito, Ryoji; Kawada, Junya

    2017-10-02

    BACKGROUND A rare variant of Guillain-Barré syndrome (GBS) consists of facial diplegia and paresthesia, but an even rarer association is with facial hemiplegia, similar to Bell's palsy. This case report is of this rare variant of GBS that was associated with IgG antibodies to galactocerebroside and phosphatidic acid. CASE REPORT A 54-year-old man presented with lower left facial palsy and paresthesia of his extremities, following an upper respiratory tract infection. Physical examination confirmed lower left facial palsy and paresthesia of his extremities with hyporeflexia of his lower limbs and sensory loss of all four extremities. The differential diagnosis was between a variant of GBS and Bell's palsy. Following initial treatment with glucocorticoids followed by intravenous immunoglobulin (IVIG), his sensory abnormalities resolved. Serum IgG antibodies to galactocerebroside and phosphatidic acid were positive in this patient, but no other antibodies to glycolipids or phospholipids were found. Five months following discharge from hospital, his left facial palsy had improved. CONCLUSIONS A case of a rare variant of GBS is presented with facial diplegia and paresthesia and with unilateral facial palsy. This rare variant of GBS may mimic Bell's palsy. In this case, IgG antibodies to galactocerebroside and phosphatidic acid were detected.

  18. Adolescents with HIV and facial lipoatrophy: response to facial stimulation

    Directory of Open Access Journals (Sweden)

    Jesus Claudio Gabana-Silveira

    2014-08-01

    OBJECTIVES: This study evaluated the effects of facial stimulation of the superficial muscles of the face in individuals with facial lipoatrophy associated with human immunodeficiency virus (HIV) and with no indication for treatment with polymethyl methacrylate. METHOD: The study sample comprised four adolescents of both genders ranging from 13 to 17 years in age. To participate in the study, the participants had to score six or fewer points on the Facial Lipoatrophy Index. The facial stimulation program used in our study consisted of 12 weekly 30-minute sessions during which individuals received therapy. The therapy consisted of intra- and extra-oral muscle contraction and stretching maneuvers of the zygomaticus major and minor and the masseter muscles. Pre- and post-treatment results were obtained using anthropometric static measurements of the face and the Facial Lipoatrophy Index. RESULTS: The results suggest that the therapeutic program effectively improved the volume of the buccinator region. No significant differences were observed for the measurements of the medial portion of the face, the lateral portion of the face, the volume of the masseter muscle, or Facial Lipoatrophy Index scores. CONCLUSION: The results of our study suggest that facial maneuvers applied to the superficial muscles of the face of adolescents with facial lipoatrophy associated with HIV improved the facial volume related to the buccinator muscles. We believe that our results will encourage future research with HIV patients, especially for patients who do not have the possibility of receiving an alternative aesthetic treatment.

  19. Orangutans modify facial displays depending on recipient attention

    Directory of Open Access Journals (Sweden)

    Bridget M. Waller

    2015-03-01

    Primate facial expressions are widely accepted as underpinned by reflexive emotional processes and not under voluntary control. In contrast, other modes of primate communication, especially gestures, are widely accepted as underpinned by intentional, goal-driven cognitive processes. One reason for this distinction is that production of primate gestures is often sensitive to the attentional state of the recipient, a phenomenon used as one of the key behavioural criteria for identifying intentionality in signal production. The reasoning is that modifying/producing a signal when a potential recipient is looking could demonstrate that the sender intends to communicate with them. Here, we show that the production of a primate facial expression can also be sensitive to the attention of the play partner. Using the orangutan (Pongo pygmaeus) Facial Action Coding System (OrangFACS), we demonstrate that facial movements are more intense and more complex when recipient attention is directed towards the sender. Therefore, production of the playface is not an automated response to play (or simply a play behaviour itself) and is instead produced flexibly depending on the context. If sensitivity to attentional stance is a good indicator of intentionality, we must also conclude that the orangutan playface is intentionally produced. However, a number of alternative, lower-level interpretations for flexible production of signals in response to the attention of another are discussed. As intentionality is a key feature of human language, claims of intentional communication in related primate species are powerful drivers in language evolution debates, and thus caution in identifying intentionality is important.

  20. A facial marker in facial wasting rehabilitation.

    Science.gov (United States)

    Rauso, Raffaele; Tartaro, Gianpaolo; Freda, Nicola; Rusciani, Antonio; Curinga, Giuseppe

    2012-02-01

    Facial lipoatrophy is one of the most distressing manifestations for HIV patients. It can be stigmatizing, severely affecting quality of life and self-esteem, and it may result in reduced antiretroviral adherence. Several filling techniques have been proposed for facial wasting restoration, with different outcomes. The aim of this study is to present a triangular area that is useful to fill in facial wasting rehabilitation. Twenty-eight HIV patients rehabilitated for facial wasting were enrolled in this study. Sixteen were rehabilitated with a non-resorbable filler and twelve with structural fat grafts harvested from lipohypertrophied areas. A photographic pre-operative and post-operative evaluation was performed by the patients and by two plastic surgeons who were "blinded." The filled area, in patients rehabilitated with either structural fat grafts or non-resorbable filler, was a triangular area of depression identified between the nasolabial fold, the malar arch, and the line that connects these two anatomical landmarks. The cosmetic result was evaluated three months after the last filling procedure in the non-resorbable filler group and three months after surgery in the structural fat graft group. The mean patient satisfaction score was 8.7 as assessed with a visual analogue scale. The mean score for the blinded evaluators was 7.6. In this study the authors describe a triangular area of the face, between the nasolabial fold, the malar arch, and the line that connects these two anatomical landmarks, where a good aesthetic facial restoration in HIV patients with facial wasting may be achieved regardless of which filling technique is used.

  1. Complex Odontome Causing Facial Asymmetry

    Directory of Open Access Journals (Sweden)

    Karthikeya Patil

    2006-01-01

    Odontomas are the most common non-cystic odontogenic lesions representing 70% of all odontogenic tumors. Often small and asymptomatic, they are detected on routine radiographs. Occasionally they become large and produce expansion of bone with consequent facial asymmetry. We report a case of such a lesion causing expansion of the mandible in an otherwise asymptomatic patient.

  2. [Facial nerve neurinomas].

    Science.gov (United States)

    Sokołowski, Jacek; Bartoszewicz, Robert; Morawski, Krzysztof; Jamróz, Barbara; Niemczyk, Kazimierz

    2013-01-01

    Evaluation of the diagnosis, surgical technique, and treatment results of facial nerve neurinomas, and comparison with the literature, was the main purpose of this study. Seven cases of patients (2005-2011) with facial nerve schwannomas treated in the Department of Otolaryngology, Medical University of Warsaw, were included in a retrospective analysis. All patients were assessed with history of the disease, physical examination, hearing tests, computed tomography and/or magnetic resonance imaging, and electronystagmography. Cases were followed for potential complications and recurrences. Neurinoma of the facial nerve occurred in the vertical segment (n=2), the facial nerve geniculum (n=1) and the internal auditory canal (n=4). The symptoms observed in patients were analyzed: facial nerve paresis (n=3), hearing loss (n=2), dizziness (n=1). Magnetic resonance imaging and computed tomography allowed confirmation of the presence of the tumor and assessment of its staging. Schwannomas of the facial nerve were surgically removed using the middle fossa approach (n=5) or by antromastoidectomy (n=2). Anatomical continuity of the facial nerve was achieved in 3 cases. In the twelve months after surgery, facial nerve paresis was rated at level II-III° HB. There was no recurrence of the tumor on radiological observation. Facial nerve neurinoma is a rare tumor. Current surgical techniques allow, in most cases, radical removal of the lesion and reconstruction of the VII nerve function. The rate of recurrence is low. A tumor of the facial nerve should be considered in the differential diagnosis of nerve VII paresis. Copyright © 2013 Polish Otorhinolaryngology - Head and Neck Surgery Society. Published by Elsevier Urban & Partner Sp. z.o.o. All rights reserved.

  3. Contralateral botulinum toxin injection to improve facial asymmetry after acute facial paralysis.

    Science.gov (United States)

    Kim, Jin

    2013-02-01

    The application of botulinum toxin to the healthy side of the face in patients with long-standing facial paralysis has been shown to be a minimally invasive technique that improves facial symmetry at rest and during facial motion, but our experience using botulinum toxin therapy for facial sequelae prompted the idea that botulinum toxin might be useful in acute cases of facial paralysis, leading to improved facial asymmetry. In cases in which medical or surgical treatment options are limited because of existing medical problems or advanced age, most patients with acute facial palsy are advised to await spontaneous recovery or are informed that no effective intervention exists. The purpose of this study was to evaluate the effect of botulinum toxin treatment for facial asymmetry in 18 patients after acute facial palsy who could not be optimally treated by medical or surgical management because of severe medical or other problems. From 2009 to 2011, 9 patients with Bell's palsy, 5 with herpes zoster oticus and 4 with traumatic facial palsy (10 men and 8 women; age range, 22-82 yr; mean, 50.8 yr) participated in this study. Botulinum toxin A (Botox; Allergan Incorporated, Irvine, CA, USA) was injected using a tuberculin syringe with a 27-gauge needle. The amount injected per site varied from 2.5 to 3 U, and the total dose used per patient was 32 to 68 U (mean, 47.5 +/- 8.4 U). After administration of a single dose of botulinum toxin A on the nonparalyzed side of 18 patients with acute facial paralysis, marked relief of facial asymmetry was observed in 8 patients within 1 month of injection. Decreased facial asymmetry and strengthened facial function on the paralyzed side led to an increased HB and SB grade within 6 months after injection. Use of botulinum toxin in acute facial palsy cases is of great value. Such therapy decreases the relative hyperkinesis contralateral to the paralysis, leading to greater symmetric function. Especially in patients with medical

  4. Rejuvenecimiento facial [Facial rejuvenation]

    Directory of Open Access Journals (Sweden)

    L. Daniel Jacubovsky, Dr.

    2010-01-01

    Facial aging is a process unique and particular to each individual, governed especially by his or her genetic load. The facelift is a complex technique developed in our specialty since the beginning of the century to reverse the principal signs of this process. The secondary factors that bear on facial aging are multiple, and for that reason the rhytidectomies, or cervicofacial lifts, described have sought to correct the physiognomic changes of aging by working, as described, in all the tissue planes involved. This surgery therefore demands thorough knowledge of the surgical anatomy, skill, and experience in order to reduce complications, surgical stigmata, and secondary revisions. Facial rhytidectomy has evolved toward a simpler procedure, with shorter incisions and less extensive dissections. Muscular suspensions have varied in their execution, and the vectors of suspension and skin resection are crucial to the aesthetic results of cervicofacial surgery. Today these vectors are of more vertical traction. Correction of laxity is accompanied by an interest in restoring volume to the surface of the face, especially the middle third. Surgical rejuvenation techniques, particularly the facelift, require planning for each patient. Techniques adjunct to the facelift, such as blepharoplasty, mentoplasty, neck liposuction, facial implants, and others, have also evolved positively toward reduced risk and better aesthetic success.

  5. Quantitative facial asymmetry: using three-dimensional photogrammetry to measure baseline facial surface symmetry.

    Science.gov (United States)

    Taylor, Helena O; Morrison, Clinton S; Linden, Olivia; Phillips, Benjamin; Chang, Johnny; Byrne, Margaret E; Sullivan, Stephen R; Forrest, Christopher R

    2014-01-01

    Although symmetry is hailed as a fundamental goal of aesthetic and reconstructive surgery, our tools for measuring this outcome have been limited and subjective. With the advent of three-dimensional photogrammetry, surface geometry can be captured, manipulated, and measured quantitatively. Until now, few normative data existed with regard to facial surface symmetry. Here, we present a method for reproducibly calculating overall facial symmetry and present normative data on 100 subjects. We enrolled 100 volunteers who underwent three-dimensional photogrammetry of their faces in repose. We collected demographic data on age, sex, and race and subjectively scored facial symmetry. We calculated the root mean square deviation (RMSD) between the native and reflected faces, reflecting about a plane of maximum symmetry. We analyzed the interobserver reliability of the subjective assessment of facial asymmetry and the quantitative measurements and compared the subjective and objective values. We also classified areas of greatest asymmetry as localized to the upper, middle, or lower facial thirds. This cluster of normative data was compared with a group of patients with subtle but increasing amounts of facial asymmetry. We imaged 100 subjects by three-dimensional photogrammetry. There was a poor interobserver correlation between subjective assessments of asymmetry (r = 0.56). There was a high interobserver reliability for quantitative measurements of facial symmetry RMSD calculations (r = 0.91-0.95). The mean RMSD for this normative population was found to be 0.80 ± 0.24 mm. Areas of greatest asymmetry were distributed as follows: 10% upper facial third, 49% central facial third, and 41% lower facial third. Precise measurement permitted discrimination of subtle facial asymmetry within this normative group and distinguished norms from patients with subtle facial asymmetry, with placement of RMSDs along an asymmetry ruler. Facial surface symmetry, which is poorly assessed
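
    The symmetry metric can be sketched as follows, under the simplifying assumption that the facial point cloud is already aligned so that x = 0 approximates the plane of maximum symmetry (the published method estimates that plane from the data itself); nearest-neighbour distances between the native and mirrored surfaces then give the RMSD.

    import numpy as np
    from scipy.spatial import cKDTree

    def symmetry_rmsd(points):
        """points: (N, 3) facial surface samples in mm, midline near x = 0."""
        reflected = points * np.array([-1.0, 1.0, 1.0])   # mirror across x = 0
        dists, _ = cKDTree(points).query(reflected)       # closest native point
        return float(np.sqrt(np.mean(dists ** 2)))

    # A perfectly symmetric surface gives an RMSD near 0 mm; the normative mean
    # reported above is 0.80 +/- 0.24 mm. A random cloud is strongly asymmetric.
    demo = np.random.default_rng(2).normal(size=(1000, 3)) * np.array([30.0, 40.0, 20.0])
    print(f"RMSD of a random (asymmetric) cloud: {symmetry_rmsd(demo):.2f} mm")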

  6. Asians' Facial Responsiveness to Basic Tastes by Automated Facial Expression Analysis System.

    Science.gov (United States)

    Zhi, Ruicong; Cao, Lianyu; Cao, Gang

    2017-03-01

    Growing evidence shows that consumer choices in real life are mostly driven by unconscious rather than conscious mechanisms. The unconscious process can be measured by behavioral measurements. This study aims to apply automatic facial expression analysis for representing consumers' emotions, and to explore the relationships between sensory perception and facial responses. Basic taste solutions (sourness, sweetness, bitterness, umami, and saltiness) at 6 levels plus water were used, which cover most of the tastes found in food and drink. The other contribution of this study is to analyze the characteristics of facial expressions and the correlation between facial expressions and perceived hedonic liking for Asian consumers. Until now, facial expression applications have been reported only for Western consumers, and few studies have investigated facial responses during food consumption for Asian consumers. Experimental results indicated that facial expressions could identify different stimuli with various concentrations and different hedonic levels. The perceived liking increased at lower concentrations and decreased at higher concentrations, while samples with medium concentrations were perceived as the most pleasant, except for sweetness and bitterness. High correlations were found between perceived intensities of bitterness, umami, and saltiness and the facial reactions of disgust and fear. The facial expressions disgust and anger could characterize the emotion "dislike," and happiness could characterize the emotion "like," while neutral could represent "neither like nor dislike." The identified facial expressions agree with the perceived sensory emotions elicited by basic taste solutions. The correlation analysis between hedonic levels and facial expression intensities obtained in this study is in accordance with that reported for Western consumers. © 2017 Institute of Food Technologists®.

  7. A New Approach to Measuring Individual Differences in Sensitivity to Facial Expressions: Influence of Temperamental Shyness and Sociability

    Directory of Open Access Journals (Sweden)

    Xiaoqing Gao

    2014-02-01

    To examine individual differences in adults' sensitivity to facial expressions, we used a novel method that has proved revealing in studies of developmental change. Using static faces morphed to show different intensities of facial expressions, we calculated two measures: (1) the threshold to detect that a low-intensity facial expression is different from neutral, and (2) accuracy in recognizing the specific facial expression in faces above the detection threshold. We conducted two experiments with young adult females varying in reported temperamental shyness and sociability - the former trait is known to influence the recognition of facial expressions during childhood. In both experiments, the measures had good split-half reliability. Because shyness was significantly negatively correlated with sociability, we used partial correlations to examine the relation of each to sensitivity to facial expression. Sociability was negatively related to the threshold to detect fear (Experiment 1) and to misidentification of fear as another expression or of happy expressions as fear (Experiment 2). Both patterns are consistent with hypervigilance by less sociable individuals. Shyness was positively related to misidentification of fear as another emotion (Experiment 2), a pattern consistent with a history of avoidance. We discuss the advantages and limitations of this new approach for studying individual differences in sensitivity to facial expression.
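
    Assuming per-intensity response proportions are available, the two measures described above might be computed roughly as follows; the numbers and the 50% detection criterion are placeholders, not values from the study.

    import numpy as np

    intensities = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6])      # morph levels
    detect_rate = np.array([0.10, 0.35, 0.60, 0.85, 0.95, 1.0])  # "differs from neutral"
    correct_id  = np.array([0.20, 0.30, 0.55, 0.70, 0.80, 0.9])  # correct expression label

    def detection_threshold(levels, rates, criterion=0.5):
        """Lowest intensity at which detection exceeds the criterion."""
        above = levels[rates >= criterion]
        return float(above.min()) if above.size else np.nan

    thr = detection_threshold(intensities, detect_rate)
    accuracy_above = correct_id[intensities >= thr].mean()
    print(f"threshold {thr:.2f}, recognition accuracy above threshold {accuracy_above:.2f}")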

  8. Retrospective case series of the imaging findings of facial nerve hemangioma.

    Science.gov (United States)

    Yue, Yunlong; Jin, Yanfang; Yang, Bentao; Yuan, Hui; Li, Jiandong; Wang, Zhenchang

    2015-09-01

    The aim was to compare high-resolution computed tomography (HRCT) and thin-section magnetic resonance imaging (MRI) findings of facial nerve hemangioma. The HRCT and MRI characteristics of 17 facial nerve hemangiomas diagnosed between 2006 and 2013 were retrospectively analyzed. All patients included in the study suffered from a space-occupying lesion of the soft tissues at the geniculate ganglion fossa. The affected nerve was compared for size and shape with the contralateral unaffected nerve. HRCT showed irregular expansion and broadening of the facial nerve canal, damage to the bone wall and destruction of adjacent bone, with "point"-like or "needle"-like calcifications in 14 cases. The average CT value was 320.9 ± 141.8 Hu. Fourteen patients had a widened labyrinthine segment; 6/17 had tympanic segment widening; 2/17 had greater superficial petrosal nerve canal involvement, and 2/17 had an affected internal auditory canal (IAC) segment. On MRI, all lesions were significantly enhanced owing to their high blood supply. Using 2D FSE T2WI, the lesion detection rate was 82.4 % (14/17). 3D fast imaging employing steady-state acquisition (3D FIESTA) revealed the lesions in all patients. HRCT showed that the average number of involved segments in the facial nerve canal was 2.41, while MRI revealed an average of 2.70 segments. The imaging findings of facial nerve hemangioma were typical, revealing irregular masses growing along the facial nerve canal, with calcifications and a rich blood supply. Thin-section enhanced MRI was more accurate in lesion detection and assessment compared with HRCT.

  9. Facial exercises for facial rejuvenation: a control group study.

    Science.gov (United States)

    De Vos, Marie-Camille; Van den Brande, Helen; Boone, Barbara; Van Borsel, John

    2013-01-01

    Facial exercises are a noninvasive alternative to medical approaches to facial rejuvenation. Logopedists could be involved in providing these exercises. Little research has been conducted, however, on the effectiveness of exercises for facial rejuvenation. This study assessed the effectiveness of 4 exercises purportedly reducing wrinkles and sagging of the facial skin. A control group study was conducted with 18 participants, 9 of whom (the experimental group) underwent daily training for 7 weeks. Pictures of 5 facial areas (forehead, nasolabial folds, area above the upper lip, jawline and area under the chin) taken before and after the 7 weeks were evaluated by a panel of laypersons. In addition, the participants of the experimental group evaluated their own pictures. Evaluation included the pairwise presentation of pictures taken before and after the 7 weeks and the scoring of the same pictures by means of visual analogue scales in a random presentation. Only one significant difference was found between the control and experimental groups. In the experimental group, the post-therapy picture of the upper lip area was more frequently chosen by the panel as the younger-looking one. It cannot be concluded that facial exercises are effective. More systematic research is needed. © 2013 S. Karger AG, Basel.

  10. Comparative evaluation between facial attractiveness and subjective analysis of Facial Pattern [Avaliação comparativa entre agradabilidade facial e análise subjetiva do Padrão Facial]

    Directory of Open Access Journals (Sweden)

    Olívia Morihisa

    2009-12-01

    AIM: To study two subjective facial analyses commonly used in orthodontic diagnosis and to verify the association between the evaluation of facial attractiveness and Facial Pattern definition. METHODS: Two hundred and eight standardized face photographs (104 in lateral view and 104 in frontal view) of 104 randomly chosen individuals were used in the present study. They were classified as "pleasant", "acceptable" or "not pleasant" by two distinct groups: "Lay people" and "Orthodontists". The individuals were also classified according to their Facial Pattern by three calibrated examiners, using only the lateral-view images. RESULTS AND CONCLUSION: After statistical analysis, a strong positive association was found between facial attractiveness in lateral view and Facial Pattern; however, the frontal-view attractiveness classification did not show good concordance with Facial Pattern, with individuals tending to be rated as attractive even in Facial Pattern II.

  11. Intact mirror mechanisms for automatic facial emotions in children and adolescents with autism spectrum disorder.

    Science.gov (United States)

    Schulte-Rüther, Martin; Otte, Ellen; Adigüzel, Kübra; Firk, Christine; Herpertz-Dahlmann, Beate; Koch, Iring; Konrad, Kerstin

    2017-02-01

    It has been suggested that an early deficit in the human mirror neuron system (MNS) is an important feature of autism. Recent findings related to simple hand and finger movements do not support a general dysfunction of the MNS in autism. Studies investigating facial actions (e.g., emotional expressions) have been more consistent; however, they mostly relied on passive observation tasks. We used a new variant of a compatibility task for the assessment of automatic facial mimicry responses that allowed for simultaneous control of attention to facial stimuli. We used facial electromyography in 18 children and adolescents with Autism spectrum disorder (ASD) and 18 typically developing controls (TDCs). We observed a robust compatibility effect in ASD, that is, the execution of a facial expression was facilitated if a congruent facial expression was observed. Time course analysis of RT distributions and comparison to a classic compatibility task (symbolic Simon task) revealed that the facial compatibility effect appeared early and increased with time, suggesting fast and sustained activation of motor codes during observation of facial expressions. We observed a negative correlation of the compatibility effect with age across participants and in ASD, and a positive correlation between self-rated empathy and congruency for smiling faces in TDC but not in ASD. This pattern of results suggests that basic motor mimicry is intact in ASD, but is not associated with complex social cognitive abilities such as emotion understanding and empathy. Autism Res 2017, 10: 298-310. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  12. The face of pain--a pilot study to validate the measurement of facial pain expression with an improved electromyogram method.

    Science.gov (United States)

    Wolf, Karsten; Raedler, Thomas; Henke, Kai; Kiefer, Falk; Mass, Reinhard; Quante, Markus; Wiedemann, Klaus

    2005-01-01

    The purpose of this pilot study was to establish the validity of an improved facial electromyogram (EMG) method for the measurement of facial pain expression. Darwin defined pain in connection with fear as a simultaneous occurrence of eye staring, brow contraction and teeth chattering. Prkachin was the first to use the video-based Facial Action Coding System to measure facial expressions while using four different types of pain triggers, identifying a group of facial muscles around the eyes. The activity of nine facial muscles in 10 healthy male subjects was analyzed. Pain was induced through a laser system with a randomized sequence of different intensities. Muscle activity was measured with a new, highly sensitive and selective facial EMG. The results indicate two groups of muscles as key for pain expression. These results are in concordance with Darwin's definition. As in Prkachin's findings, one muscle group is assembled around the orbicularis oculi muscle, initiating eye staring. The second group consists of the mentalis and depressor anguli oris muscles, which trigger mouth movements. The results demonstrate the validity of the facial EMG method for measuring facial pain expression. Further studies with psychometric measurements, a larger sample size and a female test group should be conducted.
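
    The abstract does not describe the signal processing, so the following only sketches one common way to quantify such facial EMG responses: baseline-corrected RMS amplitude per channel around each laser stimulus. The window lengths, sampling rate and synthetic data are assumptions, not the authors' pipeline.

    import numpy as np

    def rms(x):
        """Root mean square amplitude of a 1-D signal."""
        return float(np.sqrt(np.mean(np.square(x))))

    def pain_response(emg, fs, stim_sample, pre_s=0.5, post_s=1.0):
        """emg: (n_channels, n_samples); returns post/pre RMS ratio per channel."""
        pre = emg[:, stim_sample - int(pre_s * fs):stim_sample]
        post = emg[:, stim_sample:stim_sample + int(post_s * fs)]
        return np.array([rms(p) / rms(b) for p, b in zip(post, pre)])

    # Synthetic example: 9 channels at 1 kHz; channel 0 (an orbicularis
    # oculi-like channel) doubles its amplitude after the stimulus at sample 2000.
    rng = np.random.default_rng(3)
    emg = rng.standard_normal((9, 4000))
    emg[0, 2000:3000] *= 2.0
    print(pain_response(emg, fs=1000, stim_sample=2000).round(2))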

  13. Detecting movement patterns using Brownian bridges

    NARCIS (Netherlands)

    Buchin, K.; Sijben, S.; Arseneau, T.J.-M.; Willems, E.P.

    2012-01-01

    In trajectory data a low sampling rate leads to high uncertainty in between sampling points, which needs to be taken into account in the analysis of such data. However, current algorithms for movement analysis ignore this uncertainty and assume linear movement between sample points. In this paper we
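
    The record is truncated, but the Brownian bridge movement model it refers to is standard: between two observed locations the expected position is the linear interpolation, and the positional variance peaks midway between fixes and shrinks to the measurement error at the fixes. A minimal sketch of those formulas, with sigma2 as the diffusion (mobility) parameter:

    import numpy as np

    def brownian_bridge(p0, p1, t0, t1, t, sigma2, obs_var=0.0):
        """Mean and variance of position at time t, given fixes p0@t0 and p1@t1."""
        alpha = (t - t0) / (t1 - t0)
        mean = (1 - alpha) * np.asarray(p0, float) + alpha * np.asarray(p1, float)
        var = (t1 - t0) * alpha * (1 - alpha) * sigma2 \
            + (1 - alpha) ** 2 * obs_var + alpha ** 2 * obs_var
        return mean, var

    # Example: uncertainty halfway between two fixes taken 600 s apart.
    mean, var = brownian_bridge(p0=(0, 0), p1=(100, 50), t0=0, t1=600, t=300, sigma2=2.0)
    print("expected position:", mean, "positional variance:", var)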

  14. Facial trauma.

    Science.gov (United States)

    Peeters, N; Lemkens, P; Leach, R; Gemels, B; Schepers, S; Lemmens, W

    Facial trauma. Patients with facial trauma must be assessed in a systematic way so as to avoid missing any injury. Severe and disfiguring facial injuries can be distracting. However, clinicians must first focus on the basics of trauma care, following the Advanced Trauma Life Support (ATLS) system of care. Maxillofacial trauma occurs in a significant number of severely injured patients. Life- and sight-threatening injuries must be excluded during the primary and secondary surveys. Special attention must be paid to sight-threatening injuries in stabilized patients through early referral to an appropriate specialist or the early initiation of emergency care treatment. The gold standard for the radiographic evaluation of facial injuries is computed tomography (CT) imaging. Nasal fractures are the most frequent isolated facial fractures. Isolated nasal fractures are principally diagnosed through history and clinical examination. Closed reduction is the most frequently performed treatment for isolated nasal fractures, with a fractured nasal septum as a predictor of failure. Ear, nose and throat surgeons, maxillofacial surgeons and ophthalmologists must all develop an adequate treatment plan for patients with complex maxillofacial trauma.

  15. PERIPHERAL FACIAL PALSY IN CHILDHOOD - LYME BORRELIOSIS TO BE SUSPECTED UNLESS PROVEN OTHERWISE

    NARCIS (Netherlands)

    CHRISTEN, HJ; BARTLAU, N; HANEFELD, F; EIFFERT, H; THOMSSEN, R

    1990-01-01

    27 consecutive cases with acute peripheral facial palsy were studied for Lyme borreliosis. In 16 out of 27 children Lyme borreliosis could be diagnosed by detection of specific IgM antibodies in CSF. CSF findings allow a clear distinction according to etiology. All children with facial palsy due to

  16. Does Gaze Direction Modulate Facial Expression Processing in Children with Autism Spectrum Disorder?

    Science.gov (United States)

    Akechi, Hironori; Senju, Atsushi; Kikuchi, Yukiko; Tojo, Yoshikuni; Osanai, Hiroo; Hasegawa, Toshikazu

    2009-01-01

    Two experiments investigated whether children with autism spectrum disorder (ASD) integrate relevant communicative signals, such as gaze direction, when decoding a facial expression. In Experiment 1, typically developing children (9-14 years old; n = 14) were faster at detecting a facial expression accompanying a gaze direction with a congruent…

  17. An analysis of facial nerve function in irradiated and unirradiated facial nerve grafts

    International Nuclear Information System (INIS)

    Brown, Paul D.; Eshleman, Jeffrey S.; Foote, Robert L.; Strome, Scott E.

    2000-01-01

    Purpose: The effect of high-dose radiation therapy on facial nerve grafts is controversial. Some authors believe radiotherapy is so detrimental to the outcome of facial nerve graft function that dynamic or static slings should be performed instead of facial nerve grafts in all patients who are to receive postoperative radiation therapy. Unfortunately, the facial function achieved with dynamic and static slings is almost always inferior to that after facial nerve grafts. In this retrospective study, we compared facial nerve function in irradiated and unirradiated nerve grafts. Methods and Materials: The medical records of 818 patients with neoplasms involving the parotid gland who received treatment between 1974 and 1997 were reviewed, of whom 66 underwent facial nerve grafting. Fourteen patients who died or had a recurrence less than a year after their facial nerve graft were excluded. The median follow-up for the remaining 52 patients was 10.6 years. Cable nerve grafts were performed in 50 patients and direct anastomoses of the facial nerve in two. Facial nerve function was scored by means of the House-Brackmann (H-B) facial grading system. Twenty-eight of the 52 patients received postoperative radiotherapy. The median time from nerve grafting to start of radiotherapy was 5.1 weeks. The median and mean doses of radiation were 6000 and 6033 cGy, respectively, for the irradiated grafts. One patient received preoperative radiotherapy to a total dose of 5000 cGy in 25 fractions and underwent surgery 1 month after the completion of radiotherapy. This patient was placed, by convention, in the irradiated facial nerve graft cohort. Results: Potential prognostic factors for facial nerve function such as age, gender, extent of surgery at the time of nerve grafting, preoperative facial nerve palsy, duration of preoperative palsy if present, or number of previous operations in the parotid bed were relatively well balanced between irradiated and unirradiated patients. However

  18. Automatic temporal segment detection via bilateral long short-term memory recurrent neural networks

    Science.gov (United States)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun; Li, Liandong

    2017-03-01

    Constrained by the physiology, the temporal factors associated with human behavior, irrespective of facial movement or body gesture, are described by four phases: neutral, onset, apex, and offset. Although they may benefit related recognition tasks, it is not easy to accurately detect such temporal segments. An automatic temporal segment detection framework using bilateral long short-term memory recurrent neural networks (BLSTM-RNN) to learn high-level temporal-spatial features, which synthesizes the local and global temporal-spatial information more efficiently, is presented. The framework is evaluated in detail over the face and body database (FABO). The comparison shows that the proposed framework outperforms state-of-the-art methods for solving the problem of temporal segment detection.
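
    A minimal sketch of the core idea, frame-wise labeling of the four temporal phases with a bidirectional LSTM, is given below. The feature dimension, layer sizes, and use of PyTorch are illustrative assumptions, not the configuration the authors evaluated on FABO:

      import torch
      import torch.nn as nn

      class BLSTMSegmenter(nn.Module):
          """Frame-wise labeling of neutral / onset / apex / offset phases."""
          def __init__(self, feat_dim=136, hidden=64, num_phases=4):
              super().__init__()
              self.lstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                                  batch_first=True, bidirectional=True)
              self.head = nn.Linear(2 * hidden, num_phases)   # forward + backward states

          def forward(self, x):              # x: (batch, frames, feat_dim)
              out, _ = self.lstm(x)
              return self.head(out)          # per-frame phase logits

      model = BLSTMSegmenter()
      frames = torch.randn(1, 120, 136)            # e.g., 120 frames of landmark features
      phases = model(frames).argmax(dim=-1)        # 0=neutral, 1=onset, 2=apex, 3=offset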

  19. Temporal neural mechanisms underlying conscious access to different levels of facial stimulus contents.

    Science.gov (United States)

    Hsu, Shen-Mou; Yang, Yu-Fang

    2018-04-01

    An important issue facing the empirical study of consciousness concerns how the contents of incoming stimuli gain access to conscious processing. According to classic theories, facial stimuli are processed in a hierarchical manner. However, it remains unclear how the brain determines which level of stimulus content is consciously accessible when facing an incoming facial stimulus. Accordingly, with a magnetoencephalography technique, this study aims to investigate the temporal dynamics of the neural mechanism mediating which level of stimulus content is consciously accessible. Participants were instructed to view masked target faces at threshold so that, according to behavioral responses, their perceptual awareness alternated from consciously accessing facial identity in some trials to being able to consciously access facial configuration features but not facial identity in other trials. Conscious access at these two levels of facial contents were associated with a series of differential neural events. Before target presentation, different patterns of phase angle adjustment were observed between the two types of conscious access. This effect was followed by stronger phase clustering for awareness of facial identity immediately during stimulus presentation. After target onset, conscious access to facial identity, as opposed to facial configural features, was able to elicit more robust late positivity. In conclusion, we suggest that the stages of neural events, ranging from prestimulus to stimulus-related activities, may operate in combination to determine which level of stimulus contents is consciously accessed. Conscious access may thus be better construed as comprising various forms that depend on the level of stimulus contents accessed. NEW & NOTEWORTHY The present study investigates how the brain determines which level of stimulus contents is consciously accessible when facing an incoming facial stimulus. Using magnetoencephalography, we show that prestimulus

  20. Capturing Physiology of Emotion along Facial Muscles: A Method of Distinguishing Feigned from Involuntary Expressions

    Science.gov (United States)

    Khan, Masood Mehmood; Ward, Robert D.; Ingleby, Michael

    The ability to distinguish feigned from involuntary expressions of emotions could help in the investigation and treatment of neuropsychiatric and affective disorders and in the detection of malingering. This work investigates differences in emotion-specific patterns of thermal variations along the major facial muscles. Using experimental data extracted from 156 images, we attempted to classify patterns of emotion-specific thermal variations into neutral, and voluntary and involuntary expressions of positive and negative emotive states. Initial results suggest (i) each facial muscle exhibits a unique thermal response to various emotive states; (ii) the pattern of thermal variances along the facial muscles may assist in classifying voluntary and involuntary facial expressions; and (iii) facial skin temperature measurements along the major facial muscles may be used in automated emotion assessment.
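
    One way to operationalize this is to summarize each image as a per-muscle thermal-variation vector and feed it to a standard classifier. The sketch below is an assumed pipeline for illustration only; the region list, masks, and SVM choice are not taken from the paper:

      import numpy as np
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # One feature per facial muscle region: change in mean skin temperature
      # relative to a neutral baseline image (region list is illustrative).
      REGIONS = ["frontalis", "orbicularis_oculi", "zygomaticus", "orbicularis_oris", "mentalis"]

      def thermal_features(baseline_img, expression_img, masks):
          """masks: dict mapping region name -> boolean pixel mask over the thermal image."""
          return np.array([expression_img[masks[r]].mean() - baseline_img[masks[r]].mean()
                           for r in REGIONS])

      # X: (n_samples, n_regions) thermal-variation vectors; y: labels such as
      # 'neutral', 'voluntary_positive', 'involuntary_negative', ...
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
      # clf.fit(X_train, y_train); clf.predict(thermal_features(base, expr, masks)[None, :])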

  1. Orthodontic camouflage via total arch movement in a Class II with idiopathic condylar resorption

    Directory of Open Access Journals (Sweden)

    Ji-Sung Jang

    2014-01-01

    Full Text Available Idiopathic condylar resorption (ICR), also known as idiopathic condylysis or condylar atrophy, is a multifactorial pathology leading to severe mandibular retrognathism. Proposed etiologies include avascular necrosis, traumatic injury, and hormonal and autoimmune disease, and the exact cause is largely impossible to establish in an individual patient. Despite the remarkable morphological alteration, surgical intervention is not readily undertaken because of the possibility of recurrent resorption. In this report, we present a camouflage treatment involving total arch movement for a skeletal Class II patient with ICR and facial asymmetry, performed to restore a balanced facial profile and to reconstruct the occlusion.

  2. The Child Affective Facial Expression (CAFE) set: validity and reliability from untrained adults.

    Science.gov (United States)

    LoBue, Vanessa; Thrasher, Cat

    2014-01-01

    Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development-The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for six emotional facial expressions-angry, fearful, sad, happy, surprised, and disgusted-and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  3. Fisioterapia na paralisia facial periférica: estudo retrospectivo Physical therapy in peripheral facial paralysis: retrospective study

    Directory of Open Access Journals (Sweden)

    Márcia Regina Garanhani

    2007-02-01

    Full Text Available Peripheral facial paralysis requires specialized treatment, and physical therapy aims at reestablishing facial movement. The aim of this study was to describe and analyze the results of physical therapy for individuals with peripheral facial paralysis. STUDY DESIGN: Retrospective. METHOD: A retrospective study was carried out at a University Hospital, with authorization from the Statistics and Medical File Services, covering the period from 1999 to 2003. Data are presented descriptively, using mean and median values for numeric variables and frequencies for categorical variables. RESULTS: Twenty-three files covering four years were analyzed. Females predominated and the mean age was 32.3 years (SD ± 16.5); 14 cases were idiopathic and five traumatic; 12 patients had total and 11 partial motor impairment; of the 12 cases with a final evaluation, seven progressed to partial and five to total recovery. The physical therapy provided consisted of kinesiotherapy and patient guidance. CONCLUSION: The individuals in this study are similar to other reported populations. They were treated with kinesiotherapy, as suggested by the scientific literature, and progressed to recovery.

  4. The fate of facial asymmetry after surgery for "muscular torticollis" in early childhood

    Directory of Open Access Journals (Sweden)

    Dinesh Kittur

    2016-01-01

    Full Text Available Aims and Objectives: To study whether the facial features return to normal after surgery for muscular torticollis performed in early childhood. Materials and Methods: This is a long-term study of the fate of facial asymmetry in four children who underwent operation for muscular torticollis in early childhood. All the patients presented late, i.e., after the age of 4 years, with a scarred sternomastoid and plagiocephaly, so conservative management with physiotherapy was not considered. All the patients had an x-ray of the cervical spine and an eye and dental checkup before a diagnosis of muscular torticollis was made. A preoperative photograph of the patient's face was taken to counsel the parents about the secondary effect of a short sternomastoid on the facial features and the need for surgery. After division of the sternomastoid muscle and release of the cervical fascia when indicated, the head was maintained in a hyperextended position supported by sandbags for three days. Gradual physiotherapy was then started, followed by a Minerva collar that the child wore for as much of each 24-hour period as possible. Physiotherapy was continued three times a day until the range of movement of the head returned to normal. During the follow-up, serial photographs were taken to note the changes in the facial features. Results: In all four patients, the asymmetry of the face was corrected and the facial features returned to normal. Conclusion: Most of the facial asymmetry is corrected in the first two years after surgery. By adolescence, the face returns to normal.

  5. Movement detection impaired in patients with knee osteoarthritis compared to healthy controls

    DEFF Research Database (Denmark)

    Lund, H; Juul-Kristensen, Birgit; Hansen, Klaus

    2009-01-01

    The purpose of this study was to clarify whether osteoarthritis (OA) patients have a localized or a generalized reduction in proprioception. Twenty-one women with knee OA (mean age [SD]: 57.1 [12.0] years) and 29 healthy women (mean age [SD]: 55.3 [10.1] years) had their joint position sense (JPS) and threshold to detection of a passive movement (TDPM) measured in both knees and elbows. JPS was measured as the participant's ability to actively reproduce the position of the elbow and knee joints. TDPM was measured as the participant's ability to recognize a passive motion of the elbow and knee joints. The absolute error (AE) for JPS (i.e., absolute difference in degrees between target and estimated position) and for TDPM (i.e., the difference in degrees at movement start and response when recognizing the movement) was calculated. For TDPM a higher AE (mean [SE]) was found in the involved knees in patients...
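
    For reference, the absolute error used for both measures reduces to the unsigned difference in degrees between the two angles being compared (target vs. reproduced position for JPS; angle at movement start vs. angle at the detection response for TDPM):

      def absolute_error(angle_a_deg, angle_b_deg):
          """AE in degrees, e.g. JPS: target vs. reproduced angle; TDPM: start vs. response angle."""
          return abs(angle_a_deg - angle_b_deg)

      # e.g., knee positioned at 30 deg, participant reproduces 34 deg -> AE = 4 deg
      ae_jps = absolute_error(30.0, 34.0)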

  6. Automated facial recognition of manually generated clay facial approximations: Potential application in unidentified persons data repositories.

    Science.gov (United States)

    Parks, Connie L; Monson, Keith L

    2018-01-01

    This research examined how accurately 2D images (i.e., photographs) of 3D clay facial approximations were matched to corresponding photographs of the approximated individuals using an objective automated facial recognition system. Irrespective of search filter (i.e., blind, sex, or ancestry) or rank class (R1, R10, R25, and R50) employed, few operationally informative results were observed. In only a single instance of 48 potential match opportunities was a clay approximation matched to a corresponding life photograph within the top 50 images (R50) of a candidate list, even with relatively small gallery sizes created from the application of search filters (e.g., sex or ancestry search restrictions). Increasing the candidate lists to include the top 100 images (R100) resulted in only two additional instances of correct match. Although other untested variables (e.g., approximation method, 2D photographic process, and practitioner skill level) may have impacted the observed results, this study suggests that 2D images of manually generated clay approximations are not readily matched to life photos by automated facial recognition systems. Further investigation is necessary in order to identify the underlying cause(s), if any, of the poor recognition results observed in this study (e.g., potential inferior facial feature detection and extraction). Additional inquiry exploring prospective remedial measures (e.g., stronger feature differentiation) is also warranted, particularly given the prominent use of clay approximations in unidentified persons casework. Copyright © 2017. Published by Elsevier B.V.
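
    Rank-class results such as R50 are typically scored by counting how many probes have their true match within the top k candidates returned by the recognition system; a minimal sketch of that bookkeeping (illustrative, not the study's software):

      def rank_k_hits(candidate_lists, true_ids, k=50):
          """Count probes whose true match appears in the top-k of the ranked candidate list."""
          return sum(1 for ranked, truth in zip(candidate_lists, true_ids)
                     if truth in ranked[:k])

      # e.g., 48 clay-approximation probes; rank_k_hits(lists, ids, k=50) -> 1 in this study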

  7. Detection of mental imagery and attempted movements in patients with disorders of consciousness using EEG

    Directory of Open Access Journals (Sweden)

    Petar eHorki

    2014-12-01

    Full Text Available Further development of an EEG-based communication device for patients with disorders of consciousness (DoC) could benefit from addressing the following gaps in knowledge: first, an evaluation of different types of motor imagery; second, an evaluation of passive feet movement as a means of initial classifier setup; and third, rapid delivery of biased feedback. To that end we investigated whether complex and/or familiar mental imagery, passive, and attempted feet movement can be reliably detected in patients with DoC using EEG recordings, aiming to provide them with a means of communication. Six patients in a minimally conscious state (MCS) took part in this study. The patients were verbally instructed to perform different mental imagery tasks (sport, navigation), as well as attempted feet movements, to induce distinctive event-related (de)synchronization (ERD/S) patterns in the EEG. Offline classification accuracies above chance level were reached in all three tasks (i.e., attempted feet, sport, and navigation), with the motor tasks yielding significant (p < 0.05) results more often than navigation (sport: 10 out of 18 sessions; attempted feet: 7 out of 14 sessions; navigation: 4 out of 12 sessions). The passive feet movements, evaluated in one patient, yielded mixed results: whereas time-frequency analysis revealed task-related EEG changes over neurophysiologically plausible cortical areas, the classification results were not significant enough (p < 0.05) to set up an initial classifier for the detection of attempted movements. In conclusion, the results presented in this study are consistent with the current state of the art in similar studies, to which we contributed by comparing different types of mental tasks, notably complex motor imagery and attempted feet movements, within patients. Furthermore, we explored new avenues, such as an evaluation of passive feet movement as a means of initial classifier setup, and rapid delivery of biased feedback.
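
    A simplified sketch of ERD/S-style detection as described here: band-power features per EEG channel followed by a linear classifier. The sampling rate, frequency bands, and LDA classifier are illustrative assumptions rather than the study's exact pipeline:

      import numpy as np
      from scipy.signal import welch
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def bandpower(trial, fs, fmin, fmax):
          """Mean power per channel in [fmin, fmax] Hz for one trial (channels x samples)."""
          freqs, psd = welch(trial, fs=fs, nperseg=fs)
          band = (freqs >= fmin) & (freqs <= fmax)
          return psd[:, band].mean(axis=1)

      def erds_features(trials, fs=256):
          # Log band power in mu (8-13 Hz) and beta (16-24 Hz) bands per channel,
          # the usual feature for event-related (de)synchronization analyses.
          return np.array([np.log(np.concatenate([bandpower(t, fs, 8, 13),
                                                  bandpower(t, fs, 16, 24)]))
                           for t in trials])

      # X = erds_features(epochs); y = labels ('rest' vs. 'attempted_feet', etc.)
      clf = LinearDiscriminantAnalysis()
      # clf.fit(X_train, y_train); accuracy then compared against chance level per session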

  8. Facial anatomy.

    Science.gov (United States)

    Marur, Tania; Tuna, Yakup; Demirci, Selman

    2014-01-01

    Dermatologic problems of the face affect both function and aesthetics, which are based on complex anatomical features. Treating dermatologic problems while preserving the aesthetics and functions of the face requires knowledge of normal anatomy. To perform invasive procedures on the face successfully, it is essential to understand its underlying topographic anatomy. This chapter presents the anatomy of the facial musculature and neurovascular structures in a systematic way with some clinically important aspects. We describe the attachments of the mimetic and masticatory muscles and emphasize their functions and nerve supply. We highlight clinically relevant facial topographic anatomy by explaining the course and location of the sensory and motor nerves of the face and facial vasculature with their relations. Additionally, this chapter reviews the recent nomenclature of the branching pattern of the facial artery. © 2013 Elsevier Inc. All rights reserved.

  9. Women living with facial hair: the psychological and behavioral burden.

    Science.gov (United States)

    Lipton, Michelle G; Sherr, Lorraine; Elford, Jonathan; Rustin, Malcolm H A; Clayton, William J

    2006-08-01

    While unwanted facial hair is clearly distressing for women, relatively little is known about its psychological impact. This study reports on the psychological and behavioral burden of facial hair in women with suspected polycystic ovary syndrome. Eighty-eight women (90% participation rate) completed a self-administered questionnaire concerning hair removal practices; the impact of facial hair on social and emotional domains; relationships and daily life; anxiety and depression (Hospital Anxiety and Depression Scale); self-esteem (Rosenberg Self-esteem Scale); and quality of life (WHOQOL-BREF). Women spent considerable time on the management of their facial hair (mean, 104 min/week). Two thirds (67%) reported continually checking in mirrors and 76% by touch. Forty percent felt uncomfortable in social situations. High levels of emotional distress and psychological morbidity were detected; 30% had levels of depression above the clinical cut off point, while 75% reported clinical levels of anxiety; 29% reported both. Although overall quality of life was good, scores were low in social and relationship domains--reflecting the impact of unwanted facial hair. Unwanted facial hair carries a high psychological burden for women and represents a significant intrusion into their daily lives. Psychological support is a neglected element of care for these women.

  10. Image Based Hair Segmentation Algorithm for the Application of Automatic Facial Caricature Synthesis

    Directory of Open Access Journals (Sweden)

    Yehu Shen

    2014-01-01

    Full Text Available Hair is a salient feature of the human face region and one of the important cues for face analysis. Accurate detection and representation of the hair region is a key component in the automatic synthesis of human facial caricatures. In this paper, an automatic hair detection algorithm for the application of automatic facial caricature synthesis from a single image is proposed. Firstly, hair regions in training images are labeled manually, and the hair position prior distributions and hair color likelihood distribution function are estimated efficiently from these labels. Secondly, the energy function of the test image is constructed according to the estimated prior distributions of hair location and the hair color likelihood. This energy function is then optimized with the graph cuts technique and an initial hair region is obtained. Finally, the K-means algorithm and image postprocessing techniques are applied to the initial hair region so that the final hair region can be segmented precisely. Experimental results show that the average processing time for each image is about 280 ms and the average hair region detection accuracy is above 90%. The proposed algorithm is applied to a facial caricature synthesis system. Experiments proved that with our proposed hair segmentation algorithm the facial caricatures are vivid and satisfying.
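
    The pipeline can be sketched as a per-pixel energy that combines the learned position prior with the color likelihood, followed by optimization and K-means refinement. In the illustrative code below the graph-cut step is replaced by a simple threshold, and all names are assumptions rather than the authors' implementation:

      import numpy as np
      from sklearn.cluster import KMeans

      def hair_energy(image_lab, position_prior, color_likelihood, w_pos=1.0, w_col=1.0):
          """Per-pixel energy (low = likely hair).
          position_prior: P(hair | x, y) estimated from manually labeled training images.
          color_likelihood: callable returning P(color | hair) for each pixel."""
          p_pos = np.clip(position_prior, 1e-6, 1.0)
          p_col = np.clip(color_likelihood(image_lab), 1e-6, 1.0)
          return -(w_pos * np.log(p_pos) + w_col * np.log(p_col))

      def refine_with_kmeans(image_lab, initial_mask, k=2):
          """Re-cluster pixels of the initial hair region to trim background leakage."""
          pix = image_lab[initial_mask].reshape(-1, 3)
          labels = KMeans(n_clusters=k, n_init=10).fit_predict(pix)
          keep = labels == np.bincount(labels).argmax()   # keep the dominant cluster
          refined = np.zeros_like(initial_mask)
          refined[initial_mask] = keep
          return refined

      # initial_mask = hair_energy(img, prior, likelihood) < threshold   # stand-in for graph cuts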

  11. [Application of a mathematical algorithm for the detection of electroneuromyographic results in the pathogenesis study of facial dyskinesia].

    Science.gov (United States)

    Gribova, N P; Iudel'son, Ia B; Golubev, V L; Abramenkova, I V

    2003-01-01

    To carry out a differential diagnosis of two facial dyskinesia (FD) models, facial hemispasm (FH) and facial paraspasm (FP), a combined program of electroneuromyographic (ENMG) examination was created, using statistical analyses that included object identification based on a hybrid neural network applying an adaptive fuzzy logic method, as well as standard statistical tests (Wilcoxon, Student). In FH, a lesion of the peripheral facial neuromotor apparatus predominated, with augmented interneuron function at segmental and suprasegmental brainstem levels. In FP, primary afferent strengthening in the mimic muscles was accompanied by increased motor neuron activity and reciprocal augmentation of the interneurons inhibiting the motor portion of the trigeminal (V) nerve. The mathematical algorithm for recognition of ENMG results worked out in the study provides precise differentiation of the two FD models and opens possibilities for the differential diagnosis of other facial motor disorders.

  12. Diplegia facial traumatica Traumatic facial diplegia: a case report

    Directory of Open Access Journals (Sweden)

    J. Fortes-Rego

    1975-12-01

    Full Text Available A case of incomplete bilateral facial paralysis (traumatic facial diplegia) associated with partial loss of hearing on the left side following head injury is reported. X-rays showed fractures of the occipital and left temporal bones. Some considerations are made in an attempt to relate these manifestations to fractures of the temporal bone, and a review of traumatic facial paralysis is presented.

  13. [Screening for psychiatric risk factors in a facial trauma patients. Validating a questionnaire].

    Science.gov (United States)

    Foletti, J M; Bruneau, S; Farisse, J; Thiery, G; Chossegros, C; Guyot, L

    2014-12-01

    We recorded similarities between patients managed in the psychiatry department and in the maxillo-facial surgical unit. Our hypothesis was that some psychiatric conditions act as risk factors for facial trauma. Our aim was to test this hypothesis and to validate a simple and efficient questionnaire to identify these psychiatric disorders. Fifty-eight consenting patients with facial trauma, recruited prospectively in the 3 maxillo-facial surgery departments of the Marseille area during 3 months (December 2012-March 2013), completed a self-questionnaire based on the French version of 3 validated screening tests (Self Reported Psychopathy test, Rapid Alcohol Problem Screening test quantity-frequency, and Personal Health Questionnaire). This preliminary study confirmed that the psychiatric conditions detected by our questionnaire, namely alcohol abuse and dependence, substance abuse, and depression, were risk factors for facial trauma. Maxillo-facial surgeons are often unaware of psychiatric disorders that may be the cause of facial trauma. The self-screening test we propose allows documenting the psychiatric history of patients and implementing earlier psychiatric care. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  14. Bell's palsy and choreiform movements during peginterferon α and ribavirin therapy

    Institute of Scientific and Technical Information of China (English)

    Sener Barut; Hatice Karaer; Erol Oksuz; Aslı Gündoğdu Eken; Ayşe Nazlı Başak

    2009-01-01

    Neuropsychiatric side effects of long-term recombinant interferon-α therapy consist of a large spectrum of symptoms. In the literature, cranial neuropathy, especially Bell's palsy, and movement disorders, have been reported much less often than other neurotoxic effects. We report a case of Bell's palsy in a patient with chronic hepatitis C during peginterferon-α and ribavirin therapy. The patient subsequently developed clinically inapparent facial nerve involvement on the contralateral side and showed an increase in choreic movements related to Huntington's disease during treatment.

  15. [Peripheral facial nerve lesion induced long-term dendritic retraction in pyramidal cortico-facial neurons].

    Science.gov (United States)

    Urrego, Diana; Múnera, Alejandro; Troncoso, Julieta

    2011-01-01

    Little evidence is available concerning the morphological modifications of motor cortex neurons associated with peripheral nerve injuries, and the consequences of those injuries on post lesion functional recovery. Dendritic branching of cortico-facial neurons was characterized with respect to the effects of irreversible facial nerve injury. Twenty-four adult male rats were distributed into four groups: sham (no lesion surgery), and dendritic assessment at 1, 3 and 5 weeks post surgery. Eighteen lesion animals underwent surgical transection of the mandibular and buccal branches of the facial nerve. Dendritic branching was examined by contralateral primary motor cortex slices stained with the Golgi-Cox technique. Layer V pyramidal (cortico-facial) neurons from sham and injured animals were reconstructed and their dendritic branching was compared using Sholl analysis. Animals with facial nerve lesions displayed persistent vibrissal paralysis throughout the five week observation period. Compared with control animal neurons, cortico-facial pyramidal neurons of surgically injured animals displayed shrinkage of their dendritic branches at statistically significant levels. This shrinkage persisted for at least five weeks after facial nerve injury. Irreversible facial motoneuron axonal damage induced persistent dendritic arborization shrinkage in contralateral cortico-facial neurons. This morphological reorganization may be the physiological basis of functional sequelae observed in peripheral facial palsy patients.

  16. Facial Pain Followed by Unilateral Facial Nerve Palsy: A Case Report with Literature Review

    OpenAIRE

    GV, Sowmya; BS, Manjunatha; Goel, Saurabh; Singh, Mohit Pal; Astekar, Madhusudan

    2014-01-01

    Peripheral facial nerve palsy is the commonest cranial nerve motor neuropathy. The causes range from cerebrovascular accident to iatrogenic damage, but there are few reports of facial nerve paralysis attributable to odontogenic infections. In majority of the cases, recovery of facial muscle function begins within first three weeks after onset. This article reports a unique case of 32-year-old male patient who developed facial pain followed by unilateral facial nerve paralysis due to odontogen...

  17. The Influence of Facial Signals on the Automatic Imitation of Hand Actions.

    Science.gov (United States)

    Butler, Emily E; Ward, Robert; Ramsey, Richard

    2016-01-01

    Imitation and facial signals are fundamental social cues that guide interactions with others, but little is known regarding the relationship between these behaviors. It is clear that during expression detection, we imitate observed expressions by engaging similar facial muscles. It is proposed that a cognitive system, which matches observed and performed actions, controls imitation and contributes to emotion understanding. However, there is little known regarding the consequences of recognizing affective states for other forms of imitation, which are not inherently tied to the observed emotion. The current study investigated the hypothesis that facial cue valence would modulate automatic imitation of hand actions. To test this hypothesis, we paired different types of facial cue with an automatic imitation task. Experiments 1 and 2 demonstrated that a smile prompted greater automatic imitation than angry and neutral expressions. Additionally, a meta-analysis of this and previous studies suggests that both happy and angry expressions increase imitation compared to neutral expressions. By contrast, Experiments 3 and 4 demonstrated that invariant facial cues, which signal trait-levels of agreeableness, had no impact on imitation. Despite readily identifying trait-based facial signals, levels of agreeableness did not differentially modulate automatic imitation. Further, a Bayesian analysis showed that the null effect was between 2 and 5 times more likely than the experimental effect. Therefore, we show that imitation systems are more sensitive to prosocial facial signals that indicate "in the moment" states than enduring traits. These data support the view that a smile primes multiple forms of imitation including the copying actions that are not inherently affective. The influence of expression detection on wider forms of imitation may contribute to facilitating interactions between individuals, such as building rapport and affiliation.

  18. Facial infiltrative lipomatosis

    International Nuclear Information System (INIS)

    Haloi, A.K.; Ditchfield, M.; Pennington, A.; Philips, R.

    2006-01-01

    Although there are multiple case reports and small series concerning facial infiltrative lipomatosis, there is no composite radiological description of the condition. Our aim was the radiological evaluation of facial infiltrative lipomatosis using plain film, sonography, CT and MRI. We radiologically evaluated four patients with facial infiltrative lipomatosis. Initial plain radiographs of the face were acquired in all patients. Three children had an initial sonographic examination to evaluate the condition, followed by MRI. One child had a CT and then MRI. One child had abnormalities on plain radiographs. Sonographically, the lesions were seen as ill-defined heterogeneously hypoechoic areas with indistinct margins. On CT images, the lesions did not have a homogeneous fat density but showed some relatively more dense areas in deeper parts of the lesions. MRI provided better delineation of the exact extent of the process and characterization of facial infiltrative lipomatosis. Facial infiltrative lipomatosis should be considered as a differential diagnosis of vascular or lymphatic malformation when a child presents with unilateral facial swelling. MRI is the most useful single imaging modality to evaluate the condition, as it provides the best delineation of the exact extent of the process. (orig.)

  19. The Child Affective Facial Expression (CAFE Set: Validity and Reliability from Untrained Adults

    Directory of Open Access Journals (Sweden)

    Vanessa eLoBue

    2015-01-01

    Full Text Available Emotional development is one of the largest and most productive areas of psychological research. For decades, researchers have been fascinated by how humans respond to, detect, and interpret emotional facial expressions. Much of the research in this area has relied on controlled stimulus sets of adults posing various facial expressions. Here we introduce a new stimulus set of emotional facial expressions into the domain of research on emotional development—The Child Affective Facial Expression set (CAFE). The CAFE set features photographs of a racially and ethnically diverse group of 2- to 8-year-old children posing for 6 emotional facial expressions—angry, fearful, sad, happy, surprised, and disgusted—and a neutral face. In the current work, we describe the set and report validity and reliability data on the set from 100 untrained adult participants.

  20. Human movement onset detection from isometric force and torque measurements: a supervised pattern recognition approach.

    Science.gov (United States)

    Soda, Paolo; Mazzoleni, Stefano; Cavallo, Giuseppe; Guglielmelli, Eugenio; Iannello, Giulio

    2010-09-01

    Recent research has successfully introduced the application of robotics and mechatronics to functional assessment and motor therapy. Measurements of movement initiation in isometric conditions are widely used in clinical rehabilitation and their importance in functional assessment has been demonstrated for specific parts of the human body. The determination of the voluntary movement initiation time, also referred to as onset time, represents a challenging issue, since the time window characterizing the movement onset is of particular relevance for the understanding of recovery mechanisms after neurological damage. Establishing it manually is a troublesome task and may also introduce oversight errors and loss of information. The most commonly used methods for automatic onset time detection compare the raw signal, or some extracted measures such as its derivatives (i.e., velocity and acceleration), with a chosen threshold. However, they suffer from high variability and systematic errors because of the weakness of the signal, the abnormality of response profiles, as well as the variability of movement initiation times among patients. In this paper, we introduce a technique to optimise onset detection according to each input signal. It is based on a classification system that enables us to establish which deterministic method provides the most accurate onset time on the basis of information directly derived from the raw signal. The approach was tested on annotated force and torque datasets. Each dataset is constituted by 768 signals acquired from eight anatomical districts in 96 patients who carried out six tasks related to common daily activities. The results show that the proposed technique improves not only on the performance achieved by each of the deterministic methods, but also on that attained by a group of clinical experts. The paper describes a classification system detecting the voluntary movement initiation time and adaptable to different signals. By
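
    The idea of selecting, per signal, the deterministic detector expected to be most accurate can be sketched as follows; the two threshold detectors and the selector interface are illustrative assumptions, not the paper's classifier:

      import numpy as np

      def onset_by_amplitude(force, fs, frac=0.05):
          """First time the force rises a fraction of its peak above the resting baseline."""
          baseline = force[: int(0.1 * fs)].mean()
          idx = np.argmax(force > baseline + frac * (force.max() - baseline))
          return idx / fs

      def onset_by_velocity(force, fs, frac=0.05):
          """First time the force derivative exceeds a fraction of its peak value."""
          vel = np.gradient(force) * fs
          idx = np.argmax(vel > frac * vel.max())
          return idx / fs

      def detect_onset(force, fs, choose_method):
          """choose_method: a trained classifier mapping raw-signal descriptors
          (e.g., noise level, peak force) to the name of the preferred detector."""
          detectors = {"amplitude": onset_by_amplitude, "velocity": onset_by_velocity}
          return detectors[choose_method(force)](force, fs)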

  1. Chondromyxoid fibroma of the mastoid facial nerve canal mimicking a facial nerve schwannoma.

    Science.gov (United States)

    Thompson, Andrew L; Bharatha, Aditya; Aviv, Richard I; Nedzelski, Julian; Chen, Joseph; Bilbao, Juan M; Wong, John; Saad, Reda; Symons, Sean P

    2009-07-01

    Chondromyxoid fibroma of the skull base is a rare entity. Involvement of the temporal bone is particularly rare. We present an unusual case of progressive facial nerve paralysis with imaging and clinical findings most suggestive of a facial nerve schwannoma. The lesion was tubular in appearance, expanded the mastoid facial nerve canal, protruded out of the stylomastoid foramen, and enhanced homogeneously. The only unusual imaging feature was minor calcification within the tumor. Surgery revealed an irregular, cystic lesion. Pathology diagnosed a chondromyxoid fibroma involving the mastoid portion of the facial nerve canal, destroying the facial nerve.

  2. Intuitive Face Judgments Rely on Holistic Eye Movement Pattern.

    Science.gov (United States)

    Mega, Laura F; Volz, Kirsten G

    2017-01-01

    Non-verbal signals such as facial expressions are of paramount importance for social encounters. Their perception predominantly occurs without conscious awareness and is effortlessly integrated into social interactions. In other words, face perception is intuitive. Contrary to classical intuition tasks, this work investigates intuitive processes in the realm of every-day type social judgments. Two differently instructed groups of participants judged the authenticity of emotional facial expressions, while their eye movements were recorded: an 'intuitive group,' instructed to rely on their "gut feeling" for the authenticity judgments, and a 'deliberative group,' instructed to make their judgments after careful analysis of the face. Pixel-wise statistical maps of the resulting eye movements revealed a differential viewing pattern, wherein the intuitive judgments relied on fewer, longer and more centrally located fixations. These markers have been associated with a global/holistic viewing strategy. The holistic pattern of intuitive face judgments is in line with evidence showing that intuition is related to processing the "gestalt" of an object, rather than focusing on details. Our work thereby provides further evidence that intuitive processes are characterized by holistic perception, in an understudied and real world domain of intuition research.

  3. Discrete vs. Continuous Mapping of Facial Electromyography for Human-Machine-Interface Control: Performance and Training Effects

    Science.gov (United States)

    Cler, Meredith J.; Stepp, Cara E.

    2015-01-01

    Individuals with high spinal cord injuries are unable to operate a keyboard and mouse with their hands. In this experiment, we compared two systems using surface electromyography (sEMG) recorded from facial muscles to control an onscreen keyboard to type five-letter words. Both systems used five sEMG sensors to capture muscle activity during five distinct facial gestures that were mapped to five cursor commands: move left, move right, move up, move down, and “click”. One system used a discrete movement and feedback algorithm in which the user produced one quick facial gesture, causing a corresponding discrete movement to an adjacent letter. The other system was continuously updated and allowed the user to control the cursor’s velocity by relative activation between different sEMG channels. Participants were trained on one system for four sessions on consecutive days, followed by one crossover session on the untrained system. Information transfer rates (ITRs) were high for both systems compared to other potential input modalities, both initially and with training (Session 1: 62.1 bits/min, Session 4: 105.1 bits/min). Users of the continuous system showed significantly higher ITRs than the discrete users. Future development will focus on improvements to both systems, which may offer differential advantages for users with various motor impairments. PMID:25616053
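
    The two mappings can be sketched as follows: the continuous scheme converts the relative activation of opposing channels into a cursor velocity, while the discrete scheme maps one supra-threshold gesture to a single step. Channel names, gains, and thresholds are illustrative assumptions, not the study's calibration:

      # Channel keys are illustrative: gestures steering left, right, up, down, plus "click"
      def continuous_update(rms, gain=5.0, click_thresh=0.6):
          """Map normalized sEMG RMS activations (0..1 per channel) to a velocity update."""
          vx = gain * (rms["right"] - rms["left"])
          vy = gain * (rms["down"] - rms["up"])
          return vx, vy, rms["click"] > click_thresh

      def discrete_update(rms, step=1, gesture_thresh=0.5):
          """Discrete alternative: one quick gesture moves the cursor one letter."""
          active = max(rms, key=rms.get)
          if rms[active] < gesture_thresh:
              return 0, 0, False
          moves = {"left": (-step, 0), "right": (step, 0), "up": (0, -step), "down": (0, step)}
          dx, dy = moves.get(active, (0, 0))
          return dx, dy, active == "click"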

  4. Development of the Korean Facial Emotion Stimuli: Korea University Facial Expression Collection 2nd Edition

    Directory of Open Access Journals (Sweden)

    Sun-Min Kim

    2017-05-01

    Full Text Available Background: Developing valid emotional facial stimuli for specific ethnicities creates ample opportunities to investigate both the nature of emotional facial information processing in general and clinical populations as well as the underlying mechanisms of facial emotion processing within and across cultures. Given that most entries in emotional facial stimuli databases were developed with western samples, and given that very few of the eastern emotional facial stimuli sets were based strictly on the Ekman's Facial Action Coding System, developing valid emotional facial stimuli of eastern samples remains a high priority. Aims: To develop and examine the psychometric properties of six basic emotional facial stimuli, recruiting professional Korean actors and actresses, based on the Ekman's Facial Action Coding System for the Korea University Facial Expression Collection-Second Edition (KUFEC-II). Materials and Methods: Stimulus selection was done in two phases. First, researchers evaluated the clarity and intensity of each stimulus developed based on the Facial Action Coding System. Second, researchers selected a total of 399 stimuli from a total of 57 actors and actresses, which were then rated on accuracy, intensity, valence, and arousal by 75 independent raters. Conclusion: The hit rates between the targeted and rated expressions of the KUFEC-II were all above 80%, except for fear (50%) and disgust (63%). The KUFEC-II appears to be a valid emotional facial stimuli database, providing the largest set of emotional facial stimuli. The mean intensity score was 5.63 (out of 7), suggesting that the stimuli delivered the targeted emotions with great intensity. All positive expressions were rated as having a high positive valence, whereas all negative expressions were rated as having a high negative valence. The KUFEC-II is expected to be widely used in various psychological studies on emotional facial expression. KUFEC-II stimuli can be obtained through

  5. Caricaturing facial expressions.

    Science.gov (United States)

    Calder, A J; Rowland, D; Young, A W; Nimmo-Smith, I; Keane, J; Perrett, D I

    2000-08-14

    The physical differences between facial expressions (e.g. fear) and a reference norm (e.g. a neutral expression) were altered to produce photographic-quality caricatures. In Experiment 1, participants rated caricatures of fear, happiness and sadness for their intensity of these three emotions; a second group of participants rated how 'face-like' the caricatures appeared. With increasing levels of exaggeration the caricatures were rated as more emotionally intense, but less 'face-like'. Experiment 2 demonstrated a similar relationship between emotional intensity and level of caricature for six different facial expressions. Experiments 3 and 4 compared intensity ratings of facial expression caricatures prepared relative to a selection of reference norms - a neutral expression, an average expression, or a different facial expression (e.g. anger caricatured relative to fear). Each norm produced a linear relationship between caricature and rated intensity of emotion; this finding is inconsistent with two-dimensional models of the perceptual representation of facial expression. An exemplar-based multidimensional model is proposed as an alternative account.
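
    The underlying operation is straightforward: each feature is pushed away from its position in the reference norm by a chosen exaggeration level. A minimal sketch on landmark coordinates (illustrative only; the original produces photographic-quality caricatures):

      import numpy as np

      def caricature(landmarks, norm_landmarks, level=0.5):
          """Exaggerate an expression relative to a reference norm.
          level=0 reproduces the original, level=0.5 is a +50% caricature,
          and negative levels produce anti-caricatures toward the norm."""
          landmarks = np.asarray(landmarks, dtype=float)
          norm_landmarks = np.asarray(norm_landmarks, dtype=float)
          return norm_landmarks + (1.0 + level) * (landmarks - norm_landmarks)

      # e.g., fear exaggerated by 50% relative to a neutral-expression norm:
      # exaggerated = caricature(fear_points, neutral_points, level=0.5)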

  6. Parotidectomía y vena facial Parotidectomy and facial vein

    Directory of Open Access Journals (Sweden)

    F. Hernández Altemir

    2009-10-01

    Full Text Available Surgery for benign parotid tumors is, generically speaking, surgery of relationships with essentially nervous structures whose damage represents a very serious psychosomatic problem. To assist in the surgical handling of the peripheral facial nerve, this article emphasizes the importance of the facial vein in the dissection and preservation of the nerve, precisely where its dissection is usually most compromised, that is, in the most caudal branches. The work we present should therefore be seen as an appraisal of the venous structures for following and controlling the peripheral facial nerve, and also of the great auricular nerve, which is not always sufficiently valued in parotid surgery because it loses prominence to the facial nerve.

  7. Effect of neural-induced mesenchymal stem cells and platelet-rich plasma on facial nerve regeneration in an acute nerve injury model.

    Science.gov (United States)

    Cho, Hyong-Ho; Jang, Sujeong; Lee, Sang-Chul; Jeong, Han-Seong; Park, Jong-Seong; Han, Jae-Young; Lee, Kyung-Hwa; Cho, Yong-Bum

    2010-05-01

    The purpose of this study was to investigate the effects of platelet-rich plasma (PRP) and neural-induced human mesenchymal stem cells (nMSCs) on axonal regeneration from a facial nerve axotomy injury in a guinea pig model. Prospective, controlled animal study. Experiments involved the transection and repair of the facial nerve in 24 albino guinea pigs. Four groups were created based on the method of repair: suture only (group I, control group); PRP with suture (group II); nMSCs with suture (group III); and PRP and nMSCs with suture (group IV). Each method of repair was applied immediately after nerve transection. The outcomes measured were: 1) functional outcome measurement (vibrissae and eyelid closure movements); 2) electrophysiologic evaluation; 3) neurotrophic factors assay; and 4) histologic evaluation. With respect to the functional outcome measurement, the functional outcomes improved after transection and reanastomosis in all groups. The control group was the slowest to demonstrate recovery of movement after transection and reanastomosis. The other three groups (groups II, III, and IV) had significant improvement in function compared to the control group 4 weeks after surgery (P facial nerve regeneration in an animal model of facial nerve axotomy. The use of nMSCs showed no benefit over the use of PRP in facial nerve regeneration, but the combined use of PRP and nMSCs showed a greater beneficial effect than use of either alone. This study provides evidence for the potential clinical application of PRP and nMSCs in peripheral nerve regeneration of an acute nerve injury. Laryngoscope, 2010.

  8. Gently does it: Humans outperform a software classifier in recognizing subtle, nonstereotypical facial expressions.

    Science.gov (United States)

    Yitzhak, Neta; Giladi, Nir; Gurevich, Tanya; Messinger, Daniel S; Prince, Emily B; Martin, Katherine; Aviezer, Hillel

    2017-12-01

    According to dominant theories of affect, humans innately and universally express a set of emotions using specific configurations of prototypical facial activity. Accordingly, thousands of studies have tested emotion recognition using sets of highly intense and stereotypical facial expressions, yet their incidence in real life is virtually unknown. In fact, a commonplace experience is that emotions are expressed in subtle and nonprototypical forms. Such facial expressions are at the focus of the current study. In Experiment 1, we present the development and validation of a novel stimulus set consisting of dynamic and subtle emotional facial displays conveyed without constraining expressers to using prototypical configurations. Although these subtle expressions were more challenging to recognize than prototypical dynamic expressions, they were still well recognized by human raters, and perhaps most importantly, they were rated as more ecological and naturalistic than the prototypical expressions. In Experiment 2, we examined the characteristics of subtle versus prototypical expressions by subjecting them to a software classifier, which used prototypical basic emotion criteria. Although the software was highly successful at classifying prototypical expressions, it performed very poorly at classifying the subtle expressions. Further validation was obtained from human expert face coders: Subtle stimuli did not contain many of the key facial movements present in prototypical expressions. Together, these findings suggest that emotions may be successfully conveyed to human viewers using subtle nonprototypical expressions. Although classic prototypical facial expressions are well recognized, they appear less naturalistic and may not capture the richness of everyday emotional communication. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Enhancing facial features by using clear facial features

    Science.gov (United States)

    Rofoo, Fanar Fareed Hanna

    2017-09-01

    The similarity of features between individuals of the same ethnicity motivated this project. The idea is to extract features from a clear facial image and impose them on a blurred facial image of the same ethnic origin as an approach to enhancing the blurred image. A database of clear images was used, containing 30 individuals equally divided into five ethnicities: Arab, African, Chinese, European and Indian. Software was built to pre-process the images so as to align the features of the clear and blurred images. Features were extracted from a clear facial image, or from a template built from clear facial images, using the wavelet transform, and were imposed on the blurred image using the inverse wavelet transform. The results of this whole-face approach were poor, because the features did not all align: in most cases the eyes were aligned but the nose or mouth were not. In the next approach the features were therefore treated separately, but in some cases a blocky effect appeared on the features because no closely matching features were available. In general, the small available database limited the results because of the limited number of individuals. Color information and feature similarity could be investigated further to achieve better results, by using a larger database and by improving the enhancement process through the availability of closer matches within each ethnicity.
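
    The wavelet step described above can be sketched as swapping detail sub-bands between aligned images: keep the blurred image's approximation band, borrow the clear image's detail bands, and invert the transform. The single-level Haar decomposition with PyWavelets below is an illustrative assumption, not the project's software:

      import pywt

      def impose_clear_details(blurred, clear, wavelet="haar"):
          """Replace the detail sub-bands of a blurred face with those of an aligned clear
          face of the same ethnicity, then invert the transform.
          Both inputs: 2D grayscale arrays of identical, pre-aligned size."""
          approx_blurred, _ = pywt.dwt2(blurred, wavelet)     # keep blurred approximation band
          _, details_clear = pywt.dwt2(clear, wavelet)        # borrow clear detail coefficients
          return pywt.idwt2((approx_blurred, details_clear), wavelet)

      # enhanced = impose_clear_details(blurred_face, clear_template)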

  10. Survey of methods of facial palsy documentation in use by members of the Sir Charles Bell Society.

    Science.gov (United States)

    Fattah, Adel Y; Gavilan, Javier; Hadlock, Tessa A; Marcus, Jeffrey R; Marres, Henri; Nduka, Charles; Slattery, William H; Snyder-Warwick, Alison K

    2014-10-01

    Facial palsy manifests a broad array of deficits affecting function, form, and psychological well-being. Assessment scales were introduced to standardize and document the features of facial palsy and to facilitate the exchange of information and comparison of outcomes. The aim of this study was to determine which assessment methodologies are currently employed by those involved in the care of patients with facial palsy as a first step toward the development of consensus on the appropriate assessments for this patient population. Online questionnaire. The Sir Charles Bell Society, a group of professionals dedicated to the care of patients with facial palsy, were surveyed to determine the scales used to document facial nerve function, patient reported outcome measures (PROM), and photographic documentation. Fifty-five percent of the membership responded (n = 83). Grading scales were used by 95%, most commonly the House-Brackmann and Sunnybrook scales. PROMs were used by 58%, typically the Facial Clinimetric Evaluation scale or Facial Disability Index. All used photographic recordings, but variability existed among the facial expressions used. Videography was performed by 82%, and mostly involved the same views as still photography; it was also used to document spontaneous movement and speech. Three-dimensional imaging was employed by 18% of respondents. There exists significant heterogeneity in assessments among clinicians, which impedes straightforward comparisons of outcomes following recovery and intervention. Widespread adoption of structured assessments, including scales, PROMs, photography, and videography, will facilitate communication and comparison among those who study the effects of interventions on this population. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  11. Automatic Processing of Changes in Facial Emotions in Dysphoria: A Magnetoencephalography Study.

    Science.gov (United States)

    Xu, Qianru; Ruohonen, Elisa M; Ye, Chaoxiong; Li, Xueqiao; Kreegipuu, Kairi; Stefanics, Gabor; Luo, Wenbo; Astikainen, Piia

    2018-01-01

    It is not known to what extent the automatic encoding and change detection of peripherally presented facial emotion is altered in dysphoria. The negative bias in automatic face processing in particular has rarely been studied. We used magnetoencephalography (MEG) to record automatic brain responses to happy and sad faces in dysphoric (Beck's Depression Inventory ≥ 13) and control participants. Stimuli were presented in a passive oddball condition, which allowed potential negative bias in dysphoria at different stages of face processing (M100, M170, and M300) and alterations of change detection (visual mismatch negativity, vMMN) to be investigated. The magnetic counterpart of the vMMN was elicited at all stages of face processing, indexing automatic deviance detection in facial emotions. The M170 amplitude was modulated by emotion, response amplitudes being larger for sad faces than happy faces. Group differences were found for the M300, and they were indexed by two different interaction effects. At the left occipital region of interest, the dysphoric group had larger amplitudes for sad than happy deviant faces, reflecting negative bias in deviance detection, which was not found in the control group. On the other hand, the dysphoric group showed no vMMN to changes in facial emotions, while the vMMN was observed in the control group at the right occipital region of interest. Our results indicate that there is a negative bias in automatic visual deviance detection, but also a general change detection deficit in dysphoria.
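
    For context, a visual mismatch response of this kind is usually quantified as the difference between the average response to deviant faces and to standard faces over posterior sensors; a minimal sketch (the sensor selection and analysis window are illustrative):

      import numpy as np

      def difference_wave(deviant_epochs, standard_epochs, sensor_idx):
          """vMMN-style difference wave: mean deviant response minus mean standard response.
          Epoch arrays have shape (n_trials, n_sensors, n_times)."""
          dev = deviant_epochs[:, sensor_idx, :].mean(axis=(0, 1))
          std = standard_epochs[:, sensor_idx, :].mean(axis=(0, 1))
          return dev - std

      # e.g., mean amplitude in a post-stimulus window:
      # window = (times >= 0.15) & (times <= 0.35)
      # vmmn_amp = difference_wave(dev, std, occipital_idx)[window].mean()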

  12. Facial reanimation by muscle-nerve neurotization after facial nerve sacrifice. Case report.

    Science.gov (United States)

    Taupin, A; Labbé, D; Babin, E; Fromager, G

    2016-12-01

    Recovering a certain degree of mimicry after sacrifice of the facial nerve is a clinically recognized finding. The authors report a case of hemifacial reanimation suggesting a phenomenon of muscle-to-nerve neurotization. A woman underwent a parotidectomy with sacrifice of the left facial nerve, indicated for a recurrent tumor in the gland. The distal branches of the facial nerve, isolated at the time of resection, were buried in the underlying masseter muscle. The patient recovered voluntary hemifacial motor function. Electromyographic analysis of the motor activity of the zygomaticus major before and after block of the masseter nerve showed a dependence of the mimic muscles on the masseter muscle. Several hypotheses have been advanced to explain the spontaneous reanimation of facial paralysis. This clinical case argues in favor of muscle-to-nerve neurotization from the masseter muscle to the distal branches of the facial nerve, and it illustrates the quality of movement that can be obtained with this procedure. The authors describe a simple technique of implanting the distal branches of the facial nerve in the masseter muscle during radical parotidectomy with facial nerve sacrifice, with recovery of resting tone as well as good-quality voluntary mimicry. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  13. Elevated responses to constant facial emotions in different faces in the human amygdala: an fMRI study of facial identity and expression

    Directory of Open Access Journals (Sweden)

    Weiller Cornelius

    2004-11-01

    Full Text Available Abstract Background Human faces provide important signals in social interactions by conveying two main types of information, individual identity and emotional expression. The ability to readily assess both the variability and the consistency of emotional expressions in different individuals is central to one's own interpretation of the imminent environment. A factorial design was used to systematically test the interaction of either constant or variable emotional expressions with constant or variable facial identities in areas involved in face processing using functional magnetic resonance imaging. Results Previous studies suggest a predominant role of the amygdala in the assessment of emotional variability. Here we extend this view by showing that this structure activated to faces with changing identities that display constant emotional expressions. Within this condition, amygdala activation was dependent on the type and intensity of displayed emotion, with significant responses to fearful expressions and, to a lesser extent, to neutral and happy expressions. In contrast, the lateral fusiform gyrus showed a binary pattern of increased activation to changing stimulus features while it was also differentially responsive to the intensity of displayed emotion when processing different facial identities. Conclusions These results suggest that the amygdala might serve to detect constant facial emotions in different individuals, complementing its established role for detecting emotional variability.

  14. Optogenetic probing of nerve and muscle function after facial nerve lesion in the mouse whisker system

    Science.gov (United States)

    Bandi, Akhil; Vajtay, Thomas J.; Upadhyay, Aman; Yiantsos, S. Olga; Lee, Christian R.; Margolis, David J.

    2018-02-01

    Optogenetic modulation of neural circuits has opened new avenues into neuroscience research, allowing the control of cellular activity of genetically specified cell types. Optogenetics is still underdeveloped in the peripheral nervous system, yet there are many applications related to sensorimotor function, pain and nerve injury that would be of great benefit. We recently established a method for non-invasive, transdermal optogenetic stimulation of the facial muscles that control whisker movements in mice (Park et al., 2016, eLife, e14140). Here we present results comparing the effects of optogenetic stimulation of whisker movements in mice that express channelrhodopsin-2 (ChR2) selectively in either the facial motor nerve (ChAT-ChR2 mice) or muscle (Emx1-ChR2 or ACTA1-ChR2 mice). We tracked changes in nerve and muscle function before and up to 14 days after nerve transection. Optogenetic 460 nm transdermal stimulation of the distal cut nerve showed that nerve degeneration progresses rapidly over 24 hours. In contrast, the whisker movements evoked by optogenetic muscle stimulation were up-regulated after denervation, including increased maximum protraction amplitude, increased sensitivity to low-intensity stimuli, and more sustained muscle contractions (reduced adaptation). Our results indicate that peripheral optogenetic stimulation is a promising technique for probing the time course of functional changes of both nerve and muscle, and holds potential for restoring movement after paralysis induced by nerve damage or motoneuron degeneration.

  15. Representing affective facial expressions for robots and embodied conversational agents by facial landmarks

    NARCIS (Netherlands)

    Liu, C.; Ham, J.R.C.; Postma, E.O.; Midden, C.J.H.; Joosten, B.; Goudbeek, M.

    2013-01-01

    Affective robots and embodied conversational agents require convincing facial expressions to make them socially acceptable. To be able to virtually generate facial expressions, we need to investigate the relationship between technology and human perception of affective and social signals. Facial

  16. A new look at emotion perception: Concepts speed and shape facial emotion recognition.

    Science.gov (United States)

    Nook, Erik C; Lindquist, Kristen A; Zaki, Jamil

    2015-10-01

    Decades ago, the "New Look" movement challenged how scientists thought about vision by suggesting that conceptual processes shape visual perceptions. Currently, affective scientists are likewise debating the role of concepts in emotion perception. Here, we utilized a repetition-priming paradigm in conjunction with signal detection and individual difference analyses to examine how providing emotion labels (which correspond to discrete emotion concepts) affects emotion recognition. In Study 1, pairing emotional faces with emotion labels (e.g., "sad") increased individuals' speed and sensitivity in recognizing emotions. Additionally, individuals with alexithymia, who have difficulty labeling their own emotions, struggled to recognize emotions based on visual cues alone, but not when emotion labels were provided. Study 2 replicated these findings and further demonstrated that emotion concepts can shape perceptions of facial expressions. Together, these results suggest that emotion perception involves conceptual processing. We discuss the implications of these findings for affective, social, and clinical psychology. (c) 2015 APA, all rights reserved.

  17. Achados fonoaudiológicos em pacientes submetidos a anastomose hipoglosso facial Phonoaudiological findings in patients submitted to hypoglossal-facial anastomosis

    Directory of Open Access Journals (Sweden)

    Elisabete C. C. F. Silva

    2003-06-01

    The aim of the present research is to verify the mobility of the phonoarticulatory organs and the functions of speech, chewing, and swallowing in patients subjected to HFA. STUDY DESIGN: Clinical prospective. MATERIAL AND METHOD: Eight patients with peripheral facial paralysis (PFP) who underwent HFA at UNIFESP/EPM between 1989 and 2000 were evaluated: 6 females and 2 males, aged between 21 and 71 years with an average of 50 years. Of these, 5 were operated after exeresis of an acoustic neurinoma, 1 after exeresis of a fibrosarcoma, 1 after a gunshot wound, and 1 after idiopathic peripheral facial paralysis of poor evolution. The phonoaudiological evaluation protocol included identification data; classification of the facial nerve; treatments carried out; facial symmetry at rest and on voluntary movement; synkinesis of the eyes, mouth, nose, and cheeks; phonoarticulatory and tongue disorders; changes in chewing and in the palate; and a questionnaire concerning the appearance of the respective disturbances. RESULTS: The post-anastomosis and post-rehabilitation grades ranged between II and V for the eyes and between III and V for the mouth (House & Brackmann, 1985). We concluded that recovery was satisfactory and important, although it fell short of the patients' expectations. The following were also noted: articulatory imprecision, chewing dysfunction, deficient sphincteric function of the oral muscles, and dysphagia.

  18. Visual Scan Paths and Recognition of Facial Identity in Autism Spectrum Disorder and Typical Development

    Science.gov (United States)

    Wilson, C. Ellie; Palermo, Romina; Brock, Jon

    2012-01-01

    Background Previous research suggests that many individuals with autism spectrum disorder (ASD) have impaired facial identity recognition, and also exhibit abnormal visual scanning of faces. Here, two hypotheses accounting for an association between these observations were tested: i) better facial identity recognition is associated with increased gaze time on the Eye region; ii) better facial identity recognition is associated with increased eye-movements around the face. Methodology and Principal Findings Eye-movements of 11 children with ASD and 11 age-matched typically developing (TD) controls were recorded whilst they viewed a series of faces, and then completed a two alternative forced-choice recognition memory test for the faces. Scores on the memory task were standardized according to age. In both groups, there was no evidence of an association between the proportion of time spent looking at the Eye region of faces and age-standardized recognition performance, thus the first hypothesis was rejected. However, the ‘Dynamic Scanning Index’ – which was incremented each time the participant saccaded into and out of one of the core-feature interest areas – was strongly associated with age-standardized face recognition scores in both groups, even after controlling for various other potential predictors of performance. Conclusions and Significance In support of the second hypothesis, results suggested that increased saccading between core-features was associated with more accurate face recognition ability, both in typical development and ASD. Causal directions of this relationship remain undetermined. PMID:22666378
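
    The 'Dynamic Scanning Index' described above lends itself to a simple computation once fixations have been mapped to areas of interest (AOIs). The Python sketch below is illustrative only: the function name, AOI labels, and the exact counting rule (increment on any saccade that enters or leaves a core-feature AOI) are assumptions, not the authors' implementation.

      # Hypothetical sketch of a Dynamic-Scanning-Index-style measure.
      CORE_AOIS = {"left_eye", "right_eye", "nose", "mouth"}  # assumed core features

      def dynamic_scanning_index(fixation_aois):
          """Count saccades into and out of core-feature AOIs.

          fixation_aois: ordered list of AOI labels, one per fixation,
          e.g. ["other", "left_eye", "mouth", "other"].
          """
          index = 0
          for prev, curr in zip(fixation_aois, fixation_aois[1:]):
              if prev == curr:
                  continue  # refixation within the same AOI, no between-AOI saccade
              if prev in CORE_AOIS or curr in CORE_AOIS:
                  index += 1  # saccade entering and/or leaving a core feature
          return index

      # Four between-AOI saccades, three of which touch a core feature -> prints 3
      print(dynamic_scanning_index(["other", "left_eye", "mouth", "other", "forehead"]))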

  19. Imaging inflammatory acne: lesion detection and tracking

    Science.gov (United States)

    Cula, Gabriela O.; Bargo, Paulo R.; Kollias, Nikiforos

    2010-02-01

    It is known that the effectiveness of acne treatment increases when lesions are detected earlier, before they can progress into mature wound-like lesions, which lead to scarring and discoloration. However, little is known about the evolution of acne from early signs until after the lesion heals. In this work we computationally characterize the evolution of inflammatory acne lesions, based on analyzing cross-polarized images that document acne-prone facial skin over time. Taking skin images over time, and being able to follow skin features in these images, present serious challenges due to changes in the appearance of the skin, difficulty in repositioning the subject, and involuntary movements such as breathing. A computational technique for automatic detection of lesions by separating the background normal skin from the acne lesions, based on fitting Gaussian distributions to the intensity histograms, is presented. In order to track and quantify the evolution of lesions, in terms of the degree of progress or regress, we designed a study to capture facial skin images from an acne-prone young individual, followed over the course of 3 different time points. Based on the behavior of the lesions between two consecutive time points, the automatically detected lesions are classified into four categories: new lesions, resolved lesions (i.e., lesions that disappear completely), lesions that are progressing, and lesions that are regressing (i.e., lesions in the process of healing). The classification our method achieves correlates well with visual inspection by a trained human grader.
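
    As a rough illustration of the histogram-based separation idea described above, the sketch below fits a two-component Gaussian mixture to pixel intensities and treats the darker component as lesion candidates. The use of scikit-learn's GaussianMixture, the two-component choice, and the darker-component rule are assumptions for illustration, not the authors' exact procedure.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def segment_lesions(gray_image):
          """Return a boolean mask of candidate inflammatory lesion pixels."""
          intensities = gray_image.reshape(-1, 1).astype(float)
          gmm = GaussianMixture(n_components=2, random_state=0)
          labels = gmm.fit_predict(intensities)
          # Assume the lower-mean component corresponds to darker lesion pixels in
          # cross-polarized images; the other component is background skin.
          lesion_component = int(np.argmin(gmm.means_.ravel()))
          return (labels == lesion_component).reshape(gray_image.shape)

      # Synthetic example: bright "skin" with one darker patch standing in for a lesion.
      skin = np.full((64, 64), 180.0) + np.random.default_rng(0).normal(0, 5, (64, 64))
      skin[20:30, 20:30] = 120.0
      print(segment_lesions(skin).sum(), "candidate lesion pixels")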

  20. Recurrent unilateral facial nerve palsy in a child with dehiscent facial nerve canal

    Directory of Open Access Journals (Sweden)

    Christopher Liu

    2016-12-01

    Full Text Available Objective: The dehiscent facial nerve canal has been well documented in histopathological studies of temporal bones as well as in the clinical setting. We describe the clinical and radiologic features of a child with recurrent facial nerve (FN) palsy and a dehiscent facial nerve canal. Methods: Retrospective chart review. Results: A 5-year-old male was referred to the otolaryngology clinic for evaluation of recurrent acute otitis media and hearing loss. He also developed recurrent left peripheral FN palsy associated with episodes of bilateral acute otitis media. High resolution computed tomography of the temporal bones revealed incomplete bony coverage of the tympanic segment of the left facial nerve. Conclusions: Recurrent peripheral FN palsy may occur in children with recurrent acute otitis media in the presence of a dehiscent facial nerve canal. Facial nerve canal dehiscence should be considered in the differential diagnosis of children with recurrent peripheral FN palsy.

  1. Pediatric facial injuries: It's management

    OpenAIRE

    Singh, Geeta; Mohammad, Shadab; Pal, U. S.; Hariram,; Malkunje, Laxman R.; Singh, Nimisha

    2011-01-01

    Background: Facial injuries in children always present a challenge in respect of their diagnosis and management. Since these children are of a growing age, every care should be taken so that the overall growth pattern of the facial skeleton in these children is not jeopardized later. Purpose: To assess the most feasible method for the management of facial injuries in children without hampering facial growth. Materials and Methods: Sixty child patients with facial trauma were selected randomly...

  2. Automated syndrome detection in a set of clinical facial photographs.

    Science.gov (United States)

    Boehringer, Stefan; Guenther, Manuel; Sinigerova, Stella; Wurtz, Rolf P; Horsthemke, Bernhard; Wieczorek, Dagmar

    2011-09-01

    Computer systems play an important role in clinical genetics and are a routine part of finding clinical diagnoses but make it difficult to fully exploit information derived from facial appearance. So far, automated syndrome diagnosis based on digital facial photographs has been demonstrated under study conditions but has not been applied in clinical practice. We have therefore investigated how well statistical classifiers trained on study data comprising 202 individuals affected by one of 14 syndromes could classify a set of 91 patients for whom pictures were taken under regular, less controlled conditions in clinical practice. We found a classification accuracy of 21% in the clinical sample, representing a ratio of 3.0 over a random choice. This contrasts with a 60% accuracy, or a ratio of 8.5, in the training data. Producing average images in both groups from sets of pictures for each syndrome demonstrates that the groups exhibit large phenotypic differences, explaining the discrepancies in accuracy. A broadening of the data set is suggested in order to improve accuracy in clinical practice. In order to further this goal, a software package is made available that allows application of the procedures and contributions toward an improved data set. Copyright © 2011 Wiley-Liss, Inc.
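
    The quoted "ratio over a random choice" can be checked against the chance level of guessing uniformly among the 14 syndromes; the uniform baseline is an assumption here, and the reported figures presumably use unrounded accuracies.

      % Assuming a uniform chance level over 14 syndromes:
      \[
        p_{\mathrm{chance}} = \tfrac{1}{14} \approx 7.1\%,\qquad
        \frac{0.21}{1/14} = 2.94 \approx 3.0,\qquad
        \frac{0.60}{1/14} = 8.4 \approx 8.5
      \]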

  3. A Review of Techniques for Detection of Movement Intention Using Movement-Related Cortical Potentials

    Directory of Open Access Journals (Sweden)

    Aqsa Shakeel

    2015-01-01

    Full Text Available The movement-related cortical potential (MRCP) is a low-frequency negative shift in the electroencephalography (EEG) recording that takes place about 2 seconds prior to voluntary movement production. The MRCP reflects the cortical processes employed in the planning and preparation of movement. In this study, we summarize, across studies that used MRCPs to predict upcoming real or imagined movements, the signal acquisition, processing, and enhancement methods, as well as the different electrode montages used for EEG recording. Reliable identification of human movement intention, together with knowledge of which limb is engaged and its direction of movement, has potential implications for the control of external devices. This information could be helpful in the development of a proficient patient-driven rehabilitation tool based on brain-computer interfaces (BCIs). Such a BCI paradigm with shorter response time appears more natural to amputees and can also induce plasticity in the brain. Along with appropriate training schedules, this can lead to restoration of motor control in stroke patients.
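
    As a minimal illustration of how an MRCP is typically visualized offline, the sketch below band-pass filters epoched EEG from a single channel and averages across trials, assuming the epochs are time-locked to movement onset. The filter band, baseline window, channel choice, and sampling rate are illustrative assumptions, not settings drawn from the reviewed studies.

      import numpy as np
      from scipy.signal import butter, filtfilt

      def average_mrcp(epochs, fs, t_start=-3.0, baseline_end=-2.5):
          """epochs: array (n_trials, n_samples) for one channel (e.g., Cz),
          time-locked so that movement onset falls at t = 0 seconds."""
          b, a = butter(2, [0.1, 5.0], btype="bandpass", fs=fs)
          filtered = filtfilt(b, a, epochs, axis=1)          # keep the slow negative shift
          times = t_start + np.arange(epochs.shape[1]) / fs
          baseline = filtered[:, times < baseline_end].mean(axis=1, keepdims=True)
          return times, (filtered - baseline).mean(axis=0)   # grand average across trials

      # Synthetic example: 20 trials of 4-second epochs sampled at 250 Hz.
      fs = 250
      epochs = np.random.default_rng(0).normal(0.0, 2.0, (20, 4 * fs))
      times, mrcp = average_mrcp(epochs, fs)
      print(times.shape, mrcp.shape)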

  4. Facial Sports Injuries

    Science.gov (United States)

    ... the patient has HIV or hepatitis. Facial Fractures Sports injuries can cause potentially serious broken bones or fractures of the face. Common symptoms of facial fractures include: swelling and bruising, ...

  5. Children’s Empathy and Their Perception and Evaluation of Facial Pain Expression: An Eye Tracking Study

    Directory of Open Access Journals (Sweden)

    Zhiqiang Yan

    2017-12-01

    Full Text Available The function of empathic concern in processing pain is a product of evolutionary adaptation. Focusing on 5- to 6-year-old children, the current study employed eye tracking in an odd-one-out task (searching for the emotional facial expression among neutral expressions, N = 47) and a pain evaluation task (evaluating the pain intensity of a facial expression, N = 42) to investigate the relationship between children's empathy and their behavioral and perceptual responses to facial expressions of pain. We found that children detected painful expressions faster than other expressions (angry, sad, and happy), that children high in empathy performed better at searching for facial expressions of pain and gave higher evaluations of pain intensity, and that ratings of pain in painful expressions were best predicted by a self-reported empathy score. As for eye tracking in pain detection, children fixated on pain more quickly, less frequently, and for shorter times. Among the facial cues, children fixated on the eyes and mouth more quickly, more frequently, and for longer times. These results imply that painful facial expressions differ from other expressions in a cognitive sense, and that children's empathy may facilitate their search and lead them to perceive the intensity of observed pain as higher.

  6. Outcome of a graduated minimally invasive facial reanimation in patients with facial paralysis.

    Science.gov (United States)

    Holtmann, Laura C; Eckstein, Anja; Stähr, Kerstin; Xing, Minzhi; Lang, Stephan; Mattheis, Stefan

    2017-08-01

    Peripheral paralysis of the facial nerve is the most frequent of all cranial nerve disorders. Despite advances in facial surgery, the functional and aesthetic reconstruction of a paralyzed face remains a challenge. Graduated minimally invasive facial reanimation is based on a modular principle. According to the patients' needs, precondition, and expectations, the following modules can be performed: temporalis muscle transposition and facelift, nasal valve suspension, endoscopic brow lift, and eyelid reconstruction. Applying a concept of graduated minimally invasive facial reanimation may help minimize surgical trauma and reduce morbidity. Twenty patients underwent a graduated minimally invasive facial reanimation. A retrospective chart review was performed with a follow-up examination between 1 and 8 months after surgery. The FACEgram software was used to calculate pre- and postoperative eyelid closure, the level of the brows, nasal and philtral symmetry, as well as oral commissure position at rest and oral commissure excursion with smile. As a patient-oriented outcome parameter, the Glasgow Benefit Inventory questionnaire was applied. There was a statistically significant improvement in the postoperative scores for eyelid closure, brow asymmetry, nasal asymmetry, philtral asymmetry, and oral commissure symmetry at rest. When facial nerve repair or microneurovascular tissue transfer cannot be applied, graduated minimally invasive facial reanimation is a promising option to restore facial function and symmetry at rest.

  7. Impact of individually controlled facially applied air movement on perceived air quality at high humidity

    DEFF Research Database (Denmark)

    Skwarczynski, Mariusz; Melikov, Arsen Krikor; Kaczmarczyk, J.

    2010-01-01

    Human subjects were exposed to three combinations of relative humidity and local air velocity under a constant air temperature of 26 degrees C, namely: 70% relative humidity without air movement, 30% relative humidity without air movement, and 70% relative humidity with air movement under isothermal conditions. Personalized ventilation was used to supply room air from the front...

  8. Does Facial Amimia Impact the Recognition of Facial Emotions? An EMG Study in Parkinson’s Disease

    Science.gov (United States)

    Argaud, Soizic; Delplanque, Sylvain; Houvenaghel, Jean-François; Auffret, Manon; Duprez, Joan; Vérin, Marc; Grandjean, Didier; Sauleau, Paul

    2016-01-01

    According to embodied simulation theory, understanding other people’s emotions is fostered by facial mimicry. However, studies assessing the effect of facial mimicry on the recognition of emotion are still controversial. In Parkinson’s disease (PD), one of the most distinctive clinical features is facial amimia, a reduction in facial expressiveness, but patients also show emotional disturbances. The present study used the pathological model of PD to examine the role of facial mimicry on emotion recognition by investigating EMG responses in PD patients during a facial emotion recognition task (anger, joy, neutral). Our results evidenced a significant decrease in facial mimicry for joy in PD, essentially linked to the absence of reaction of the zygomaticus major and the orbicularis oculi muscles in response to happy avatars, whereas facial mimicry for expressions of anger was relatively preserved. We also confirmed that PD patients were less accurate in recognizing positive and neutral facial expressions and highlighted a beneficial effect of facial mimicry on the recognition of emotion. We thus provide additional arguments for embodied simulation theory suggesting that facial mimicry is a potential lever for therapeutic actions in PD even if it seems not to be necessarily required in recognizing emotion as such. PMID:27467393

  9. Techniques for Clutter Suppression in the Presence of Body Movements during the Detection of Respiratory Activity through UWB Radars

    Directory of Open Access Journals (Sweden)

    Antonio Lazaro

    2014-02-01

    Full Text Available This paper focuses on the feasibility of tracking the chest wall movement of a human subject during respiration from the waveforms recorded using an impulse-radio (IR) ultra-wideband radar. The paper describes the signal processing used to detect sleep apnea and to estimate the breathing rate. Techniques for dealing with several problems in these types of measurements, such as clutter suppression, body movement, and body orientation detection, are described. Clutter suppression is achieved using a moving average filter to dynamically estimate the clutter. The artifacts caused by body movements are removed using a threshold method before analyzing the breathing signal. Motion is detected using the time delay that maximizes the received signal after a clutter removal algorithm is applied. The periods in which the standard deviation of the time delay exceeds a threshold are considered macro-movements and are discarded. Sleep apnea intervals are detected when the breathing signal falls below a threshold. The breathing rate is determined from a robust spectrum estimate based on the Lomb periodogram algorithm. The breathing signal amplitude also depends on the body orientation with respect to the antennas, which can be a problem. In this case, in order to maximize the signal-to-noise ratio, multiple sensors are proposed to ensure that the backscattered signal can be detected by at least one sensor, regardless of the direction the human subject is facing. The feasibility of the system is assessed by comparison with signals recorded by a microphone.
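
    A compressed sketch of this processing chain is given below for a single range bin, assuming the slow-time amplitude at the chest position has already been extracted. The moving-average window, the artifact threshold, and the candidate frequency grid are illustrative values rather than the paper's parameters.

      import numpy as np
      from scipy.signal import lombscargle

      def breathing_rate_bpm(slow_time, fs, clutter_win_s=4.0, artifact_k=3.0):
          # 1) Clutter suppression: subtract a running estimate of the static background.
          n = max(1, int(clutter_win_s * fs))
          clutter = np.convolve(slow_time, np.ones(n) / n, mode="same")
          breathing = (slow_time - clutter)[n:-n]            # trim filter edge effects
          t = (np.arange(slow_time.size) / fs)[n:-n]
          # 2) Body-movement artifact rejection: drop samples far from the typical spread.
          keep = np.abs(breathing - np.median(breathing)) < artifact_k * np.std(breathing)
          # 3) Robust spectral estimate (Lomb periodogram) on the remaining samples.
          freqs = np.linspace(0.1, 0.7, 200)                 # 6-42 breaths per minute
          power = lombscargle(t[keep], breathing[keep], 2 * np.pi * freqs)
          return 60.0 * freqs[np.argmax(power)]

      # Synthetic example: 0.25 Hz breathing (15 bpm) plus static clutter and noise.
      fs = 20
      t = np.arange(0, 60, 1.0 / fs)
      sig = 5.0 + 0.5 * np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
      print(round(breathing_rate_bpm(sig, fs), 1), "breaths per minute")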

  10. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    Directory of Open Access Journals (Sweden)

    Tanja S. H. Wingenbach

    2018-06-01

    Full Text Available According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.

  11. Incongruence Between Observers' and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli.

    Science.gov (United States)

    Wingenbach, Tanja S H; Brosnan, Mark; Pfaltz, Monique C; Plichta, Michael M; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others' facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others' facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others' faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions' order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.

  12. Incongruence Between Observers’ and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli

    Science.gov (United States)

    Wingenbach, Tanja S. H.; Brosnan, Mark; Pfaltz, Monique C.; Plichta, Michael M.; Ashwin, Chris

    2018-01-01

    According to embodied cognition accounts, viewing others’ facial emotion can elicit the respective emotion representation in observers which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others’ facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental condition (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others’ faces compared to (c), (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper face area, compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The experimental conditions’ order was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed. PMID:29928240

  13. A novel human-machine interface based on recognition of multi-channel facial bioelectric signals

    International Nuclear Information System (INIS)

    Razazadeh, Iman Mohammad; Firoozabadi, S. Mohammad; Golpayegani, S.M.R.H.; Hu, H.

    2011-01-01

    Full text: This paper presents a novel human-machine interface for disabled people to interact with assistive systems for a better quality of life. It is based on multichannel forehead bioelectric signals acquired by placing three pairs of electrodes (physical channels) on the Frontalis and Temporalis facial muscles. The acquired signals are passed through a parallel filter bank to explore three different sub-bands related to the facial electromyogram, electrooculogram, and electroencephalogram. Root mean square features of the bioelectric signals were extracted within non-overlapping 256 ms windows. The subtractive fuzzy c-means clustering method (SFCM) was applied to segment the feature space and generate initial fuzzy-based Takagi-Sugeno rules. Then, an adaptive neuro-fuzzy inference system was exploited to tune the premise and consequent parameters of the extracted SFCM rules. The average classifier discrimination ratio for eight different facial gestures (smiling, frowning, pulling up the left/right lip corner, and eye movement to the left/right/up/down) is between 93.04% and 96.99%, according to different combinations and fusions of logical features. Experimental results show that the proposed interface has a high degree of accuracy and robustness for discrimination of the 8 fundamental facial gestures. Some potential and further capabilities of our approach in human-machine interfaces are also discussed. (author)
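
    The root mean square (RMS) feature extraction over non-overlapping 256 ms windows can be illustrated with a few lines of NumPy. The array layout (one row per channel/sub-band signal) and the sampling rate are assumptions for illustration only.

      import numpy as np

      def rms_features(signals, fs, window_ms=256):
          """signals: array (n_signals, n_samples), one row per channel/sub-band
          combination (e.g., 3 electrode pairs x 3 sub-bands = 9 rows)."""
          win = int(fs * window_ms / 1000)
          n_windows = signals.shape[1] // win
          trimmed = signals[:, : n_windows * win]
          windows = trimmed.reshape(signals.shape[0], n_windows, win)
          return np.sqrt((windows ** 2).mean(axis=2))        # shape: (n_signals, n_windows)

      # Synthetic example: 9 forehead signals, 10 seconds sampled at 1000 Hz.
      feats = rms_features(np.random.default_rng(0).normal(size=(9, 10000)), fs=1000)
      print(feats.shape)   # (9, 39): one 9-dimensional feature vector per 256 ms window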

  14. Facial orientation and facial shape in extant great apes: a geometric morphometric analysis of covariation.

    Science.gov (United States)

    Neaux, Dimitri; Guy, Franck; Gilissen, Emmanuel; Coudyzer, Walter; Vignaud, Patrick; Ducrocq, Stéphane

    2013-01-01

    The organization of the bony face is complex, its morphology being influenced in part by the rest of the cranium. Characterizing the facial morphological variation and craniofacial covariation patterns in extant hominids is fundamental to the understanding of their evolutionary history. Numerous studies on hominid facial shape have proposed hypotheses concerning the relationship between the anterior facial shape, facial block orientation and basicranial flexion. In this study we test these hypotheses in a sample of adult specimens belonging to three extant hominid genera (Homo, Pan and Gorilla). Intraspecific variation and covariation patterns are analyzed using geometric morphometric methods and multivariate statistics, such as partial least squares on three-dimensional landmark coordinates. Our results indicate significant intraspecific covariation between facial shape, facial block orientation and basicranial flexion. Hominids share similar characteristics in the relationship between anterior facial shape and facial block orientation. Modern humans exhibit a specific pattern in the covariation between anterior facial shape and basicranial flexion. This peculiar feature underscores the role of modern humans' highly-flexed basicranium in the overall integration of the cranium. Furthermore, our results are consistent with the hypothesis of a relationship between the reduction of the value of the cranial base angle and a downward rotation of the facial block in modern humans, and to a lesser extent in chimpanzees.

  15. Role of electrical stimulation added to conventional therapy in patients with idiopathic facial (Bell) palsy.

    Science.gov (United States)

    Tuncay, Figen; Borman, Pinar; Taşer, Burcu; Ünlü, İlhan; Samim, Erdal

    2015-03-01

    The aim of this study was to determine the efficacy of electrical stimulation when added to conventional physical therapy with regard to clinical and neurophysiologic changes in patients with Bell palsy. This was a randomized controlled trial. Sixty patients diagnosed with Bell palsy (39 right sided, 21 left sided) were included in the study. Patients were randomly divided into two therapy groups. Group 1 received physical therapy applying hot pack, facial expression exercises, and massage to the facial muscles, whereas group 2 received electrical stimulation treatment in addition to the physical therapy, 5 days per week for a period of 3 wks. Patients were evaluated clinically and electrophysiologically before treatment (at the fourth week of the palsy) and again 3 mos later. Outcome measures included the House-Brackmann scale and Facial Disability Index scores, as well as facial nerve latencies and amplitudes of compound muscle action potentials derived from the frontalis and orbicularis oris muscles. Twenty-nine men (48.3%) and 31 women (51.7%) with Bell palsy were included in the study. In group 1, 16 (57.1%) patients had no axonal degeneration and 12 (42.9%) had axonal degeneration, compared with 17 (53.1%) and 15 (46.9%) patients in group 2, respectively. The baseline House-Brackmann and Facial Disability Index scores were similar between the groups. At 3 mos after onset, the Facial Disability Index scores were improved similarly in both groups. The classification of patients according to the House-Brackmann scale revealed greater improvement in group 2 than in group 1. The mean motor nerve latencies and compound muscle action potential amplitudes of both facial muscles were statistically shorter in group 2, whereas only the mean motor latency of the frontalis muscle decreased in group 1. The addition of 3 wks of daily electrical stimulation shortly after facial palsy onset (4 wks) improved functional facial movements and electrophysiologic outcome measures at the 3-mo follow-up.

  16. [Descending hypoglossal branch-facial nerve anastomosis in treating unilateral facial palsy after acoustic neuroma resection].

    Science.gov (United States)

    Liang, Jiantao; Li, Mingchu; Chen, Ge; Guo, Hongchuan; Zhang, Qiuhang; Bao, Yuhai

    2015-12-15

    To evaluate the efficiency of descending hypoglossal branch-facial nerve anastomosis for severe facial palsy after acoustic neuroma resection. The clinical data of 14 patients (6 males, 8 females, average age 45.6 years) who underwent descending hypoglossal branch-facial nerve anastomosis for treatment of unilateral facial palsy were analyzed retrospectively. All patients had previously undergone resection of a large acoustic neuroma. The House-Brackmann (H-B) grading system was used to evaluate pre-operative, post-operative, and follow-up facial nerve function. Twelve cases (85.7%) had long-term follow-up, with an average follow-up period of 24.6 months. Six patients had a good outcome (H-B grade 2-3), 5 patients had a fair outcome (H-B grade 3-4), and 1 patient had a poor outcome (H-B grade 5). Only 1 patient suffered hemitongue myoparalysis owing to the operation. Descending hypoglossal branch-facial nerve anastomosis is effective for facial reanimation, and it has little impact on the functions of chewing, swallowing, and pronunciation compared with traditional hypoglossal-facial nerve anastomosis.

  17. Accurate landmarking of three-dimensional facial data in the presence of facial expressions and occlusions using a three-dimensional statistical facial feature model.

    Science.gov (United States)

    Zhao, Xi; Dellandréa, Emmanuel; Chen, Liming; Kakadiaris, Ioannis A

    2011-10-01

    Three-dimensional face landmarking aims at automatically localizing facial landmarks and has a wide range of applications (e.g., face recognition, face tracking, and facial expression analysis). Existing methods assume neutral facial expressions and unoccluded faces. In this paper, we propose a general learning-based framework for reliable landmark localization on 3-D facial data under challenging conditions (i.e., facial expressions and occlusions). Our approach relies on a statistical model, called 3-D statistical facial feature model, which learns both the global variations in configurational relationships between landmarks and the local variations of texture and geometry around each landmark. Based on this model, we further propose an occlusion classifier and a fitting algorithm. Results from experiments on three publicly available 3-D face databases (FRGC, BU-3-DFE, and Bosphorus) demonstrate the effectiveness of our approach, in terms of landmarking accuracy and robustness, in the presence of expressions and occlusions.

  18. Behavioral dissociation between emotional and non-emotional facial expressions in congenital prosopagnosia.

    Science.gov (United States)

    Daini, Roberta; Comparetti, Chiara M; Ricciardelli, Paola

    2014-01-01

    Neuropsychological and neuroimaging studies have shown that facial recognition and emotional expressions are dissociable. However, it is unknown if a single system supports the processing of emotional and non-emotional facial expressions. We aimed to understand if individuals with impairment in face recognition from birth (congenital prosopagnosia, CP) can use non-emotional facial expressions to recognize a face as an already seen one, and thus process this facial dimension independently from features (which are impaired in CP) and basic emotional expressions. To this end, we carried out a behavioral study in which we compared the performance of 6 CP individuals to that of typically developing individuals, using upright and inverted faces. Four avatar faces with a neutral expression were presented in the initial phase. The target faces presented in the recognition phase, in which a recognition task was requested (2AFC paradigm), could be identical (neutral) to those of the initial phase or present biologically plausible changes to features, non-emotional expressions, or emotional expressions. After this task, a second task was performed in which the participants had to detect whether the recognized face exactly matched the study face or showed any difference. The results confirmed the CPs' impairment in the configural processing of the invariant aspects of the face, but also showed spared configural processing of non-emotional facial expressions (task 1). Interestingly, and unlike the non-emotional expressions, the configural processing of emotional expressions was compromised in CPs and did not improve their change detection ability (task 2). These new results have theoretical implications for face perception models since they suggest that, at least in CPs, non-emotional expressions are processed configurally, can be dissociated from other facial dimensions, and may serve as a compensatory strategy to achieve face recognition.

  19. Correction of Facial Deformity in Sturge–Weber Syndrome

    Science.gov (United States)

    Yamaguchi, Kazuaki; Lonic, Daniel; Chen, Chit

    2016-01-01

    Background: Although previous studies have reported soft-tissue management in surgical treatment of Sturge–Weber syndrome (SWS), there are few reports describing facial bone surgery in this patient group. The purpose of this study is to examine the validity of our multidisciplinary algorithm for correcting facial deformities associated with SWS. To the best of our knowledge, this is the first study on orthognathic surgery for SWS patients. Methods: A retrospective chart review included 2 SWS patients who completed the surgical treatment algorithm. Radiographic and clinical data were recorded, and a treatment algorithm was derived. Results: According to the Roach classification, the first patient was classified as type I presenting with both facial and leptomeningeal vascular anomalies without glaucoma and the second patient as type II presenting only with a hemifacial capillary malformation. Considering positive findings in seizure history and intracranial vascular anomalies in the first case, the anesthetic management was modified to omit hypotensive anesthesia because of the potential risk of intracranial pressure elevation. Primarily, both patients underwent 2-jaw orthognathic surgery and facial bone contouring including genioplasty, zygomatic reduction, buccal fat pad removal, and masseter reduction without major complications. In the second step, the volume and distribution of facial soft tissues were altered by surgical resection and reposition. Both patients were satisfied with the surgical result. Conclusions: Our multidisciplinary algorithm can systematically detect potential risk factors. Correction of the asymmetric face by successive bone and soft-tissue surgery enables the patients to reduce their psychosocial burden and increase their quality of life. PMID:27622111

  20. Outcomes of Buccinator Treatment With Botulinum Toxin in Facial Synkinesis.

    Science.gov (United States)

    Patel, Priyesh N; Owen, Scott R; Norton, Cathey P; Emerson, Brandon T; Bronaugh, Andrea B; Ries, William R; Stephan, Scott J

    2018-05-01

    The buccinator, despite being a prominent midface muscle, has been previously overlooked as a target in the treatment of facial synkinesis with botulinum toxin. To evaluate outcomes of patients treated with botulinum toxin to the buccinator muscle in the setting of facial synkinesis. Prospective cohort study of patients who underwent treatment for facial synkinesis with botulinum toxin over multiple treatment cycles during a 1-year period was carried out in a tertiary referral center. Botulinum toxin treatment of facial musculature, including treatment cycles with and without buccinator injections. Subjective outcomes were evaluated using the Synkinesis Assessment Questionnaire (SAQ) prior to injection of botulinum toxin and 2 weeks after treatment. Outcomes of SAQ preinjection and postinjection scores were compared in patients who had at least 1 treatment cycle with and without buccinator injections. Subanalysis was performed on SAQ questions specific to buccinator function (facial tightness and lip movement). Of 84 patients who received botulinum toxin injections for facial synkinesis, 33 received injections into the buccinator muscle. Of the 33, 23 met inclusion criteria (19 [82.6%] women; mean [SD] age, 46 [10] years). These patients presented for 82 treatment visits, of which 44 (53.6%) involved buccinator injections and 38 (46.4%) were without buccinator injections. The most common etiology of facial paralysis included vestibular schwannoma (10 [43.5%] participants) and Bell Palsy (9 [39.1%] participants). All patients had improved posttreatment SAQ scores compared with prebotulinum scores regardless of buccinator treatment. Compared with treatment cycles in which the buccinator was not addressed, buccinator injections resulted in lower total postinjection SAQ scores (45.9; 95% CI, 38.8-46.8; vs 42.8; 95% CI, 41.3-50.4; P = .43) and greater differences in prebotox and postbotox injection outcomes (18; 95% CI, 16.2-21.8; vs 19; 95% CI, 14.2-21.8; P

  1. Impaired Overt Facial Mimicry in Response to Dynamic Facial Expressions in High-Functioning Autism Spectrum Disorders

    Science.gov (United States)

    Yoshimura, Sayaka; Sato, Wataru; Uono, Shota; Toichi, Motomi

    2015-01-01

    Previous electromyographic studies have reported that individuals with autism spectrum disorders (ASD) exhibited atypical patterns of facial muscle activity in response to facial expression stimuli. However, whether such activity is expressed in visible facial mimicry remains unknown. To investigate this issue, we videotaped facial responses in…

  2. The MPI facial expression database--a validated database of emotional and conversational facial expressions.

    Directory of Open Access Journals (Sweden)

    Kathrin Kaulard

    Full Text Available The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision to investigate the processing of a wider range of natural

  3. Human Age Estimation Method Robust to Camera Sensor and/or Face Movement

    Directory of Open Access Journals (Sweden)

    Dat Tien Nguyen

    2015-08-01

    Full Text Available Human age can be employed in many useful real-life applications, such as customer service systems, automatic vending machines, entertainment, etc. In order to obtain age information, image-based age estimation systems have been developed using information from the human face. However, current age estimation systems are limited by various factors such as camera motion and optical blurring, facial expressions, gender, etc. Motion blurring is usually present in face images because of movement of the camera sensor and/or movement of the face during image acquisition. Facial features in captured images can therefore be distorted according to the amount of motion, which degrades the performance of age estimation systems. In this paper, the problem caused by motion blurring is addressed and a solution is proposed in order to make age estimation systems robust to the effects of motion blurring. Experimental results show that our method is more efficient at enhancing age estimation performance compared with systems that do not employ it.

  4. Human Movement Detection and Identification Using Pyroelectric Infrared Sensors

    Directory of Open Access Journals (Sweden)

    Jaeseok Yun

    2014-05-01

    Full Text Available Pyroelectric infrared (PIR) sensors are widely used as a presence trigger, but the analog output of PIR sensors depends on several other aspects, including the distance of the body from the PIR sensor, the direction and speed of movement, the body shape, and gait. In this paper, we present an empirical study of human movement detection and identification using a set of PIR sensors. We have developed a data collection module having two pairs of PIR sensors orthogonally aligned and modified Fresnel lenses. We have placed three PIR-based modules in a hallway for monitoring people: one module on the ceiling and two modules on opposite walls facing each other. We have collected a data set from eight subjects walking in three different conditions: two directions (back and forth), three distance intervals (close to one wall sensor, in the middle, close to the other wall sensor), and three speed levels (slow, moderate, fast). We have used two types of feature sets: a raw data set, and a reduced feature set composed of the amplitude of and time to peaks, and the passage duration, extracted from each PIR sensor. We have performed classification analysis with well-known machine learning algorithms, including instance-based learning and support vector machines. Our findings show that with the raw data set captured from a single PIR sensor of each of the three modules, we could achieve more than 92% accuracy in classifying the direction and speed of movement and the distance interval, and in identifying subjects. We could also achieve more than 94% accuracy in classifying the direction, speed, and distance and identifying subjects using the reduced feature set extracted from two pairs of PIR sensors of each of the three modules.
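
    A minimal, self-contained sketch of the classification step is shown below, using the reduced feature set (peak amplitude, time to peak, and passage duration per sensor) and a support vector machine. The synthetic feature values and the two-class direction task are assumptions for illustration; the study itself classified direction, speed, distance, and identity.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)
      # Columns: [amp_1, t_peak_1, duration_1, amp_2, t_peak_2, duration_2]
      walk_one_way = rng.normal([1.0, 0.4, 1.2, 0.8, 0.9, 1.2], 0.1, (100, 6))
      walk_back = rng.normal([0.8, 0.9, 1.2, 1.0, 0.4, 1.2], 0.1, (100, 6))
      X = np.vstack([walk_one_way, walk_back])
      y = np.array([0] * 100 + [1] * 100)                    # 0/1 = walking direction

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      clf.fit(X_train, y_train)
      print("direction classification accuracy:", clf.score(X_test, y_test))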

  5. Facial EMG responses to dynamic emotional facial expressions in boys with disruptive behavior disorders

    NARCIS (Netherlands)

    Wied, de M.; Boxtel, van Anton; Zaalberg, R.; Goudena, P.P.; Matthys, W.

    2006-01-01

    Based on the assumption that facial mimicry is a key factor in emotional empathy, and clinical observations that children with disruptive behavior disorders (DBD) are weak empathizers, the present study explored whether DBD boys are less facially responsive to facial expressions of emotions than

  6. Case Report: A true median facial cleft (cranio-facial dysraphia ...

    African Journals Online (AJOL)

    Case Report: A true median facial cleft (cranio-facial dysraphia, a Tessier type 0) in Bingham University Teaching Hospital, Jos. ... The patient had multidisciplinary care from the obstetrician, neonatologist, anesthesiologist, and the plastic surgery team, who scheduled a soft tissue repair of the upper lip defect, columella and ...

  7. Intuitive Face Judgments Rely on Holistic Eye Movement Pattern

    Directory of Open Access Journals (Sweden)

    Laura F. Mega

    2017-06-01

    Full Text Available Non-verbal signals such as facial expressions are of paramount importance for social encounters. Their perception predominantly occurs without conscious awareness and is effortlessly integrated into social interactions. In other words, face perception is intuitive. Contrary to classical intuition tasks, this work investigates intuitive processes in the realm of every-day type social judgments. Two differently instructed groups of participants judged the authenticity of emotional facial expressions, while their eye movements were recorded: an ‘intuitive group,’ instructed to rely on their “gut feeling” for the authenticity judgments, and a ‘deliberative group,’ instructed to make their judgments after careful analysis of the face. Pixel-wise statistical maps of the resulting eye movements revealed a differential viewing pattern, wherein the intuitive judgments relied on fewer, longer and more centrally located fixations. These markers have been associated with a global/holistic viewing strategy. The holistic pattern of intuitive face judgments is in line with evidence showing that intuition is related to processing the “gestalt” of an object, rather than focusing on details. Our work thereby provides further evidence that intuitive processes are characterized by holistic perception, in an understudied and real world domain of intuition research.

  8. Outcome of different facial nerve reconstruction techniques

    OpenAIRE

    Mohamed, Aboshanif; Omi, Eigo; Honda, Kohei; Suzuki, Shinsuke; Ishikawa, Kazuo

    2016-01-01

    Abstract Introduction: There is no technique of facial nerve reconstruction that guarantees facial function recovery up to grade III. Objective: To evaluate the efficacy and safety of different facial nerve reconstruction techniques. Methods: Facial nerve reconstruction was performed in 22 patients (facial nerve interpositional graft in 11 patients and hypoglossal-facial nerve transfer in another 11 patients). All patients had facial function House-Brackmann (HB) grade VI, either caused by...

  9. Mercury contamination in facial skin lightening creams and its health risks to user.

    Science.gov (United States)

    Ho, Yu Bin; Abdullah, Nor Hidayu; Hamsan, Hazwanee; Tan, Eugenie Sin Sing

    2017-08-01

    This study aims to determine the concentrations of mercury in facial skin lightening creams across different price categories and the associated health risk to users. Mercury concentrations in the samples were below the United States Food and Drug Administration (USFDA) permitted trace level, ranging from not detected to 1.13 mg kg -1 . There was no significant association between concentrations of mercury and price category (p = 0.12). There was no significant non-carcinogenic health risk from daily application of the facial skin lightening creams based on an assumed 30-year exposure period (HQ < 1). Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Efficiently detecting outlying behavior in video-game players.

    Science.gov (United States)

    Kim, Young Bin; Kang, Shin Jin; Lee, Sang Hyeok; Jung, Jang Young; Kam, Hyeong Ryeol; Lee, Jung; Kim, Young Sun; Lee, Joonsoo; Kim, Chang Hun

    2015-01-01

    In this paper, we propose a method for automatically detecting the times during which game players exhibit specific behavior, such as when players commonly show excitement, concentration, immersion, and surprise. The proposed method detects such outlying behavior based on the game players' characteristics. These characteristics are captured non-invasively in a general game environment. In this paper, cameras were used to analyze observed data such as facial expressions and player movements. Moreover, multimodal data from the game players (i.e., data regarding adjustments to the volume and the use of the keyboard and mouse) was used to analyze high-dimensional game-player data. A support vector machine was used to efficiently detect outlying behaviors. We verified the effectiveness of the proposed method using games from several genres. The recall rate of the outlying behavior pre-identified by industry experts was approximately 70%. The proposed method can also be used for feedback analysis of various interactive content provided in PC environments.
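
    As a rough sketch of the detection step, the example below trains a support vector machine on windowed multimodal features and reports recall on held-out windows, mirroring the recall figure quoted above. The feature names, synthetic data, and expert labels are all illustrative assumptions rather than the authors' pipeline.

      import numpy as np
      from sklearn.metrics import recall_score
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)
      # One row per time window: [facial_motion, body_motion, volume_change, key_rate, mouse_rate]
      normal = rng.normal([0.2, 0.1, 0.0, 3.0, 2.0], 0.3, (400, 5))
      outlying = rng.normal([0.8, 0.6, 0.4, 6.0, 4.0], 0.5, (80, 5))   # e.g. excitement or surprise
      X = np.vstack([normal, outlying])
      y = np.array([0] * 400 + [1] * 80)                    # 1 = expert-labelled outlying window

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)
      clf = make_pipeline(StandardScaler(), SVC(class_weight="balanced")).fit(X_tr, y_tr)
      print("recall on outlying windows:", recall_score(y_te, clf.predict(X_te)))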

  11. Efficiently detecting outlying behavior in video-game players

    Directory of Open Access Journals (Sweden)

    Young Bin Kim

    2015-12-01

    Full Text Available In this paper, we propose a method for automatically detecting the times during which game players exhibit specific behavior, such as when players commonly show excitement, concentration, immersion, and surprise. The proposed method detects such outlying behavior based on the game players’ characteristics. These characteristics are captured non-invasively in a general game environment. In this paper, cameras were used to analyze observed data such as facial expressions and player movements. Moreover, multimodal data from the game players (i.e., data regarding adjustments to the volume and the use of the keyboard and mouse) was used to analyze high-dimensional game-player data. A support vector machine was used to efficiently detect outlying behaviors. We verified the effectiveness of the proposed method using games from several genres. The recall rate of the outlying behavior pre-identified by industry experts was approximately 70%. The proposed method can also be used for feedback analysis of various interactive content provided in PC environments.

  12. Microbial biofilms on silicone facial prostheses

    NARCIS (Netherlands)

    Ariani, Nina

    2015-01-01

    Facial disfigurements can result from oncologic surgery, trauma and congenital deformities. These disfigurements can be rehabilitated with facial prostheses. Facial prostheses are usually made of silicones. A problem of facial prostheses is that microorganisms can colonize their surface. It is hard

  13. Recognition of Face and Emotional Facial Expressions in Autism

    Directory of Open Access Journals (Sweden)

    Muhammed Tayyib Kadak

    2013-03-01

    Full Text Available Autism is a genetically transmitted neurodevelopmental disorder characterized by severe and permanent deficits in many areas of interpersonal relations, such as communication, social interaction and emotional responsiveness. Patients with autism have deficits in face recognition, eye contact and recognition of emotional expressions. Both face recognition and the recognition of facial emotion depend on face processing. Structural and functional impairments in the fusiform gyrus, amygdala, superior temporal sulcus and other brain regions lead to deficits in the recognition of faces and facial emotion. Studies therefore suggest that face processing deficits underlie problems in the areas of social interaction and emotion in autism. Studies have revealed that children with autism have problems recognizing facial expressions and rely on the mouth region more than the eye region. It has also been shown that autistic patients interpret ambiguous expressions as negative emotions. Deficits at various stages of face processing, such as gaze detection, face identification and recognition of emotional expressions, have so far been demonstrated in autism. Social interaction impairments in autism spectrum disorders originate from face processing deficits during infancy, childhood and adolescence. Face recognition and the recognition of facial emotion may be shaped both by automatic orienting towards faces after birth and by "learning" processes, such as identity and emotion processing, during development. This article reviews the neurobiological basis of face processing and the recognition of emotional facial expressions during normal development and in autism.

  14. [Facial tics and spasms].

    Science.gov (United States)

    Potgieser, Adriaan R E; van Dijk, J Marc C; Elting, Jan Willem J; de Koning-Tijssen, Marina A J

    2014-01-01

    Facial tics and spasms are socially incapacitating, but effective treatment is often available. The clinical picture is sufficient for distinguishing between the different diseases that cause this affliction. We describe three cases of patients with facial tics or spasms: one case of tics, which are familiar to many physicians; one case of blepharospasm; and one case of hemifacial spasm. We discuss the differential diagnosis and the treatment possibilities for facial tics and spasms. Early diagnosis and treatment are important because of the associated social incapacitation. Botulinum toxin should be considered as a treatment option for facial tics, and a curative neurosurgical intervention should be considered for hemifacial spasm.

  15. Efficacy of Botulinum Toxin Injections in the Treatment of Various Types of Facial Region Disorders

    Directory of Open Access Journals (Sweden)

    Arzu Çoban

    2012-12-01

    Full Text Available OBJECTIVE: Local injection of botulinum toxin is a highly effective treatment option for a wide range of movement disorders, and there are reliable sources of information on its indications, effects and safety in clinical practice. In this study, we report our experience with botulinum toxin in the treatment of facial region disorders. METHODS: Patients who had been followed in the Botulinum Toxin Outpatient Clinic of the Neurology Department were retrospectively evaluated. Two preparations of botulinum toxin type A (BT-A) were used. The efficacy of BT-A injections was rated according to the improvement in symptoms as follows: marked - 75-100% improvement, good - 50-74%, moderate - 25-49%, and insufficient - less than 25% symptom relief. RESULTS: One hundred eighty-two patients (73 male, 109 female) with various facial region disorders were included. The proportion of patients with marked or good improvement was high in the treatment of blepharospasm, hemifacial spasm, facial synkinesis, and Meige syndrome, and moderate for oromandibular dystonia and hypersalivation. Ptosis was the most common side effect. CONCLUSION: According to our results, botulinum toxin was a very effective treatment for blepharospasm, Meige syndrome, hemifacial spasm and facial synkinesis, whereas it demonstrated good efficacy in oromandibular dystonia and hypersalivation.

  16. Síndrome de dolor facial (Facial pain syndrome)

    Directory of Open Access Journals (Sweden)

    DR. F. Eugenio Tenhamm

    2014-07-01

    Full Text Available Facial pain, or facial algia, is a pain syndrome of the craniofacial structures under which a large number of diseases are grouped. The best way to approach the differential diagnosis of the entities that cause facial pain is to use an algorithm that identifies four main pain syndromes: facial neuralgias, facial pain with neurological symptoms and signs, trigeminal autonomic cephalalgias, and facial pain without neurological symptoms or signs. A detailed clinical evaluation of the patient allows an etiological approximation, which guides the diagnostic workup and makes it possible to offer specific therapy in most cases.

  17. Facial Transplantation Surgery Introduction

    OpenAIRE

    Eun, Seok-Chan

    2015-01-01

    Severely disfiguring facial injuries can have a devastating impact on the patient's quality of life. During the past decade, vascularized facial allotransplantation has progressed from an experimental possibility to a clinical reality in the fields of disease, trauma, and congenital malformations. This technique may now be considered a viable option for repairing complex craniofacial defects for which the results of autologous reconstruction remain suboptimal. Vascularized facial allotranspla...

  18. Comparative histological study of the mammalian facial nucleus.

    Science.gov (United States)

    Furutani, Rui; Sugita, Shoei

    2008-04-01

    We performed comparative Nissl, Klüver-Barrera and Golgi staining studies of the mammalian facial nucleus to classify the morphologically distinct subdivisions and the neuronal types in the rat, rabbit, ferret, Japanese monkey (Macaca fuscata), pig, horse, Risso's dolphin (Grampus griseus), and bottlenose dolphin (Tursiops truncatus). The medial subnucleus was observed in all examined species; however, that of the Risso's and bottlenose dolphins was a poorly-developed structure comprised of scattered neurons. The medial subnuclei of terrestrial mammals were well-developed cytoarchitectonic structures, usually a rounded column comprised of densely clustered neurons. Intermediate and lateral subnuclei were found in all studied mammals, with differences in columnar shape and neuronal types from species to species. The dorsolateral subnucleus was detected in all mammals but the Japanese monkey, whose facial neurons converged into the intermediate subnucleus. The dorsolateral subnuclei of the two dolphin species studied were expanded subdivisions comprised of densely clustered cells. The ventromedial subnuclei of the ferret, pig, and horse were richly-developed columns comprised of large multipolar neurons. Pig and horse facial nuclei contained another ventral cluster, the ventrolateral subnucleus. The facial nuclei of the Japanese monkey and the bottlenose dolphin were similar in their ventral subnuclear organization. Our findings show species-specific subnuclear organization and distribution patterns of distinct types of neurons within morphological discrete subdivisions, reflecting functional differences.

  19. Detecting Deception in the Military Infosphere: Improving and Integrating Human Detection Capabilities with Automated Tools

    Science.gov (United States)

    2007-04-25

    the coders. Figure 11a shows the basic Analyzer screen before any specific template is selected. ... eyes and corners of the mouth, and reductions in gesturing or other gross body movements like foot tapping. D-DIMS captures facial and gross body

  20. Rejuvenecimiento facial en "doble sigma" "Double ogee" facial rejuvenation

    Directory of Open Access Journals (Sweden)

    O. M. Ramírez

    2007-03-01

    Full Text Available The subperiosteal techniques described by Tessier revolutionized the treatment of facial aging, and this approach was recommended for treating the early signs of aging in young and middle-aged patients. Psillakis refined the technique, and Ramírez described a safer and more effective method of subperiosteal lifting, demonstrating that the subperiosteal facial rejuvenation technique can be applied across the broad spectrum of facial aging. The introduction of the endoscope into the treatment of facial aging has opened a new era in aesthetic surgery. Today, endoscopically assisted subperiosteal dissection of the upper, middle and lower thirds of the face provides an effective means of repositioning the soft tissues, with the possibility of augmenting the craniofacial skeleton, less postoperative facial edema, minimal injury to the branches of the facial nerve, and better treatment of the cheeks. This approach, developed and refined over the last decade, is known as the "double ogee rhytidectomy". The double-ogee Venetian arch, well known in architecture since antiquity, is characterized by a harmonious line of a convex curve followed by a concave curve. When a young face is observed from an oblique angle, it presents a characteristic distribution of the tissues, previously described for the midface as an architectural ogee arch or an "S"-shaped curve. However, on closer examination of the young face in the three-quarter view, the complete profile reveals a "double ogee arch" or a double "S". To see this reciprocal, multicurvilinear line of beauty, the face must be viewed in an oblique position so that both medial canthi are visible. In this position, the young face presents a characteristic convexity of the tail of the eyebrow that merges into the concavity of the lateral orbital wall, thus forming the first (upper

  1. Facial transplantation for massive traumatic injuries.

    Science.gov (United States)

    Alam, Daniel S; Chi, John J

    2013-10-01

    This article describes the challenges of facial reconstruction and the role of facial transplantation in certain facial defects and injuries. This information is of value to surgeons assessing facial injuries with massive soft tissue loss or injury. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Quality of life assessment in facial palsy: validation of the Dutch Facial Clinimetric Evaluation Scale.

    Science.gov (United States)

    Kleiss, Ingrid J; Beurskens, Carien H G; Stalmeier, Peep F M; Ingels, Koen J A O; Marres, Henri A M

    2015-08-01

    This study aimed at validating an existing health-related quality of life questionnaire for patients with facial palsy for implementation in the Dutch language and culture. The Facial Clinimetric Evaluation Scale was translated into the Dutch language using a forward-backward translation method. A pilot test with the translated questionnaire was performed in 10 patients with facial palsy and 10 normal subjects. Finally, cross-cultural adaption was accomplished at our outpatient clinic for facial palsy. Analyses for internal consistency, test-retest reliability, construct validity and responsiveness were performed. Ninety-three patients completed the Dutch Facial Clinimetric Evaluation Scale, the Dutch Facial Disability Index, and the Dutch Short Form (36) Health Survey. Cronbach's α, representing internal consistency, was 0.800. Test-retest reliability was shown by an intraclass correlation coefficient of 0.737. Correlations with the House-Brackmann score, Sunnybrook score, Facial Disability Index physical function, and social/well-being function were -0.292, 0.570, 0.713, and 0.575, respectively. The SF-36 domains correlate best with the FaCE social function domain, with the strongest correlation between the both social function domains (r = 0.576). The FaCE score did statistically significantly increase in 35 patients receiving botulinum toxin type A (P = 0.042, Student t test). The domains 'facial comfort' and 'social function' improved statistically significantly as well (P = 0.022 and P = 0.046, respectively, Student t-test). The Dutch Facial Clinimetric Evaluation Scale shows good psychometric values and can be implemented in the management of Dutch-speaking patients with facial palsy in the Netherlands. Translation of the instrument into other languages may lead to widespread use, making evaluation and comparison possible among different providers.
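
    For readers less familiar with the psychometric statistics reported above, the sketch below shows how Cronbach's α and a simple test-retest correlation can be computed from a respondents-by-items score matrix in Python. The data are simulated placeholders, not the Dutch FaCE sample, and the study itself reports an intraclass correlation coefficient for test-retest reliability; a Pearson correlation is used here only as a simpler stand-in.

    # Cronbach's alpha from a respondents-by-items score matrix, plus a simple
    # test-retest Pearson correlation. Data below are simulated for illustration only.
    import numpy as np
    from scipy.stats import pearsonr

    def cronbach_alpha(scores):
        """scores: 2-D array, rows = respondents, columns = items."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(1)
    true_score = rng.normal(size=(93, 1))                      # 93 respondents, as in the study
    items = true_score + rng.normal(scale=0.7, size=(93, 15))  # 15 items sharing a common factor
    retest = items.sum(axis=1) + rng.normal(scale=2.0, size=93)

    print("Cronbach's alpha:", cronbach_alpha(items))
    print("test-retest r:", pearsonr(items.sum(axis=1), retest)[0])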

  3. Facial nerve conduction after sclerotherapy in children with facial lymphatic malformations: report of two cases.

    Science.gov (United States)

    Lin, Pei-Jung; Guo, Yuh-Cherng; Lin, Jan-You; Chang, Yu-Tang

    2007-04-01

    Surgical excision is thought to be the standard treatment of choice for lymphatic malformations. However, when the lesions are limited to the face only, surgical scar and facial nerve injury may impair cosmetics and facial expression. Sclerotherapy, an injection of a sclerosing agent directly through the skin into a lesion, is an alternative method. By evaluating facial nerve conduction, we observed the long-term effect of facial lymphatic malformations after intralesional injection of OK-432 and correlated the findings with anatomic outcomes. One 12-year-old boy with a lesion over the right-side preauricular area adjacent to the main trunk of facial nerve and the other 5-year-old boy with a lesion in the left-sided cheek involving the buccinator muscle were enrolled. The follow-up data of more than one year, including clinical appearance, computed tomography (CT) scan and facial nerve evaluation were collected. The facial nerve conduction study was normal in both cases. Blink reflex in both children revealed normal results as well. Complete resolution was noted on outward appearance and CT scan. The neurophysiologic data were compatible with good anatomic and functional outcomes. Our report suggests that the inflammatory reaction of OK-432 did not interfere with adjacent facial nerve conduction.

  4. Differences in Sequential Eye Movement Behavior between Taiwanese and American Viewers

    Directory of Open Access Journals (Sweden)

    Yen Ju eLee

    2016-05-01

    Full Text Available Knowledge of how information is sought in the visual world is useful for predicting and simulating human behavior. Taiwanese participants and American participants were instructed to judge the facial expression of a focal face that was flanked horizontally by other faces while their eye movements were monitored. The Taiwanese participants distributed their eye fixations more widely than American participants, started to look away from the focal face earlier than American participants, and spent a higher percentage of time looking at the flanking faces. Eye movement transition matrices also provided evidence that Taiwanese participants continually, and systematically shifted gaze between focal and flanking faces. Eye movement patterns were less systematic and less prevalent in American participants. This suggests that both cultures utilized different attention allocation strategies. The results highlight the importance of determining sequential eye movement statistics in cross-cultural research on the utilization of visual context.

  5. A comprehensive approach to long-standing facial paralysis based on lengthening temporalis myoplasty.

    Science.gov (United States)

    Labbè, D; Bussu, F; Iodice, A

    2012-06-01

    Long-standing peripheral monolateral facial paralysis in the adult has challenged otolaryngologists, neurologists and plastic surgeons for centuries. Notwithstanding, the ultimate goal of normality of the paralyzed hemi-face with symmetry at rest, and the achievement of a spontaneous symmetrical smile with corneal protection, has not been fully reached. At the beginning of the 20th century, the main options were neural reconstructions including accessory to facial nerve transfer and hypoglossal to facial nerve crossover. In the first half of the 20th century, various techniques for static correction with autologous temporalis muscle and fascia grafts were proposed, such as the techniques of Gillies (1934) and McLaughlin (1949). Cross-facial nerve grafts have been performed since the beginning of the 1970s, often with the attempt to transplant free muscle to restore active movements. However, these transplants were non-vascularized, and further evaluations revealed central fibrosis and minimal return of function. A major step was taken in the second half of the 1970s, with the introduction of microneurovascular muscle transfer in facial reanimation, which, often combined in two steps with a cross-facial nerve graft, has become the most popular option for the comprehensive treatment of long-standing facial paralysis. In the second half of the 1990s in France, a regional muscle transfer technique with the definite advantages of being one-step, technically easier and relatively fast, namely lengthening temporalis myoplasty, acquired popularity and consensus among surgeons treating facial paralysis. A total of 111 patients with facial paralysis were treated in Caen between 1997 and 2005 by a single surgeon who developed 2 variants of the technique (V1, V2), each with its advantages and disadvantages, but both based on the same anatomo-functional background and aim, which is transfer of the temporalis muscle tendon on the coronoid process to the lips. For a comprehensive

  6. Dynamic Displays Enhance the Ability to Discriminate Genuine and Posed Facial Expressions of Emotion

    Science.gov (United States)

    Namba, Shushi; Kabir, Russell S.; Miyatani, Makoto; Nakao, Takashi

    2018-01-01

    Accurately gauging the emotional experience of another person is important for navigating interpersonal interactions. This study investigated whether perceivers are capable of distinguishing between unintentionally expressed (genuine) and intentionally manipulated (posed) facial expressions attributed to four major emotions: amusement, disgust, sadness, and surprise. Sensitivity to this discrimination was explored by comparing unstaged dynamic and static facial stimuli and analyzing the results with signal detection theory. Participants indicated whether facial stimuli presented on a screen depicted a person showing a given emotion and whether that person was feeling a given emotion. The results showed that genuine displays were evaluated more as felt expressions than posed displays for all target emotions presented. In addition, sensitivity to the perception of emotional experience, or discriminability, was enhanced in dynamic facial displays, but was less pronounced in the case of static displays. This finding indicates that dynamic information in facial displays contributes to the ability to accurately infer the emotional experiences of another person. PMID:29896135
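
    Discriminability in the signal detection framework mentioned above is conventionally summarized as d', the difference between the z-transformed hit and false-alarm rates. A minimal sketch follows; the trial counts are invented for illustration and are not the study's data.

    # d' = z(hit rate) - z(false-alarm rate), with a simple correction to avoid
    # infinite z-scores at rates of 0 or 1. Counts below are illustrative only.
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # e.g. genuine displays judged as "felt" (hits) vs. posed displays judged as "felt" (false alarms)
    print(d_prime(hits=38, misses=12, false_alarms=20, correct_rejections=30))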

  7. Facial Nerve Paralysis due to a Pleomorphic Adenoma with the Imaging Characteristics of a Facial Nerve Schwannoma.

    Science.gov (United States)

    Nader, Marc-Elie; Bell, Diana; Sturgis, Erich M; Ginsberg, Lawrence E; Gidley, Paul W

    2014-08-01

    Background Facial nerve paralysis in a patient with a salivary gland mass usually denotes malignancy. However, facial paralysis can also be caused by benign salivary gland tumors. Methods We present a case of facial nerve paralysis due to a benign salivary gland tumor that had the imaging characteristics of an intraparotid facial nerve schwannoma. Results The patient presented to our clinic 4 years after the onset of facial nerve paralysis initially diagnosed as Bell palsy. Computed tomography demonstrated filling and erosion of the stylomastoid foramen with a mass on the facial nerve. Postoperative histopathology showed the presence of a pleomorphic adenoma. Facial paralysis was thought to be caused by extrinsic nerve compression. Conclusions This case illustrates the difficulty of accurate preoperative diagnosis of a parotid gland mass and reinforces the concept that facial nerve paralysis in the context of salivary gland tumors may not always indicate malignancy.

  8. Advances in facial reanimation.

    Science.gov (United States)

    Tate, James R; Tollefson, Travis T

    2006-08-01

    Facial paralysis often has a significant emotional impact on patients. Along with the myriad of new surgical techniques in managing facial paralysis comes the challenge of selecting the most effective procedure for the patient. This review delineates common surgical techniques and reviews state-of-the-art techniques. The options for dynamic reanimation of the paralyzed face must be examined in the context of several patient factors, including age, overall health, and patient desires. The best functional results are obtained with direct facial nerve anastomosis and interpositional nerve grafts. In long-standing facial paralysis, temporalis muscle transfer gives a dependable and quick result. Microvascular free tissue transfer is a reliable technique with reanimation potential whose results continue to improve as microsurgical expertise increases. Postoperative results can be improved with ancillary soft tissue procedures, as well as botulinum toxin. The paper provides an overview of recent advances in facial reanimation, including preoperative assessment, surgical reconstruction options, and postoperative management.

  9. Intratemporal and extratemporal facial nerve schwannoma: CT and MRI findings

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Keum Won [Pohang Medical Center, Pohang (Korea, Republic of); Lee, Ho Kyu; Shin, Ji Hoon; Choi, Choong Gon; Suh, Dae Chul [Asan Medical Center, Ulsan Univ. College of Medicine, Seoul (Korea, Republic of); Cheong, Hae Kwan [Dongguk Univ. College of Medicine, Seoul (Korea, Republic of)

    2001-05-01

    To analyze the characteristics of CT and MRI findings of facial nerve schwannoma in ten patients. Ten patients with pathologically confirmed facial nerve schwannoma underwent physical and radiologic examination. The latter involved MRI in all ten and CT scanning in six. We analyzed the location (epicenter), extent and number of involved segments of tumors, tumor morphology, and changes in adjacent bony structures. The major symptoms of facial nerve schwannoma were facial nerve paralysis in seven cases and hearing loss in six. Epicenters were detected at the intraparotid portion in five cases, the intracanalicular portion in two, the cisternal portion in one, and the intratemporal portion in two. The segment most frequently involved was the mastoid (n=6), followed by the parotid (n=5), intracanalicular (n=4), cisternal (n=2), the labyrinthine/geniculate ganglion (n=2) and the tympanic segment (n=1). Tumors affected two segments of the facial nerve in eight cases, only one segment in one, and four continuous segments in one. Morphologically, tumors were ice-cream cone shaped in the cisternal segment tumor (1/1), cone shaped in intracanalicular tumors (2/2), oval shaped in geniculate ganglion tumors (1/1), club shaped in intraparotid tumors (5/5) and bead shaped in the diffuse-type tumor (1/1). Changes in adjacent bony structures involved widening of the stylomastoid foramen in intraparotid tumors (5/5), widening of the internal auditory canal in intracanalicular and cisternal tumors (3/3), bony erosion of the geniculate fossa in geniculate ganglion tumors (2/2), and widening of the facial nerve canal in intratemporal and intraparotid tumors (6/6). The characteristic location, shape and changes in adjacent bony structures revealed by facial schwannomas on CT and MR examination lead to the correct diagnosis.

  10. Intratemporal and extratemporal facial nerve schwannoma: CT and MRI findings

    International Nuclear Information System (INIS)

    Kim, Keum Won; Lee, Ho Kyu; Shin, Ji Hoon; Choi, Choong Gon; Suh, Dae Chul; Cheong, Hae Kwan

    2001-01-01

    To analyze the characteristics of CT and MRI findings of facial nerve schwannoma in ten patients. Ten patients with pathologically confirmed facial nerve schwannoma underwent physical and radiologic examination. The latter involved MRI in all ten and CT scanning in six. We analyzed the location (epicenter), extent and number of involved segments of tumors, tumor morphology, and changes in adjacent bony structures. The major symptoms of facial nerve schwannoma were facial nerve paralysis in seven cases and hearing loss in six. Epicenters were detected at the intraparotid portion in five cases, the intracanalicular portion in two, the cisternal portion in one, and the intratemporal portion in two. The segment most frequently involved was the mastoid (n=6), followed by the parotid (n=5), intracanalicular (n=4), cisternal (n=2), the labyrinthine/geniculate ganglion (n=2) and the tympanic segment (n=1). Tumors affected two segments of the facial nerve in eight cases, only one segment in one, and four continuous segments in one. Morphologically, tumors were ice-cream cone shaped in the cisternal segment tumor (1/1), cone shaped in intracanalicular tumors (2/2), oval shaped in geniculate ganglion tumors (1/1), club shaped in intraparotid tumors (5/5) and bead shaped in the diffuse-type tumor (1/1). Changes in adjacent bony structures involved widening of the stylomastoid foramen in intraparotid tumors (5/5), widening of the internal auditory canal in intracanalicular and cisternal tumors (3/3), bony erosion of the geniculate fossa in geniculate ganglion tumors (2/2), and widening of the facial nerve canal in intratemporal and intraparotid tumors (6/6). The characteristic location, shape and changes in adjacent bony structures revealed by facial schwannomas on CT and MR examination lead to the correct diagnosis.

  11. Management of peripheral facial nerve palsy

    OpenAIRE

    Finsterer, Josef

    2008-01-01

    Peripheral facial nerve palsy (FNP) may (secondary FNP) or may not have a detectable cause (Bell's palsy). Three quarters of peripheral FNP are primary and one quarter secondary. The most prevalent causes of secondary FNP are systemic viral infections, trauma, surgery, diabetes, local infections, tumor, immunological disorders, or drugs. The diagnosis of FNP relies upon the presence of typical symptoms and signs, blood chemical investigations, cerebrospinal fluid investigations, X-ray of the...

  12. Effect of postoperative brachytherapy and external beam radiotherapy on functional outcomes of immediate facial nerve repair after radical parotidectomy.

    Science.gov (United States)

    Hontanilla, Bernardo; Qiu, Shan-Shan; Marré, Diego

    2014-01-01

    There is much controversy regarding the effect of radiotherapy on facial nerve regeneration. However, the effect of brachytherapy has not been studied. Fifty-three patients underwent total parotidectomy of which 13 were radical with immediate facial nerve repair with sural nerve grafts. Six patients (group 1) did not receive adjuvant treatment whereas 7 patients (group 2) received postoperative brachytherapy plus radiotherapy. Functional outcomes were compared using Facial Clima. Mean percentage of blink recovery was 92.6 ± 4.2 for group 1 and 90.7 ± 5.2 for group 2 (p = .37). Mean percentage of commissural excursion restoration was 78.1 ± 3.5 for group 1 and 74.9 ± 5.9 for group 2 (p = .17). Mean time from surgery to first movement was 5.7 ± 0.9 months for group 1 and 6.3 ± 0.5 months for group 2 (p = .15). Brachytherapy plus radiotherapy does not affect the functional outcomes of immediate facial nerve repair with nerve grafts. Copyright © 2013 Wiley Periodicals, Inc.

  13. Amygdala and fusiform gyrus temporal dynamics: Responses to negative facial expressions

    Directory of Open Access Journals (Sweden)

    Rauch Scott L

    2008-05-01

    Full Text Available Abstract Background The amygdala habituates in response to repeated human facial expressions; however, it is unclear whether this brain region habituates to schematic faces (i.e., simple line drawings or caricatures of faces). Using an fMRI block design, 16 healthy participants passively viewed repeated presentations of schematic and human neutral and negative facial expressions. Percent signal changes within anatomic regions-of-interest (amygdala and fusiform gyrus) were calculated to examine the temporal dynamics of neural response and any response differences based on face type. Results The amygdala and fusiform gyrus had a within-run "U" response pattern of activity to facial expression blocks. The initial block within each run elicited the greatest activation (relative to baseline) and the final block elicited greater activation than the preceding block. No significant differences between schematic and human faces were detected in the amygdala or fusiform gyrus. Conclusion The "U" pattern of response in the amygdala and fusiform gyrus to facial expressions suggests an initial orienting, habituation, and activation recovery in these regions. Furthermore, this study is the first to directly compare brain responses to schematic and human facial expressions, and the similarity in brain responses suggests that schematic faces may be useful in studying amygdala activation.

  14. Evolution of the 3-dimensional video system for facial motion analysis: ten years' experiences and recent developments.

    Science.gov (United States)

    Tzou, Chieh-Han John; Pona, Igor; Placheta, Eva; Hold, Alina; Michaelidou, Maria; Artner, Nicole; Kropatsch, Walter; Gerber, Hans; Frey, Manfred

    2012-08-01

    Since the implementation of the computer-aided system for assessing facial palsy in 1999 by Frey et al (Plast Reconstr Surg. 1999;104:2032-2039), no similar system that can make an objective, three-dimensional, quantitative analysis of facial movements has been marketed. This system has been in routine use since its launch, and it has proven to be reliable, clinically applicable, and therapeutically accurate. With the cooperation of international partners, more than 200 patients were analyzed. Recent developments in computer vision, mostly in the area of generative face models, applying active appearance models (and extensions), optical flow, and video tracking, have been successfully incorporated to automate the prototype system. Further market-ready development and a business partner will be needed to enable the production of this system to enhance clinical methodology in diagnostic and prognostic accuracy as a personalized therapy concept, leading to better results and higher quality of life for patients with impaired facial function.
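
    The automation described above builds on standard computer-vision components such as optical flow and video tracking. As a hedged illustration only (not the authors' actual pipeline), dense optical flow between two consecutive frames of a facial video can be estimated with OpenCV as follows; the input file name is an assumed placeholder.

    # Illustrative sketch: dense optical flow between two consecutive frames of a facial
    # video, e.g. to quantify overall movement magnitude. Not the clinical system's code.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("face_movement.mp4")   # assumed input file
    ok1, prev = cap.read()
    ok2, frame = cap.read()
    if ok1 and ok2:
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Farneback parameters: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)  # per-pixel displacement in pixels
        print("mean displacement between frames:", magnitude.mean())
    cap.release()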

  15. Computer Aided Facial Prosthetics Manufacturing System

    Directory of Open Access Journals (Sweden)

    Peng H.K.

    2016-01-01

    Full Text Available Facial deformities can impose a burden on the patient. There are many solutions for facial deformities, such as plastic surgery and facial prosthetics. However, the current fabrication method for facial prosthetics is costly and time consuming. This study aimed to identify a new method to construct a customized facial prosthesis. A 3D scanner, computer software and a 3D printer were used in this study. Results showed that the newly developed method can be used to produce a customized facial prosthesis. The advantages of the developed method over the conventional process are low cost and reduced material waste and pollution, in line with green manufacturing principles.

  16. Satisfaction with facial appearance and its determinants in adults with severe congenital facial disfigurement: a case-referent study.

    Science.gov (United States)

    Versnel, S L; Duivenvoorden, H J; Passchier, J; Mathijssen, I M J

    2010-10-01

    Patients with severe congenital facial disfigurement have a long track record of operations and hospital visits by the time they are 18 years old. The fact that their facial deformity is congenital may have an impact on how satisfied these patients are with their appearance. This study evaluated the level of satisfaction with facial appearance of congenital and of acquired facially disfigured adults, and explored demographic, physical and psychological determinants of this satisfaction. Differences compared with non-disfigured adults were examined. Fifty-nine adults with a rare facial cleft, 59 adults with a facial deformity traumatically acquired in adulthood, and a reference group of 201 non-disfigured adults completed standardised demographic, physical and psychological questionnaires. The congenital and acquired groups did not differ significantly in the level of satisfaction with facial appearance, but both were significantly less satisfied than the reference group. In facially disfigured adults, level of education, number of affected facial parts and facial function were determinants of the level of satisfaction. High fear of negative appearance evaluation by others (FNAE) and low self-esteem (SE) were strong psychological determinants. Although FNAE was higher in both patient groups, SE was similar in all three groups. Satisfaction with facial appearance of individuals with a congenital or acquired facial deformity is similar and will seldom reach the level of satisfaction of non-disfigured persons. A combination of surgical correction (with attention for facial profile and restoring facial functions) and psychological help (to increase SE and lower FNAE) may improve patient satisfaction. Copyright 2009 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  17. P2-28: An Amplification of Feedback from Facial Muscles Strengthened Sympathetic Activations to Emotional Facial Cues

    Directory of Open Access Journals (Sweden)

    Younbyoung Chae

    2012-10-01

    Full Text Available The facial feedback hypothesis suggests that feedback from cutaneous and muscular afferents influences our emotions during the control of facial expressions. Enhanced facial expressiveness is correlated with an increase in autonomic arousal and self-reported emotional experience, while limited facial expression attenuates these responses. The present study was aimed at investigating the difference in emotional response in imitated versus observed facial expressions. For this, we measured the facial electromyogram of the corrugator muscle as well as the skin conductance response (SCR) while participants were either imitating or simply observing emotional facial expressions. We found that participants produced significantly greater facial electromyogram activation during imitations compared to observations of angry faces. Similarly, they exhibited significantly greater SCR during imitations to angry faces compared to observations. An amplification of feedback from face muscles during imitation strengthened sympathetic activation to negative emotional cues. These findings suggest that manipulations of muscular feedback could modulate the bodily expression of emotion and perhaps also the emotional response itself.

  18. Exacerbation of Facial Motoneuron Loss after Facial Nerve Axotomy in CCR3-Deficient Mice

    Directory of Open Access Journals (Sweden)

    Derek A Wainwright

    2009-11-01

    Full Text Available We have previously demonstrated a neuroprotective mechanism of FMN (facial motoneuron) survival after facial nerve axotomy that is dependent on CD4+ Th2 cell interaction with peripheral antigen-presenting cells, as well as CNS (central nervous system)-resident microglia. PACAP (pituitary adenylate cyclase-activating polypeptide) is expressed by injured FMN and increases Th2-associated chemokine expression in cultured murine microglia. Collectively, these results suggest a model involving CD4+ Th2 cell migration to the facial motor nucleus after injury via microglial expression of Th2-associated chemokines. However, to respond to Th2-associated chemokines, Th2 cells must express the appropriate Th2-associated chemokine receptors. In the present study, we tested the hypothesis that Th2-associated chemokine receptors increase in the facial motor nucleus after facial nerve axotomy at timepoints consistent with significant T-cell infiltration. Microarray analysis of Th2-associated chemokine receptors was followed up with real-time PCR for CCR3, which indicated that facial nerve injury increases CCR3 mRNA levels in mouse facial motor nucleus. Unexpectedly, quantitative- and co-immunofluorescence revealed increased CCR3 expression localizing to FMN in the facial motor nucleus after facial nerve axotomy. Compared with WT (wild-type), a significant decrease in FMN survival 4 weeks after axotomy was observed in CCR3–/– mice. Additionally, compared with WT, a significant decrease in FMN survival 4 weeks after axotomy was observed in Rag2–/– (recombination activating gene-2)-deficient mice adoptively transferred CD4+ T-cells isolated from CCR3–/– mice, but not in CCR3–/– mice adoptively transferred CD4+ T-cells derived from WT mice. These results provide a basis for further investigation into the co-operation between CD4+ T-cell- and CCR3-mediated neuroprotection after FMN injury.

  19. Case analysis of temporal bone lesions with facial paralysis as main manifestation and literature review.

    Science.gov (United States)

    Chen, Wen-Jing; Ye, Jing-Ying; Li, Xin; Xu, Jia; Yi, Hai-Jin

    2017-08-23

    This study discusses the clinical characteristics, imaging findings and treatment of temporal bone lesions presenting mainly with facial paralysis, with the aim of deepening the understanding of this type of lesion and reducing erroneous and missed diagnoses. The clinical data of 16 patients with temporal bone lesions and facial paralysis as the main manifestation, who were diagnosed and treated from 2009 to 2016, were retrospectively analyzed. Among these patients, six patients had congenital petrous bone cholesteatoma (PBC), nine patients had facial nerve schwannoma, and one patient had facial nerve hemangioma. All the patients had a history of long-term misdiagnosis. The lesions were completely excised by surgery. PBC and primary facial nerve tumors were pathologically confirmed. Facial-hypoglossal nerve anastomosis was performed on two patients. Facial function recovered from House-Brackmann (HB) grade VI to grade V in one patient. In the other patient, the anastomosis failed due to severe facial nerve fibrosis, and facial function remained at HB grade VI. Postoperative recovery was good for all patients. No lesion recurrence was observed after 1-6 years of follow-up. For patients with progressive or complete facial paralysis, imaging examination should be completed in a timely manner, and PBC, primary facial nerve tumors and other temporal bone space-occupying lesions should be ruled out. Lesions should be detected early and proper intervention undertaken in order to reduce operative difficulty and complications and to increase the chance of facial nerve function reconstruction.

  20. Detecting the movement and spawning activity of bigheaded carps with environmental DNA

    Science.gov (United States)

    Erickson, Richard A.; Rees, Christopher B.; Coulter, Alison A.; Merkes, Christopher; McCalla, S. Grace; Touzinsky, Katherine F; Walleser, Liza R.; Goforth, Reuben R.; Amberg, Jon J.

    2016-01-01

    Bigheaded carps are invasive fishes threatening to invade the Great Lakes basin and establish spawning populations, and have been monitored using environmental DNA (eDNA). Not only does eDNA hold potential for detecting the presence of species, but may also allow for quantitative comparisons like relative abundance of species across time or space. We examined the relationships among bigheaded carp movement, hydrography, spawning and eDNA on the Wabash River, IN, USA. We found positive relationships between eDNA and movement and eDNA and hydrography. We did not find a relationship between eDNA and spawning activity in the form of drifting eggs. Our first finding demonstrates how eDNA may be used to monitor species abundance, whereas our second finding illustrates the need for additional research into eDNA methodologies. Current applications of eDNA are widespread, but the relatively new technology requires further refinement.

  1. Mensuração da evolução terapêutica com paquímetro digital na Paralisia Facial Periférica de Bell Measurement of therapeutic evolution using a digital caliper in Bell's peripheral facial palsy

    Directory of Open Access Journals (Sweden)

    Claudia Hosana da Maceno Salvador

    2012-01-01

    Full Text Available PURPOSE: to assess the use of a digital caliper in measuring facial mimic movements at different moments of speech therapy. METHOD: prospective longitudinal study of 20 subjects aged between 7 and 70 years, 13 female and 7 male, diagnosed with Bell's peripheral facial palsy and seen at the Facial Paralysis Outpatient Clinic of the otorhinolaryngology department of a public university hospital. A Digimess 100.174BL digital caliper, an instrument with a resolution of 0.00 mm/152.78 mm, was used. Measurements were taken during facial mimic movements, always from a fixed point to a mobile point, for the following structures: tragus and labial commissure, external corner of the eye and labial commissure, and internal corner of the eye and wing of the nose, before and after speech therapy. Movement incompetence was quantified as a simple percentage. The Wilcoxon signed-rank test was applied to check for possible differences between the two moments considered (with and without movement) for the variables of interest. RESULTS: the measurements showed a statistically significant result (p
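
    The pre/post comparison above rests on paired caliper distances and a Wilcoxon signed-rank test. The sketch below shows that computation in Python with SciPy; the measurements (in millimetres) are invented placeholders, not the study's data.

    # Paired pre- vs. post-therapy caliper distances (mm) for one facial movement,
    # compared with the Wilcoxon signed-rank test. Values are invented for illustration.
    from scipy.stats import wilcoxon

    pre  = [52.1, 49.8, 55.3, 51.0, 48.7, 53.2, 50.5, 54.1, 49.9, 52.8]
    post = [49.0, 47.5, 52.8, 48.9, 46.2, 51.0, 48.1, 51.7, 47.8, 50.3]

    stat, p_value = wilcoxon(pre, post)
    print(f"Wilcoxon statistic = {stat}, p = {p_value:.4f}")

    # Simple percentage change per subject, analogous to quantifying movement incompetence.
    changes = [100 * (b - a) / a for a, b in zip(pre, post)]
    print("mean percentage change:", sum(changes) / len(changes))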

  2. The MPI Facial Expression Database — A Validated Database of Emotional and Conversational Facial Expressions

    Science.gov (United States)

    Kaulard, Kathrin; Cunningham, Douglas W.; Bülthoff, Heinrich H.; Wallraven, Christian

    2012-01-01

    The ability to communicate is one of the core aspects of human life. For this, we use not only verbal but also nonverbal signals of remarkable complexity. Among the latter, facial expressions belong to the most important information channels. Despite the large variety of facial expressions we use in daily life, research on facial expressions has so far mostly focused on the emotional aspect. Consequently, most databases of facial expressions available to the research community also include only emotional expressions, neglecting the largely unexplored aspect of conversational expressions. To fill this gap, we present the MPI facial expression database, which contains a large variety of natural emotional and conversational expressions. The database contains 55 different facial expressions performed by 19 German participants. Expressions were elicited with the help of a method-acting protocol, which guarantees both well-defined and natural facial expressions. The method-acting protocol was based on every-day scenarios, which are used to define the necessary context information for each expression. All facial expressions are available in three repetitions, in two intensities, as well as from three different camera angles. A detailed frame annotation is provided, from which a dynamic and a static version of the database have been created. In addition to describing the database in detail, we also present the results of an experiment with two conditions that serve to validate the context scenarios as well as the naturalness and recognizability of the video sequences. Our results provide clear evidence that conversational expressions can be recognized surprisingly well from visual information alone. The MPI facial expression database will enable researchers from different research fields (including the perceptual and cognitive sciences, but also affective computing, as well as computer vision) to investigate the processing of a wider range of natural facial expressions

  3. Neurinomas of the facial nerve extending to the middle cranial fossa

    International Nuclear Information System (INIS)

    Ichikawa, Akimichi; Tanaka, Ryuichi; Matsumura, Kenichiro; Takeda, Norio; Ishii, Ryoji; Ito, Jusuke.

    1986-01-01

    Three cases with neurinomas of the facial nerve are reported, especially with regard to the computerized tomographic (CT) findings. All of them had a long history of facial-nerve dysfunction, associated with hearing loss over periods from several to twenty-five years. Intraoperative findings demonstrated that these tumors arose from the intrapetrous portion, the horizontal portion, or the geniculate portion of the facial nerve and that they were located in the middle cranial fossa. The histological diagnoses were neurinomas. CT scans of three cases demonstrated round and low-density masses with marginal high-density areas in the middle cranial fossa, in one associated with diffuse low-density areas in the left temporal and parietal lobes. The low-density areas on CT were thought to be cysts; this was confirmed by surgery. Enhanced CT scans showed irregular enhancement in one case and ring-like enhancement in two cases. High-resolution CT scans of the temporal bone in two cases revealed a soft tissue mass in the middle ear, a well-circumscribed irregular destruction of the anterior aspect of the petrous bone, and calcifications. These findings seemed to be significant features of the neurinomas of the facial nerve extending to the middle cranial fossa. We emphasize that bone-window CT of the temporal bone is most useful in detecting a neurinoma of the facial nerve in its early stage in order to preserve the facial- and acoustic-nerve functions. (author)

  4. Differences in Movement Pattern and Detectability between Males and Females Influence How Common Sampling Methods Estimate Sex Ratio.

    Directory of Open Access Journals (Sweden)

    João Fabrício Mota Rodrigues

    Full Text Available Sampling the biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficient are the sampling methods commonly used in biodiversity surveys in estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex-ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex ratio related patterns.

  5. Differences in Movement Pattern and Detectability between Males and Females Influence How Common Sampling Methods Estimate Sex Ratio.

    Science.gov (United States)

    Rodrigues, João Fabrício Mota; Coelho, Marco Túlio Pacheco

    2016-01-01

    Sampling the biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficient are the sampling methods commonly used in biodiversity surveys in estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio and whether differences in movement pattern and detectability between males and females produce biased estimates of sex-ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample sex ratio. Thus, prior knowledge regarding movement patterns and detectability for species is important information to guide field studies aiming to understand sex ratio related patterns.
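
    The virtual-ecologist approach described in the two records above can be illustrated with a small simulation in which males and females differ only in detectability, and the sampled sex ratio is compared with the true 1:1 ratio. Everything below is a toy sketch under assumed detection probabilities, not the authors' model.

    # Toy virtual-ecologist simulation: a population with a true 1:1 sex ratio is sampled
    # with sex-biased detectability, and the estimated sex ratio is compared to the truth.
    import numpy as np

    rng = np.random.default_rng(42)
    n_males, n_females = 500, 500
    p_detect_male, p_detect_female = 0.6, 0.3   # assumed detectability difference

    def sampled_sex_ratio(n_days):
        seen_males, seen_females = set(), set()
        for _ in range(n_days):
            seen_males.update(np.flatnonzero(rng.random(n_males) < p_detect_male))
            seen_females.update(np.flatnonzero(rng.random(n_females) < p_detect_female))
        return len(seen_males) / max(len(seen_females), 1)

    print("true ratio: 1.00, estimated with 3 sampling days:", round(sampled_sex_ratio(3), 2))
    print("true ratio: 1.00, estimated with 30 sampling days:", round(sampled_sex_ratio(30), 2))

    With only a few sampling days the estimate is biased toward the more detectable sex; with many days it converges toward the true ratio, mirroring the finding above that greater sampling effort improves sex-ratio estimates.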

  6. In-the-wild facial expression recognition in extreme poses

    Science.gov (United States)

    Yang, Fei; Zhang, Qian; Zheng, Chi; Qiu, Guoping

    2018-04-01

    In computer vision research, facial expression recognition is an active research problem. In recent years, the research has moved from the lab environment to in-the-wild circumstances. It is challenging, especially under extreme poses. Current expression detection systems typically try to avoid pose effects in order to remain generally applicable. In this work, we approach the problem from the opposite direction: we consider head pose explicitly and detect expressions within specific head poses. Our work includes two parts: detecting the head pose and grouping it into one pre-defined head-pose class, and performing facial expression recognition within each pose class. Our experiments show that the recognition results with pose-class grouping are much better than those of direct recognition without considering poses. We combine hand-crafted features (SIFT, LBP and geometric features) with deep learning features as the representation of the expressions. The hand-crafted features are added into the deep learning framework along with the high-level deep learning features. As a comparison, we implement SVM and random forest classifiers as the prediction models. To train and test our methodology, we labeled the face dataset with 6 basic expressions.
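
    A minimal, hedged sketch of the two-stage idea above (group faces by head pose, then classify expression within each pose group) is shown below using scikit-learn. The feature matrix stands in for the SIFT/LBP/geometric and deep features described in the record; all data are simulated placeholders.

    # Two-stage sketch: (1) assign each face to a coarse head-pose class, (2) train one
    # expression classifier per pose class. Features and labels are simulated stand-ins.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(7)
    n = 600
    features = rng.normal(size=(n, 128))        # stand-in for hand-crafted + deep features
    pose_class = rng.integers(0, 3, size=n)     # e.g. frontal / half-profile / profile
    expression = rng.integers(0, 6, size=n)     # 6 basic expressions

    per_pose_models = {}
    for pose in np.unique(pose_class):
        idx = pose_class == pose
        per_pose_models[pose] = SVC(kernel="rbf").fit(features[idx], expression[idx])

    def predict_expression(feat, pose):
        return per_pose_models[pose].predict(feat.reshape(1, -1))[0]

    print(predict_expression(features[0], pose_class[0]))

    Swapping a random forest in for the SVC inside the same loop mirrors the model comparison mentioned in the record.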

  7. Facial Nerve Paralysis due to a Pleomorphic Adenoma with the Imaging Characteristics of a Facial Nerve Schwannoma

    OpenAIRE

    Nader, Marc-Elie; Bell, Diana; Sturgis, Erich M.; Ginsberg, Lawrence E.; Gidley, Paul W.

    2014-01-01

    Background Facial nerve paralysis in a patient with a salivary gland mass usually denotes malignancy. However, facial paralysis can also be caused by benign salivary gland tumors. Methods We present a case of facial nerve paralysis due to a benign salivary gland tumor that had the imaging characteristics of an intraparotid facial nerve schwannoma. Results The patient presented to our clinic 4 years after the onset of facial nerve paralysis initially diagnosed as Bell palsy. Computed tomograph...

  8. Microcontroller based driver alertness detection systems to detect drowsiness

    Science.gov (United States)

    Adenin, Hasibah; Zahari, Rahimi; Lim, Tiong Hoo

    2018-04-01

    The advancement of embedded systems for detecting and preventing drowsiness in a vehicle is a major challenge for road traffic accident prevention. To prevent drowsiness while driving, it is necessary to have an alert system that can detect a decline in driver concentration and send a signal to the driver. Studies have shown that traffic accidents usually occur when the driver is distracted while driving. In this paper, we review a number of detection systems that monitor the concentration of a car driver and propose a portable Driver Alertness Detection System (DADS) that determines the level of concentration of the driver based on a pixelated coloration detection technique using facial recognition. A portable camera is placed at the front visor to capture facial expressions and eye activity. We evaluated DADS with 26 participants and achieved a 100% detection rate under good lighting conditions and a low detection rate at night.
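
    As a rough, hedged illustration of camera-based alertness monitoring (not the DADS implementation itself), OpenCV's stock Haar cascades can flag stretches of frames in which a face is visible but no open eyes are detected, a crude proxy for eye closure. The camera index, frame budget and alert threshold below are assumed placeholders.

    # Illustrative sketch: count consecutive frames with a detected face but no detected
    # eyes as a crude eye-closure (drowsiness) signal. Thresholds are assumed, not DADS'.
    import cv2

    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(0)           # assumed webcam mounted at the front visor
    closed_frames, ALERT_AFTER = 0, 15  # ~0.5 s at 30 fps, illustrative threshold

    for _ in range(900):                # ~30 s of monitoring at 30 fps
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, 1.3, 5)
        eyes = eye_cascade.detectMultiScale(gray, 1.3, 5)
        closed_frames = closed_frames + 1 if len(faces) and not len(eyes) else 0
        if closed_frames >= ALERT_AFTER:
            print("ALERT: possible drowsiness")  # a deployed system would signal the driver
            closed_frames = 0
    cap.release()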

  9. Detecting leaf pulvinar movements on NDVI time series of desert trees: a new approach for water stress detection.

    Directory of Open Access Journals (Sweden)

    Roberto O Chávez

    Full Text Available Heliotropic leaf movement or leaf 'solar tracking' occurs for a wide variety of plants, including many desert species and some crops. This has an important effect on the canopy spectral reflectance as measured from satellites. For this reason, monitoring systems based on spectral vegetation indices, such as the normalized difference vegetation index (NDVI), should account for heliotropic movements when evaluating the health condition of such species. In the hyper-arid Atacama Desert, Northern Chile, we studied seasonal and diurnal variations of MODIS and Landsat NDVI time series of plantation stands of the endemic species Prosopis tamarugo Phil., subject to different levels of groundwater depletion. As solar irradiation increased during the day and also during the summer, the paraheliotropic leaves of Tamarugo moved to an erectophile position (parallel to the sun rays) making the NDVI signal to drop. This way, Tamarugo stands with no water stress showed a positive NDVI difference between morning and midday (ΔNDVI mo-mi) and between winter and summer (ΔNDVI W-S). In this paper, we showed that the ΔNDVI mo-mi of Tamarugo stands can be detected using MODIS Terra and Aqua images, and the ΔNDVI W-S using Landsat or MODIS Terra images. Because pulvinar movement is triggered by changes in cell turgor, the effects of water stress caused by groundwater depletion can be assessed and monitored using ΔNDVI mo-mi and ΔNDVI W-S. For an 11-year time series without rainfall events, Landsat ΔNDVI W-S of Tamarugo stands showed a positive linear relationship with cumulative groundwater depletion. We conclude that both ΔNDVI mo-mi and ΔNDVI W-S have potential to detect early water stress of paraheliotropic vegetation.

  10. Detecting leaf pulvinar movements on NDVI time series of desert trees: a new approach for water stress detection.

    Science.gov (United States)

    Chávez, Roberto O; Clevers, Jan G P W; Verbesselt, Jan; Naulin, Paulette I; Herold, Martin

    2014-01-01

    Heliotropic leaf movement or leaf 'solar tracking' occurs for a wide variety of plants, including many desert species and some crops. This has an important effect on the canopy spectral reflectance as measured from satellites. For this reason, monitoring systems based on spectral vegetation indices, such as the normalized difference vegetation index (NDVI), should account for heliotropic movements when evaluating the health condition of such species. In the hyper-arid Atacama Desert, Northern Chile, we studied seasonal and diurnal variations of MODIS and Landsat NDVI time series of plantation stands of the endemic species Prosopis tamarugo Phil., subject to different levels of groundwater depletion. As solar irradiation increased during the day and also during the summer, the paraheliotropic leaves of Tamarugo moved to an erectophile position (parallel to the sun rays) making the NDVI signal to drop. This way, Tamarugo stands with no water stress showed a positive NDVI difference between morning and midday (ΔNDVI mo-mi) and between winter and summer (ΔNDVI W-S). In this paper, we showed that the ΔNDVI mo-mi of Tamarugo stands can be detected using MODIS Terra and Aqua images, and the ΔNDVI W-S using Landsat or MODIS Terra images. Because pulvinar movement is triggered by changes in cell turgor, the effects of water stress caused by groundwater depletion can be assessed and monitored using ΔNDVI mo-mi and ΔNDVI W-S. For an 11-year time series without rainfall events, Landsat ΔNDVI W-S of Tamarugo stands showed a positive linear relationship with cumulative groundwater depletion. We conclude that both ΔNDVI mo-mi and ΔNDVI W-S have potential to detect early water stress of paraheliotropic vegetation.
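
    The ΔNDVI indicators above follow directly from the index definition, NDVI = (NIR - Red) / (NIR + Red), differenced between acquisitions. A minimal sketch of the computation follows; all reflectance values are invented placeholders, not the MODIS or Landsat data used in the study.

    # NDVI = (NIR - Red) / (NIR + Red); the water-stress indicators above are simple
    # differences of NDVI between acquisitions. All reflectance values are invented.
    def ndvi(nir, red):
        return (nir - red) / (nir + red)

    # Assumed per-stand mean reflectances for four acquisitions.
    ndvi_morning = ndvi(nir=0.42, red=0.08)   # e.g. a morning overpass (MODIS Terra)
    ndvi_midday  = ndvi(nir=0.36, red=0.09)   # e.g. an early-afternoon overpass (MODIS Aqua)
    ndvi_winter  = ndvi(nir=0.45, red=0.07)
    ndvi_summer  = ndvi(nir=0.34, red=0.10)

    delta_mo_mi = ndvi_morning - ndvi_midday  # positive for unstressed Tamarugo stands
    delta_w_s   = ndvi_winter - ndvi_summer
    print(delta_mo_mi, delta_w_s)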

  11. Influence of multi-microphone signal enhancement algorithms on auditory movement detection in acoustically complex situations

    DEFF Research Database (Denmark)

    Lundbeck, Micha; Hartog, Laura; Grimm, Giso

    2017-01-01

    The influence of hearing aid (HA) signal processing on the perception of spatially dynamic sounds has not been systematically investigated so far. Previously, we observed that interfering sounds impaired the detectability of left-right source movements and reverberation that of near-far source...... movements for elderly hearing-impaired (EHI) listeners (Lundbeck et al., 2017). Here, we explored potential ways of improving these deficits with HAs. To that end, we carried out acoustic analyses to examine the impact of two beamforming algorithms and a binaural coherence-based noise reduction scheme...... on the cues underlying movement perception. While binaural cues remained mostly unchanged, there were greater monaural spectral changes and increases in signal-to-noise ratio and direct-to-reverberant sound ratio as a result of the applied processing. Based on these findings, we conducted a listening test...

  12. Deficits in Degraded Facial Affect Labeling in Schizophrenia and Borderline Personality Disorder.

    Science.gov (United States)

    van Dijke, Annemiek; van 't Wout, Mascha; Ford, Julian D; Aleman, André

    2016-01-01

    Although deficits in facial affect processing have been reported in schizophrenia as well as in borderline personality disorder (BPD), these disorders have not yet been directly compared on facial affect labeling. Using degraded stimuli portraying neutral, angry, fearful and happy facial expressions, we hypothesized more errors in labeling negative facial expressions in patients with schizophrenia compared to healthy controls. Patients with BPD were expected to have difficulty in labeling neutral expressions and to display a bias towards a negative attribution when wrongly labeling neutral faces. Patients with schizophrenia (N = 57) and patients with BPD (N = 30) were compared to patients with somatoform disorder (SoD, a psychiatric control group; N = 25) and healthy control participants (N = 41) on facial affect labeling accuracy and type of misattributions. Patients with schizophrenia showed deficits in labeling angry and fearful expressions compared to the healthy control group and patients with BPD showed deficits in labeling neutral expressions compared to the healthy control group. Schizophrenia and BPD patients did not differ significantly from each other when labeling any of the facial expressions. Compared to SoD patients, schizophrenia patients showed deficits on fearful expressions, but BPD did not significantly differ from SoD patients on any of the facial expressions. With respect to the type of misattributions, BPD patients mistook neutral expressions more often for fearful expressions compared to schizophrenia patients and healthy controls, and less often for happy compared to schizophrenia patients. These findings suggest that although schizophrenia and BPD patients demonstrate different as well as similar facial affect labeling deficits, BPD may be associated with a tendency to detect negative affect in neutral expressions.

  13. Deficits in Degraded Facial Affect Labeling in Schizophrenia and Borderline Personality Disorder.

    Directory of Open Access Journals (Sweden)

    Annemiek van Dijke

    Full Text Available Although deficits in facial affect processing have been reported in schizophrenia as well as in borderline personality disorder (BPD), these disorders have not yet been directly compared on facial affect labeling. Using degraded stimuli portraying neutral, angry, fearful and happy facial expressions, we hypothesized more errors in labeling negative facial expressions in patients with schizophrenia compared to healthy controls. Patients with BPD were expected to have difficulty in labeling neutral expressions and to display a bias towards a negative attribution when wrongly labeling neutral faces. Patients with schizophrenia (N = 57) and patients with BPD (N = 30) were compared to patients with somatoform disorder (SoD, a psychiatric control group; N = 25) and healthy control participants (N = 41) on facial affect labeling accuracy and type of misattributions. Patients with schizophrenia showed deficits in labeling angry and fearful expressions compared to the healthy control group and patients with BPD showed deficits in labeling neutral expressions compared to the healthy control group. Schizophrenia and BPD patients did not differ significantly from each other when labeling any of the facial expressions. Compared to SoD patients, schizophrenia patients showed deficits on fearful expressions, but BPD did not significantly differ from SoD patients on any of the facial expressions. With respect to the type of misattributions, BPD patients mistook neutral expressions more often for fearful expressions compared to schizophrenia patients and healthy controls, and less often for happy compared to schizophrenia patients. These findings suggest that although schizophrenia and BPD patients demonstrate different as well as similar facial affect labeling deficits, BPD may be associated with a tendency to detect negative affect in neutral expressions.

  14. [Prosopagnosia and facial expression recognition].

    Science.gov (United States)

    Koyama, Shinichi

    2014-04-01

    This paper reviews clinical neuropsychological studies that have indicated that the recognition of a person's identity and the recognition of facial expressions are processed by different cortical and subcortical areas of the brain. The fusiform gyrus, especially the right fusiform gyrus, plays an important role in the recognition of identity. The superior temporal sulcus, amygdala, and medial frontal cortex play important roles in facial-expression recognition. Both facial recognition and facial-expression recognition are highly intellectual processes that involve several regions of the brain.

  15. MR findings of facial nerve on oblique sagittal MRI using TMJ surface coil: normal vs peripheral facial nerve palsy

    International Nuclear Information System (INIS)

    Park, Yong Ok; Lee, Myeong Jun; Lee, Chang Joon; Yoo, Jeong Hyun

    2000-01-01

    To evaluate the findings of normal facial nerve, as seen on oblique sagittal MRI using a TMJ (temporomandibular joint) surface coil, and then to evaluate abnormal findings of peripheral facial nerve palsy. We retrospectively reviewed the MR findings of 20 patients with peripheral facial palsy and 50 normal facial nerves of 36 patients without facial palsy. All underwent oblique sagittal MRI using a TMJ surface coil. We analyzed the course, signal intensity, thickness, location, and degree of enhancement of the facial nerve. According to the angle made by the proximal parotid segment on the axis of the mastoid segment, course was classified as anterior angulation (obtuse and acute, or buckling), straight and posterior angulation. Among 50 normal facial nerves, 24 (48%) were straight, and 23 (46%) demonstrated anterior angulation; 34 (68%) showed iso signal intensity on T1WI. In the group of patients, course on the affected side was either straight (40%) or showed anterior angulation (55%), and signal intensity in 80% of cases was isointense. These findings were similar to those in the normal group, but in patients with post-traumatic or post-operative facial palsy, buckling of the course appeared. In 12 of 18 facial palsy cases (66.6%) in which contrast materials were administered, a normal facial nerve of the opposite facial canal showed mild enhancement on more than one segment, but on the affected side the facial nerve showed diffuse enhancement in all 14 patients with acute facial palsy. Eleven of these (79%) showed fair or marked enhancement on more than one segment, and in 12 (86%), mild enhancement of the proximal parotid segment was noted. Four of six chronic facial palsy cases (66.6%) showed atrophy of the facial nerve. When oblique sagittal MR images are obtained using a TMJ surface coil, enhancement of the proximal parotid segment of the facial nerve and fair or marked enhancement of at least one segment within the facial canal always suggests pathology of

  16. Facial neuroma masquerading as acoustic neuroma.

    Science.gov (United States)

    Sayegh, Eli T; Kaur, Gurvinder; Ivan, Michael E; Bloch, Orin; Cheung, Steven W; Parsa, Andrew T

    2014-10-01

    Facial nerve neuromas are rare benign tumors that may be initially misdiagnosed as acoustic neuromas when situated near the auditory apparatus. We describe a patient with a large cystic tumor with associated trigeminal, facial, audiovestibular, and brainstem dysfunction, which was suspicious for acoustic neuroma on preoperative neuroimaging. Intraoperative investigation revealed a facial nerve neuroma located in the cerebellopontine angle and internal acoustic canal. Gross total resection of the tumor via retrosigmoid craniotomy was curative. Transection of the facial nerve necessitated facial reanimation 4 months later via hypoglossal-facial cross-anastomosis. Clinicians should recognize the natural history, diagnostic approach, and management of this unusual and mimetic lesion. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Persistent facial pain conditions

    DEFF Research Database (Denmark)

    Forssell, Heli; Alstergren, Per; Bakke, Merete

    2016-01-01

    Persistent facial pains, especially temporomandibular disorders (TMD), are common conditions. As dentists are responsible for the treatment of most of these disorders, up-to date knowledge on the latest advances in the field is essential for successful diagnosis and management. The review covers...... TMD, and different neuropathic or putative neuropathic facial pains such as persistent idiopathic facial pain and atypical odontalgia, trigeminal neuralgia and painful posttraumatic trigeminal neuropathy. The article presents an overview of TMD pain as a biopsychosocial condition, its prevalence......, clinical features, consequences, central and peripheral mechanisms, diagnostic criteria (DC/TMD), and principles of management. For each of the neuropathic facial pain entities, the definitions, prevalence, clinical features, and diagnostics are described. The current understanding of the pathophysiology...

  18. Computed tomography in facial trauma

    International Nuclear Information System (INIS)

    Zilkha, A.

    1982-01-01

    Computed tomography (CT), plain radiography, and conventional tomography were performed on 30 patients with facial trauma. CT demonstrated bone and soft-tissue involvement. In all cases, CT was superior to tomography in the assessment of facial injury. It is suggested that CT follow plain radiography in the evaluation of facial trauma

  19. Head movements and postures as pain behavior

    Science.gov (United States)

    Al-Hamadi, Ayoub; Limbrecht-Ecklundt, Kerstin; Walter, Steffen; Traue, Harald C.

    2018-01-01

    Pain assessment can benefit from observation of pain behaviors, such as guarding or facial expression, and observational pain scales are widely used in clinical practice with nonverbal patients. However, little is known about head movements and postures in the context of pain. In this regard, we analyze videos of three publicly available datasets. The BioVid dataset was recorded with healthy participants subjected to painful heat stimuli. In the BP4D dataset, healthy participants performed a cold-pressor test and several other tasks (meant to elicit emotion). The UNBC dataset videos show shoulder pain patients during range-of-motion tests to their affected and unaffected limbs. In all videos, participants were sitting in an upright position. We studied head movements and postures that occurred during the painful and control trials by measuring head orientation from video over time, followed by analyzing posture and movement summary statistics and occurrence frequencies of typical postures and movements. We found significant differences between pain and control trials with analyses of variance and binomial tests. In BioVid and BP4D, pain was accompanied by head movements and postures that tended to be oriented downwards or towards the pain site. We also found differences in movement range and speed in all three datasets. The results suggest that head movements and postures should be considered for pain assessment and research. As additional pain indicators, they might improve pain management whenever behavior is assessed, especially in nonverbal individuals such as infants or patients with dementia. However, more research is first needed to identify specific head movements and postures in pain patients. PMID:29444153
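
    To illustrate the kind of posture and movement summary statistics mentioned above, the following sketch computes simple descriptors from a per-frame head-pitch trace. The function and data are hypothetical, not the authors' code or measurements:

```python
import numpy as np

def head_movement_summary(pitch_deg, fps=25.0):
    """Posture and movement descriptors from a per-frame head-pitch trace (degrees)."""
    pitch = np.asarray(pitch_deg, dtype=float)
    speed = np.abs(np.diff(pitch)) * fps          # degrees per second between frames
    return {
        "mean_pitch": pitch.mean(),               # overall posture (negative = oriented downwards)
        "movement_range": pitch.max() - pitch.min(),
        "mean_speed": speed.mean(),
    }

# Hypothetical pitch trace from one pain trial
print(head_movement_summary([-2.0, -3.5, -6.0, -8.2, -7.9, -5.1]))
```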

  20. Facial transplantation surgery introduction.

    Science.gov (United States)

    Eun, Seok-Chan

    2015-06-01

    Severely disfiguring facial injuries can have a devastating impact on the patient's quality of life. During the past decade, vascularized facial allotransplantation has progressed from an experimental possibility to a clinical reality in the fields of disease, trauma, and congenital malformations. This technique may now be considered a viable option for repairing complex craniofacial defects for which the results of autologous reconstruction remain suboptimal. Vascularized facial allotransplantation permits optimal anatomical reconstruction and provides desired functional, esthetic, and psychosocial benefits that are far superior to those achieved with conventional methods. Along with dramatic improvements in their functional statuses, patients regain the ability to make facial expressions such as smiling and to perform various functions such as smelling, eating, drinking, and speaking. The ideas in the 1997 movie "Face/Off" have now been realized in the clinical field. The objective of this article is to introduce this new surgical field, provide a basis for examining the status of the field of face transplantation, and stimulate and enhance facial transplantation studies in Korea.

  1. Facial reanimation with gracilis muscle transfer neurotized to cross-facial nerve graft versus masseteric nerve: a comparative study using the FACIAL CLIMA evaluating system.

    Science.gov (United States)

    Hontanilla, Bernardo; Marre, Diego; Cabello, Alvaro

    2013-06-01

    Longstanding unilateral facial paralysis is best addressed with microneurovascular muscle transplantation. Neurotization can be obtained from the cross-facial or the masseter nerve. The authors present a quantitative comparison of both procedures using the FACIAL CLIMA system. Forty-seven patients with complete unilateral facial paralysis underwent reanimation with a free gracilis transplant neurotized to either a cross-facial nerve graft (group I, n=20) or to the ipsilateral masseteric nerve (group II, n=27). Commissural displacement and commissural contraction velocity were measured using the FACIAL CLIMA system. Postoperative intragroup commissural displacement and commissural contraction velocity means of the reanimated versus the normal side were first compared using the independent samples t test. Mean percentage of recovery of both parameters were compared between the groups using the independent samples t test. Significant differences of mean commissural displacement and commissural contraction velocity between the reanimated side and the normal side were observed in group I (p=0.001 and p=0.014, respectively) but not in group II. Intergroup comparisons showed that both commissural displacement and commissural contraction velocity were higher in group II, with significant differences for commissural displacement (p=0.048). Mean percentage of recovery of both parameters was higher in group II, with significant differences for commissural displacement (p=0.042). Free gracilis muscle transfer neurotized by the masseteric nerve is a reliable technique for reanimation of longstanding facial paralysis. Compared with cross-facial nerve graft neurotization, this technique provides better symmetry and a higher degree of recovery. Therapeutic, III.
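
    The intergroup comparison of percentage of recovery rests on an independent samples t test. A minimal sketch with SciPy, using made-up recovery values rather than the study data:

```python
import numpy as np
from scipy import stats

# Hypothetical percentage-of-recovery values for commissural displacement
group_I = np.array([61.0, 55.2, 70.4, 58.7, 64.1])    # cross-facial nerve graft
group_II = np.array([78.3, 81.0, 69.5, 74.2, 80.6])   # masseteric nerve

t, p = stats.ttest_ind(group_I, group_II)
print(f"t = {t:.2f}, p = {p:.3f}")  # p < 0.05 would mirror the reported advantage of group II
```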

  2. Visual Scanning Patterns and Executive Function in Relation to Facial Emotion Recognition in Aging

    Science.gov (United States)

    Circelli, Karishma S.; Clark, Uraina S.; Cronin-Golomb, Alice

    2012-01-01

    Objective The ability to perceive facial emotion varies with age. Relative to younger adults (YA), older adults (OA) are less accurate at identifying fear, anger, and sadness, and more accurate at identifying disgust. Because different emotions are conveyed by different parts of the face, changes in visual scanning patterns may account for age-related variability. We investigated the relation between scanning patterns and recognition of facial emotions. Additionally, as frontal-lobe changes with age may affect scanning patterns and emotion recognition, we examined correlations between scanning parameters and performance on executive function tests. Methods We recorded eye movements from 16 OA (mean age 68.9) and 16 YA (mean age 19.2) while they categorized facial expressions and non-face control images (landscapes), and administered standard tests of executive function. Results OA were less accurate than YA at identifying fear (precognition of sad expressions and with scanning patterns for fearful, sad, and surprised expressions. Conclusion We report significant age-related differences in visual scanning that are specific to faces. The observed relation between scanning patterns and executive function supports the hypothesis that frontal-lobe changes with age may underlie some changes in emotion recognition. PMID:22616800

  3. The role of visual experience in the production of emotional facial expressions by blind people: a review.

    Science.gov (United States)

    Valente, Dannyelle; Theurel, Anne; Gentaz, Edouard

    2018-04-01

    Facial expressions of emotion are nonverbal behaviors that allow us to interact efficiently in social life and respond to events affecting our welfare. This article reviews 21 studies, published between 1932 and 2015, examining the production of facial expressions of emotion by blind people. It particularly discusses the impact of visual experience on the development of this behavior from birth to adulthood. After a discussion of three methodological considerations, the review of studies reveals that blind subjects demonstrate differing capacities for producing spontaneous expressions and voluntarily posed expressions. Seventeen studies provided evidence that blind and sighted individuals spontaneously produce the same pattern of facial expressions, even if some variations can be found, reflecting facial and body movements specific to blindness or differences in intensity and control of emotions in some specific contexts. This suggests that lack of visual experience does not seem to have a major impact when this behavior is generated spontaneously in real emotional contexts. In contrast, eight studies examining voluntary expressions indicate that blind individuals have difficulty posing emotional expressions. The opportunity for prior visual observation seems to affect performance in this case. Finally, we discuss three new directions for research to provide additional and strong evidence for the debate regarding the innate or the culture-constant learning character of the production of emotional facial expressions by blind individuals: the link between perception and production of facial expressions, the impact of display rules in the absence of vision, and the role of other channels in expression of emotions in the context of blindness.

  4. Are facial injuries really different? An observational cohort study comparing appearance concern and psychological distress in facial trauma and non-facial trauma patients.

    Science.gov (United States)

    Rahtz, Emmylou; Bhui, Kamaldeep; Hutchison, Iain; Korszun, Ania

    2018-01-01

    Facial injuries are widely assumed to lead to stigma and significant psychosocial burden. Experimental studies of face perception support this idea, but there is very little empirical evidence to guide treatment. This study sought to address the gap. Data were collected from 193 patients admitted to hospital following facial or other trauma. Ninety (90) participants were successfully followed up 8 months later. Participants completed measures of appearance concern and psychological distress (post-traumatic stress symptoms (PTSS), depressive symptoms, anxiety symptoms). Participants were classified by site of injury (facial or non-facial injury). The overall levels of appearance concern were comparable to those of the general population, and there was no evidence of more appearance concern among people with facial injuries. Women and younger people were significantly more likely to experience appearance concern at baseline. Baseline and 8-month psychological distress, although common in the sample, did not differ according to the site of injury. Changes in appearance concern were, however, strongly associated with psychological distress at follow-up. We conclude that although appearance concern is severe among some people with facial injury, it is not especially different to those with non-facial injuries or the general public; changes in appearance concern, however, appear to correlate with psychological distress. We therefore suggest that interventions might focus on those with heightened appearance concern and should target cognitive bias and psychological distress. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  5. Support vector machine-based facial-expression recognition method combining shape and appearance

    Science.gov (United States)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies have been classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that the individual variance of facial feature points exists irrespective of similar expressions, which can cause a reduction of the recognition accuracy. The appearance-based method has a limitation in that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information, based on the support vector machine (SVM). This research is novel in the following three ways as compared to previous works. First, the facial feature points are automatically detected by using an active appearance model. From these, the shape-based recognition is performed by using the ratios between the facial feature points based on the facial-action coding system. Second, the SVM, which is trained to recognize the same and different expression classes, is proposed to combine two matching scores obtained from the shape- and appearance-based recognitions. Finally, a single SVM is trained to discriminate four different expressions, such as neutral, a smile, anger, and a scream. By determining the expression of the input facial image whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous approaches and other fusion methods.
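
    The fusion step described above amounts to training an SVM on two matching scores per sample. A minimal sketch with scikit-learn on synthetic scores (not the authors' data or implementation):

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical matching-score pairs per sample: [shape_score, appearance_score]
X = np.array([[0.82, 0.75], [0.30, 0.41], [0.78, 0.69],
              [0.25, 0.33], [0.71, 0.80], [0.35, 0.28]])
y = np.array([1, 0, 1, 0, 1, 0])        # 1 = same expression class, 0 = different class

fusion_svm = SVC(kernel="rbf").fit(X, y)    # score-level fusion of the two matchers
print(fusion_svm.predict([[0.76, 0.72]]))   # fused decision for a new score pair
```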

  6. Lonely adolescents exhibit heightened sensitivity for facial cues of emotion.

    Science.gov (United States)

    Vanhalst, Janne; Gibb, Brandon E; Prinstein, Mitchell J

    2017-02-01

    Contradicting evidence exists regarding the link between loneliness and sensitivity to facial cues of emotion, as loneliness has been related to better but also to worse performance on facial emotion recognition tasks. This study aims to contribute to this debate and extends previous work by (a) focusing on both accuracy and sensitivity to detecting positive and negative expressions, (b) controlling for depressive symptoms and social anxiety, and (c) using an advanced emotion recognition task with videos of neutral adolescent faces gradually morphing into full-intensity expressions. Participants were 170 adolescents (49% boys; M age  = 13.65 years) from rural, low-income schools. Results showed that loneliness was associated with increased sensitivity to happy, sad, and fear faces. When controlling for depressive symptoms and social anxiety, loneliness remained significantly associated with sensitivity to sad and fear faces. Together, these results suggest that lonely adolescents are vigilant to negative facial cues of emotion.

  7. Air movement and perceived air quality

    DEFF Research Database (Denmark)

    Melikov, Arsen Krikor; Kaczmarczyk, J.

    2012-01-01

    The impact of air movement on perceived air quality (PAQ) and sick building syndrome (SBS) symptoms was studied. In total, 124 human subjects participated in four series of experiments performed in climate chambers at different combinations of room air temperature (20, 23, 26 and 28 °C), relative...... and the humidity of the room air. At a low humidity level of 30% an increased velocity could compensate for the decrease in perceived air quality due to an elevated temperature ranging from 20 °C to 26 °C. In a room with 26 °C, increased air movement was also able to compensate for an increase in humidity from 30...... humidity (30, 40 and 70%) and pollution level (low and high). Most of the experiments were performed with and without facially applied airflow at elevated velocity. The importance of the use of recirculated room air and clean, cool and dry outdoor air was studied. The exposures ranged from 60 min to 235...

  8. Misrecognition of facial expressions in delinquents

    Directory of Open Access Journals (Sweden)

    Matsuura Naomi

    2009-09-01

    Full Text Available Abstract Background Previous reports have suggested impairment in facial expression recognition in delinquents, but controversy remains with respect to how such recognition is impaired. To address this issue, we investigated facial expression recognition in delinquents in detail. Methods We tested 24 male adolescent/young adult delinquents incarcerated in correctional facilities. We compared their performances with those of 24 age- and gender-matched control participants. Using standard photographs of facial expressions illustrating six basic emotions, participants matched each emotional facial expression with an appropriate verbal label. Results Delinquents were less accurate in the recognition of facial expressions that conveyed disgust than were control participants. The delinquents misrecognized the facial expressions of disgust as anger more frequently than did controls. Conclusion These results suggest that one of the underpinnings of delinquency might be impaired recognition of emotional facial expressions, with a specific bias toward interpreting disgusted expressions as hostile angry expressions.

  9. Peripheral facial palsy in children.

    Science.gov (United States)

    Yılmaz, Unsal; Cubukçu, Duygu; Yılmaz, Tuba Sevim; Akıncı, Gülçin; Ozcan, Muazzez; Güzel, Orkide

    2014-11-01

    The aim of this study is to evaluate the types and clinical characteristics of peripheral facial palsy in children. The hospital charts of children diagnosed with peripheral facial palsy were reviewed retrospectively. A total of 81 children (42 female and 39 male) with a mean age of 9.2 ± 4.3 years were included in the study. Causes of facial palsy were 65 (80.2%) idiopathic (Bell palsy) facial palsy, 9 (11.1%) otitis media/mastoiditis, and tumor, trauma, congenital facial palsy, chickenpox, Melkersson-Rosenthal syndrome, enlarged lymph nodes, and familial Mediterranean fever (each 1; 1.2%). Five (6.1%) patients had recurrent attacks. In patients with Bell palsy, female/male and right/left ratios were 36/29 and 35/30, respectively. Of them, 31 (47.7%) had a history of preceding infection. The overall rate of complete recovery was 98.4%. A wide variety of disorders can present with peripheral facial palsy in children. Therefore, careful investigation and differential diagnosis is essential. © The Author(s) 2013.

  10. Facial expressions and pair bonds in hylobatids.

    Science.gov (United States)

    Florkiewicz, Brittany; Skollar, Gabriella; Reichard, Ulrich H

    2018-06-06

    Facial expressions are an important component of primate communication that functions to transmit social information and modulate intentions and motivations. Chimpanzees and macaques, for example, produce a variety of facial expressions when communicating with conspecifics. Hylobatids also produce various facial expressions; however, the origin and function of these facial expressions are still largely unclear. It has been suggested that larger facial expression repertoires may have evolved in the context of social complexity, but this link has yet to be tested at a broader empirical basis. The social complexity hypothesis offers a possible explanation for the evolution of complex communicative signals such as facial expressions, because as the complexity of an individual's social environment increases so does the need for communicative signals. We used an intraspecies, pair-focused study design to test the link between facial expressions and sociality within hylobatids, specifically the strength of pair-bonds. The current study compared 206 hr of video and 103 hr of focal animal data for ten hylobatid pairs from three genera (Nomascus, Hoolock, and Hylobates) living at the Gibbon Conservation Center. Using video footage, we explored 5,969 facial expressions along three dimensions: repertoire use, repertoire breadth, and facial expression synchrony [FES]. We then used focal animal data to compare dimensions of facial expressiveness to pair bond strength and behavioral synchrony. Hylobatids in our study overlapped in only half of their facial expressions (50%) with the only other detailed, quantitative study of hylobatid facial expressions, while 27 facial expressions were uniquely observed in our study animals. Taken together, hylobatids have a large facial expression repertoire of at least 80 unique facial expressions. Contrary to our prediction, facial repertoire composition was not significantly correlated with pair bond strength, rates of territorial synchrony

  11. Seven Non-melanoma Features to Rule Out Facial Melanoma

    Directory of Open Access Journals (Sweden)

    Philipp Tschandl

    2017-08-01

    Full Text Available Facial melanoma is difficult to diagnose and dermatoscopic features are often subtle. Dermatoscopic non-melanoma patterns may have a comparable diagnostic value. In this pilot study, facial lesions were collected retrospectively, resulting in a case set of 339 melanomas and 308 non-melanomas. Lesions were evaluated for the prevalence (> 50% of lesional surface) of 7 dermatoscopic non-melanoma features: scales, white follicles, erythema/reticular vessels, reticular and/or curved lines/fingerprints, structureless brown colour, sharp demarcation, and classic criteria of seborrhoeic keratosis. Melanomas had a lower number of non-melanoma patterns (p < 0.001). Scoring a lesion suspicious when no prevalent non-melanoma pattern is found resulted in a sensitivity of 88.5% and a specificity of 66.9% for the diagnosis of melanoma. Specificity was higher for solar lentigo (78.8%) and seborrhoeic keratosis (74.3%) and lower for actinic keratosis (61.4%) and lichenoid keratosis (25.6%). Evaluation of prevalent non-melanoma patterns can provide slightly lower sensitivity and higher specificity in detecting facial melanoma compared with already known malignant features.
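
    Sensitivity and specificity of the "suspicious if no prevalent non-melanoma pattern" rule follow directly from the confusion counts. A small sketch; the counts are illustrative, chosen only to roughly reproduce the reported figures for 339 melanomas and 308 non-melanomas:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts: ~300 of 339 melanomas flagged (TP), ~206 of 308 non-melanomas passed (TN)
sens, spec = sensitivity_specificity(tp=300, fn=39, tn=206, fp=102)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # ~88.5% and ~66.9%
```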

  12. Mapping (and modeling) physiological movements during EEG-fMRI recordings: the added value of the video acquired simultaneously.

    Science.gov (United States)

    Ruggieri, Andrea; Vaudano, Anna Elisabetta; Benuzzi, Francesca; Serafini, Marco; Gessaroli, Giuliana; Farinelli, Valentina; Nichelli, Paolo Frigio; Meletti, Stefano

    2015-01-15

    During resting-state EEG-fMRI studies in epilepsy, patients' spontaneous head-face movements occur frequently. We tested the usefulness of synchronous video recording to identify and model the fMRI changes associated with non-epileptic movements to improve sensitivity and specificity of fMRI maps related to interictal epileptiform discharges (IED). Categorization of different facial/cranial movements during EEG-fMRI was obtained for 38 patients [with benign epilepsy with centro-temporal spikes (BECTS, n=16); with idiopathic generalized epilepsy (IGE, n=17); focal symptomatic/cryptogenic epilepsy (n=5)]. We compared at single subject- and at group-level the IED-related fMRI maps obtained with and without additional regressors related to spontaneous movements. As secondary aim, we considered facial movements as events of interest to test the usefulness of video information to obtain fMRI maps of the following face movements: swallowing, mouth-tongue movements, and blinking. Video information substantially improved the identification and classification of the artifacts with respect to the EEG observation alone (mean gain of 28 events per exam). Inclusion of physiological activities as additional regressors in the GLM model demonstrated an increased Z-score and number of voxels of the global maxima and/or new BOLD clusters in around three quarters of the patients. Video-related fMRI maps for swallowing, mouth-tongue movements, and blinking were comparable to the ones obtained in previous task-based fMRI studies. Video acquisition during EEG-fMRI is a useful source of information. Modeling physiological movements in EEG-fMRI studies for epilepsy will lead to more informative IED-related fMRI maps in different epileptic conditions. Copyright © 2014 Elsevier B.V. All rights reserved.
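
    Modeling the video-identified movements comes down to adding nuisance regressors to the GLM design matrix. A toy sketch with NumPy least squares, using synthetic regressors and a synthetic voxel signal rather than the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200

# Hypothetical regressors (already convolved with a haemodynamic response function)
ied = rng.random(n_scans)          # interictal epileptiform discharges
swallow = rng.random(n_scans)      # video-identified swallowing events
blink = rng.random(n_scans)        # video-identified blinking events

bold = 0.8 * ied + 0.5 * swallow + rng.normal(0, 1, n_scans)   # toy voxel time series

# GLM with and without the video-derived nuisance regressors
X_full = np.column_stack([ied, swallow, blink, np.ones(n_scans)])
X_reduced = np.column_stack([ied, np.ones(n_scans)])

beta_full, *_ = np.linalg.lstsq(X_full, bold, rcond=None)
beta_reduced, *_ = np.linalg.lstsq(X_reduced, bold, rcond=None)
print(beta_full[0], beta_reduced[0])   # the IED estimate is cleaner in the full model
```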

  13. Facial talon cusps.

    LENUS (Irish Health Repository)

    McNamara, T

    1997-12-01

    This is a report of two patients with isolated facial talon cusps. One occurred on a permanent mandibular central incisor; the other on a permanent maxillary canine. The locations of these talon cusps suggests that the definition of a talon cusp include teeth in addition to the incisor group and be extended to include the facial aspect of teeth.

  14. Attention and memory bias to facial emotions underlying negative symptoms of schizophrenia.

    Science.gov (United States)

    Jang, Seon-Kyeong; Park, Seon-Cheol; Lee, Seung-Hwan; Cho, Yang Seok; Choi, Kee-Hong

    2016-01-01

    This study assessed bias in selective attention to facial emotions in negative symptoms of schizophrenia and its influence on subsequent memory for facial emotions. Thirty people with schizophrenia who had high and low levels of negative symptoms (n = 15, respectively) and 21 healthy controls completed a visual probe detection task investigating selective attention bias (happy, sad, and angry faces randomly presented for 50, 500, or 1000 ms). A yes/no incidental facial memory task was then completed. Attention bias scores and recognition errors were calculated. Those with high negative symptoms exhibited reduced attention to emotional faces relative to neutral faces; those with low negative symptoms showed the opposite pattern when faces were presented for 500 ms regardless of the valence. Compared to healthy controls, those with high negative symptoms made more errors for happy faces in the memory task. Reduced attention to emotional faces in the probe detection task was significantly associated with less pleasure and motivation and more recognition errors for happy faces in schizophrenia group only. Attention bias away from emotional information relatively early in the attentional process and associated diminished positive memory may relate to pathological mechanisms for negative symptoms.
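
    In a visual probe detection task, an attention bias score is commonly the mean reaction-time difference between probes replacing neutral versus emotional faces. A minimal sketch with hypothetical reaction times; the exact scoring formula used in the study is not given here:

```python
import numpy as np

def attention_bias(rt_probe_at_neutral, rt_probe_at_emotional):
    """Positive score = attention toward the emotional face; negative = attention away."""
    return np.mean(rt_probe_at_neutral) - np.mean(rt_probe_at_emotional)

# Hypothetical reaction times (ms) for 500-ms happy-face trials
rt_neutral_side = [512, 498, 530, 505]     # probe replaced the neutral face
rt_emotional_side = [545, 538, 551, 560]   # probe replaced the happy face

print(attention_bias(rt_neutral_side, rt_emotional_side))  # negative here: attention away
```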

  15. Does your profile say it all? Using demographics to predict expressive head movement during gameplay

    DEFF Research Database (Denmark)

    Asteriadis, Stylianos; Karpouzis, Kostas; Shaker, Noor

    2012-01-01

    interest (when the player loses during game play). Experiments were conducted on the Siren database, which consists of 58 participants, playing a modified version of the Super Mario. Here, as player demographics are considered the gender and age, while the statistical importance of certain facial cues......In this work, we explore the relation between expressive head movement and user profile information in game play settings. Facial gesture analysis cues are statistically correlated with players' demographic characteristics in two different settings, during game-play and at events of special...

  16. Imaging the Facial Nerve: A Contemporary Review

    International Nuclear Information System (INIS)

    Gupta, S.; Roehm, P.C.; Mends, F.; Hagiwara, M.; Fatterpekar, G.

    2013-01-01

    Imaging plays a critical role in the evaluation of a number of facial nerve disorders. The facial nerve has a complex anatomical course; thus, a thorough understanding of the course of the facial nerve is essential to localize the sites of pathology. Facial nerve dysfunction can occur from a variety of causes, which can often be identified on imaging. Computed tomography and magnetic resonance imaging are helpful for identifying bony facial canal and soft tissue abnormalities, respectively. Ultrasound of the facial nerve has been used to predict functional outcomes in patients with Bell’s palsy. More recently, diffusion tensor tractography has appeared as a new modality which allows three-dimensional display of facial nerve fibers

  17. Estudo da qualidade de vida em indivíduos com paralisia facial periférica crônica adquirida Study on quality of life in subjects with acquired chronic peripheral facial palsy

    Directory of Open Access Journals (Sweden)

    Rayné Moreira Melo Santos

    2012-08-01

    verified, and a closed-question interview about complaints related to facial movement was carried out, in order to check whether the facial palsy interfered with the social life of each subject. This was a cross-sectional study. Non-parametric Mann-Whitney and Fisher's exact tests, with a significance level of 5%, were used to analyze the data. RESULTS: the degrees of facial palsy were grouped as follows: I-II (normal to mild dysfunction), III-IV (moderate to moderately severe dysfunction) and V-VI (severe dysfunction to complete palsy), according to House & Brackmann. Regarding difficulties in professional and personal activities, the Bell's palsy individuals with normal to mild dysfunction reported no complaints, all of those with moderate to moderately severe dysfunction reported very severe complaints, and one individual with complete palsy reported many complaints. Among the acoustic schwannoma individuals, all of those classified as mild dysfunction reported no complaints, while among those with severe dysfunction to complete palsy, one individual reported many complaints in professional and personal activities. CONCLUSION: the acquired chronic peripheral facial palsy interfered with quality of life in subjects with more severe degrees of palsy.

  18. Speech Signal and Facial Image Processing for Obstructive Sleep Apnea Assessment

    Directory of Open Access Journals (Sweden)

    Fernando Espinoza-Cuadros

    2015-01-01

    Full Text Available Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). OSA is generally diagnosed through a costly procedure requiring an overnight stay of the patient at the hospital. This has led to proposing less costly procedures based on the analysis of patients’ facial images and voice recordings to help in OSA detection and severity assessment. In this paper we investigate the use of both image and speech processing to estimate the apnea-hypopnea index, AHI (which describes the severity of the condition), over a population of 285 male Spanish subjects suspected to suffer from OSA and referred to a Sleep Disorders Unit. Photographs and voice recordings were collected in a supervised but not highly controlled way trying to test a scenario close to an OSA assessment application running on a mobile device (i.e., smartphones or tablets). Spectral information in speech utterances is modeled by a state-of-the-art low-dimensional acoustic representation, called i-vector. A set of local craniofacial features related to OSA are extracted from images after detecting facial landmarks using Active Appearance Models (AAMs). Support vector regression (SVR) is applied on facial features and i-vectors to estimate the AHI.
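
    The estimation step combines craniofacial measurements and speech i-vectors in a support vector regression. A minimal sketch with scikit-learn on synthetic features; the feature definitions, dimensionalities, and hyperparameters are placeholders, not those used in the study:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_subjects = 60

# Hypothetical per-subject features: craniofacial measurements and a speech i-vector
facial = rng.normal(size=(n_subjects, 10))      # e.g., AAM-derived distances and angles
ivectors = rng.normal(size=(n_subjects, 100))   # low-dimensional acoustic representation
X = np.hstack([facial, ivectors])
ahi = rng.uniform(0, 60, n_subjects)            # apnea-hypopnea index labels

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, ahi)
print(model.predict(X[:3]))                     # estimated AHI for the first three subjects
```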

  19. Facial nerve palsy due to birth trauma

    Science.gov (United States)

    Seventh cranial nerve palsy due to birth trauma; Facial palsy - birth trauma; Facial palsy - neonate; Facial palsy - infant ... An infant's facial nerve is also called the seventh cranial nerve. It can be damaged just before or at the time of delivery. ...

  20. Exploring the Diagnostic Utility of Facial Composites: Beliefs of Guilt Can Bias Perceived Similarity between Composite and Suspect

    Science.gov (United States)

    Charman, Steve D.; Gregory, Amy Hyman; Carlucci, Marianna

    2009-01-01

    Facial composite research has generally focused on the investigative utility of composites--using composites to find suspects. However, almost no work has examined the diagnostic utility of facial composites--the extent to which composites can be used as evidence against a suspect. For example, detectives and jurors may use the perceived…

  1. Detecting rapid mass movements using electrical self-potential measurements

    Science.gov (United States)

    Heinze, Thomas; Limbrock, Jonas; Pudasaini, Shiva P.; Kemna, Andreas

    2017-04-01

    Rapid mass movements are a latent danger for lives and infrastructure in almost any part of the world. Often such mass movements are caused by increasing pore pressure, for example, landslides after heavy rainfall or dam breaking after intrusion of water into the dam. Among several other geophysical methods used to observe water movement, the electrical self-potential method has been applied to a broad range of monitoring studies, especially focusing on volcanism and dam leakage but also during hydraulic fracturing and for earthquake prediction. Electrical self-potential signals may be caused by various mechanisms. However, the most relevant source of the self-potential field in the given context is the streaming potential, caused by a flowing electrolyte through porous media with electrically charged internal surfaces. So far, existing models focus on monitoring water flow in non-deformable porous media. However, as the self-potential is sensitive to hydraulic parameters of the soil, any change in these parameters will cause an alteration of the electric signal. Mass movement will significantly influence the hydraulic parameters of the solid as well as the pressure field, assuming that fluid movement is faster than the pressure diffusion. We will present results of laboratory experiments under drained and undrained conditions with fluid-triggered as well as manually triggered mass movements, monitored with self-potential measurements. For the undrained scenarios, we observe a clear correlation between the mass movements and signals in the electric potential, which clearly differ from the underlying potential variations due to increased saturation and fluid flow. In the drained experiments, we do not observe any measurable change in the electric potential. We therefore assume that change in fluid properties and release of the load causes disturbances in flow and streaming potential. We will discuss results of numerical simulations reproducing the observed effect. Our

  2. Electrical and transcranial magnetic stimulation of the facial nerve: diagnostic relevance in acute isolated facial nerve palsy.

    Science.gov (United States)

    Happe, Svenja; Bunten, Sabine

    2012-01-01

    Unilateral facial weakness is common. Transcranial magnetic stimulation (TMS) allows identification of a conduction failure at the level of the canalicular portion of the facial nerve and may help to confirm the diagnosis. We retrospectively analyzed 216 patients with the diagnosis of peripheral facial palsy. The electrophysiological investigations included the blink reflex, preauricular electrical stimulation and the response to TMS at the labyrinthine part of the canalicular proportion of the facial nerve within 3 days after symptom onset. A similar reduction or loss of the TMS amplitude (p facial palsy without being specific for Bell's palsy. These data shed light on the TMS-based diagnosis of peripheral facial palsy, an ability to localize the site of lesion within the Fallopian channel regardless of the underlying pathology. Copyright © 2012 S. Karger AG, Basel.

  3. 5-HTTLPR modulates the recognition accuracy and exploration of emotional facial expressions

    Directory of Open Access Journals (Sweden)

    Sabrina eBoll

    2014-07-01

    Full Text Available Individual genetic differences in the serotonin transporter-linked polymorphic region (5-HTTLPR) have been associated with variations in the sensitivity to social and emotional cues as well as altered amygdala reactivity to facial expressions of emotion. Amygdala activation has further been shown to trigger gaze changes towards diagnostically relevant facial features. The current study examined whether altered socio-emotional reactivity in variants of the 5-HTTLPR promoter polymorphism reflects individual differences in attending to diagnostic features of facial expressions. For this purpose, visual exploration of emotional facial expressions was compared between a low (n=39) and a high (n=40) 5-HTT expressing group of healthy human volunteers in an eye tracking paradigm. Emotional faces were presented while manipulating the initial fixation such that saccadic changes towards the eyes and towards the mouth could be identified. We found that the low versus the high 5-HTT group demonstrated greater accuracy with regard to emotion classifications, particularly when faces were presented for a longer duration. No group differences in gaze orientation towards diagnostic facial features could be observed. However, participants in the low 5-HTT group exhibited more and faster fixation changes for certain emotions when faces were presented for a longer duration and overall face fixation times were reduced for this genotype group. These results suggest that the 5-HTT gene influences social perception by modulating the general vigilance to social cues rather than selectively affecting the pre-attentive detection of diagnostic facial features.

  4. Facial Action Units Recognition: A Comparative Study

    NARCIS (Netherlands)

    Popa, M.C.; Rothkrantz, L.J.M.; Wiggers, P.; Braspenning, R.A.C.; Shan, C.

    2011-01-01

    Many approaches to facial expression recognition focus on assessing the six basic emotions (anger, disgust, happiness, fear, sadness, and surprise). Real-life situations proved to produce many more subtle facial expressions. A reliable way of analyzing the facial behavior is the Facial Action Coding

  5. Pediatric facial injuries: It's management.

    Science.gov (United States)

    Singh, Geeta; Mohammad, Shadab; Pal, U S; Hariram; Malkunje, Laxman R; Singh, Nimisha

    2011-07-01

    Facial injuries in children always present a challenge with respect to their diagnosis and management. Since these children are still growing, every care should be taken so that the overall growth pattern of the facial skeleton is not jeopardized later. To assess the most feasible method for the management of facial injuries in children without hampering the facial growth. Sixty child patients with facial trauma were selected randomly for this study. On the basis of examination and investigations a suitable management approach involving rest and observation, open or closed reduction and immobilization, trans-osseous (TO) wiring, mini bone plate fixation, splinting and replantation, elevation and fixation of zygoma, etc. was carried out. In our study, falls were the predominant cause of most of the facial injuries in children. There was a 1.09% incidence of facial injuries in children up to 16 years of age amongst the total patients. The age-wise distribution of the fracture amongst groups (I, II and III) was found to be 26.67%, 51.67% and 21.67% respectively. Male to female patient ratio was 3:1. The majority of the cases of facial injuries were seen in Group II patients (6-11 years) i.e. 51.67%. The mandibular fracture was found to be the most common fracture (0.60%) followed by dentoalveolar (0.27%), mandibular + midface (0.07%) and midface (0.02%) fractures. Most of the mandibular fractures were found in the parasymphysis region. Simple fracture seems to be commonest in the mandible. Most of the mandibular and midface fractures in children were amenable to conservative therapies except a few which required surgical intervention.

  6. Effects of strong bite force on the facial vertical dimension of pembarong performers

    Directory of Open Access Journals (Sweden)

    C. Christina

    2017-06-01

    Full Text Available Background: A pembarong performer is a reog dancer who bites on a piece of wood inserted into his/her mouth in order to support a 60 kg Barongan or Dadak Merak mask. The teeth supporting this large and heavy mask are directly affected, as the strong bite force exerted during a dance could affect their vertical and sagittal facial dimensions. Purpose: This study aimed to examine the influence of the bite force of pembarong performers on their vertical and sagittal facial dimensions. Methods: The study reported here involved fifteen pembarong performers and thirteen individuals with normal occlusion (with specific criteria). The bite force of these subjects was measured with a dental prescale sensor during centric occlusion. A cephalometric variation measurement was subsequently performed on all subjects with its effects on their vertical and sagittal facial dimensions being measured. Results: The bite force value of the pembarong performers was 394.3816 ± 7.68787 Newtons, while that of the normal occlusion group was 371.7784 ± 4.77791 Newtons. There was no correlation between the bite force and the facial sagittal dimension of these subjects. However, a significant correlation did exist between bite force and the lower facial height/total facial height (LFH/TFH) ratio (p = 0.013). Conversely, no significant correlation between bite force and the posterior facial height/total facial height (PFH/TFH) ratio (p = 0.785) was detected. There was an inverse correlation between bite force and LFH/TFH ratio (r = -0.464). Conclusion: Bite force is directly related to the decrease in LFH/TFH ratio. Occlusal pressure exerted by the posterior teeth on the alveolar bone may increase bone density at the endosteal surface of cortical bone.

  7. Performance-driven facial animation: basic research on human judgments of emotional state in facial avatars.

    Science.gov (United States)

    Rizzo, A A; Neumann, U; Enciso, R; Fidaleo, D; Noh, J Y

    2001-08-01

    Virtual reality is rapidly evolving into a pragmatically usable technology for mental health (MH) applications. As the underlying enabling technologies continue to evolve and allow us to design more useful and usable structural virtual environments (VEs), the next important challenge will involve populating these environments with virtual representations of humans (avatars). This will be vital to create mental health VEs that leverage the use of avatars for applications that require human-human interaction and communication. As Alessi et al.1 pointed out at the 8th Annual Medicine Meets Virtual Reality Conference (MMVR8), virtual humans have mainly appeared in MH applications to "serve the role of props, rather than humans." More believable avatars inhabiting VEs would open up possibilities for MH applications that address social interaction, communication, instruction, assessment, and rehabilitation issues. They could also serve to enhance realism that might in turn promote the experience of presence in VR. Additionally, it will soon be possible to use computer-generated avatars that serve to provide believable dynamic facial and bodily representations of individuals communicating from a distance in real time. This could support the delivery, in shared virtual environments, of more natural human interaction styles, similar to what is used in real life between people. These techniques could enhance communication and interaction by leveraging our natural sensing and perceiving capabilities and offer the potential to model human-computer-human interaction after human-human interaction. To enhance the authenticity of virtual human representations, advances in the rendering of facial and gestural behaviors that support implicit communication will be needed. In this regard, the current paper presents data from a study that compared human raters' judgments of emotional expression between actual video clips of facial expressions and identical expressions rendered on a

  8. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Directory of Open Access Journals (Sweden)

    Vasanthan Maruthapillai

    Full Text Available In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  9. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Science.gov (United States)

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
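
    The feature extraction described above can be sketched as follows: distances from the eight tracked markers to the face centre are summarized per clip by mean, variance, and root mean square, then fed to a K-nearest-neighbor classifier. The marker tracks below are synthetic; this is an illustration under those assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def marker_features(markers, face_center):
    """Mean, variance and RMS of marker-to-face-centre distances over one clip.

    markers: array of shape (frames, 8, 2) holding the tracked virtual marker positions.
    """
    d = np.linalg.norm(markers - face_center, axis=2)        # (frames, 8) distances
    return np.concatenate([d.mean(0), d.var(0), np.sqrt((d ** 2).mean(0))])

# Hypothetical tracked markers for two short clips and their emotion labels
rng = np.random.default_rng(2)
clips = [rng.normal(100, 5, (30, 8, 2)), rng.normal(100, 8, (30, 8, 2))]
X = np.array([marker_features(c, np.array([100.0, 100.0])) for c in clips])
y = np.array([0, 1])                                         # e.g., 0 = happiness, 1 = anger

knn = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(knn.predict(X[:1]))
```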

  10. Virtual 3-D Facial Reconstruction

    Directory of Open Access Journals (Sweden)

    Martin Paul Evison

    2000-06-01

    Full Text Available Facial reconstructions in archaeology allow empathy with people who lived in the past and enjoy considerable popularity with the public. It is a common misconception that facial reconstruction will produce an exact likeness; a resemblance is the best that can be hoped for. Research at Sheffield University is aimed at the development of a computer system for facial reconstruction that will be accurate, rapid, repeatable, accessible and flexible. This research is described and prototypical 3-D facial reconstructions are presented. Interpolation models simulating obesity, ageing and ethnic affiliation are also described. Some strengths and weaknesses in the models, and their potential for application in archaeology are discussed.

  11. Facial Baroparesis Caused by Scuba Diving

    Directory of Open Access Journals (Sweden)

    Daisuke Kamide

    2012-01-01

    tympanic membrane and right facial palsy without other neurological findings. The facial palsy disappeared immediately after myringotomy. We considered that the etiology in this case was neuropraxia of the facial nerve in the middle ear caused by overpressure of the middle ear.

  12. Botulinum Toxin (Botox) for Facial Wrinkles

    Science.gov (United States)


  13. Repair of facial nerve defects with decellularized artery allografts containing autologous adipose-derived stem cells in a rat model.

    Science.gov (United States)

    Sun, Fei; Zhou, Ke; Mi, Wen-Juan; Qiu, Jian-Hua

    2011-07-20

    The purpose of this study was to investigate the effects of a decellularized artery allograft containing autologous adipose-derived stem cells (ADSCs) on an 8-mm facial nerve branch lesion in a rat model. At 8 weeks postoperatively, functional evaluation of unilateral vibrissae movements, morphological analysis of regenerated nerve segments and retrograde labeling of facial motoneurons were all analyzed. Better regenerative outcomes associated with functional improvement, great axonal growth, and improved target reinnervation were achieved in the artery-ADSCs group (group 2), whereas the cut nerves sutured with artery conduits alone (group 1) achieved inferior restoration. Furthermore, transected nerves repaired with nerve autografts (group 3) resulted in significant recovery of whisking, maturation of myelinated fibers and increased number of labeled facial neurons, and the latter two parameters were significantly different from those of group 2. Collectively, though our combined use of a decellularized artery allograft with autologous ADSCs achieved regenerative outcomes inferior to a nerve autograft, it certainly showed a beneficial effect on promoting nerve regeneration and thus represents an alternative approach for the reconstruction of peripheral facial nerve defects. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  14. Reconocimiento facial

    OpenAIRE

    Urtiaga Abad, Juan Alfonso

    2014-01-01

    This project deals with one of the most problematic fields of artificial intelligence: facial recognition. Something as simple for a person as recognizing a familiar face translates into complex algorithms and thousands of data points processed in a matter of seconds. The project begins with a study of the state of the art of the various facial recognition techniques, from the most widely used and proven, such as PCA and LDA, to experimental techniques that use ...

  15. Facial Displays Are Tools for Social Influence.

    Science.gov (United States)

    Crivelli, Carlos; Fridlund, Alan J

    2018-05-01

    Based on modern theories of signal evolution and animal communication, the behavioral ecology view of facial displays (BECV) reconceives our 'facial expressions of emotion' as social tools that serve as lead signs to contingent action in social negotiation. BECV offers an externalist, functionalist view of facial displays that is not bound to Western conceptions about either expressions or emotions. It easily accommodates recent findings of diversity in facial displays, their public context-dependency, and the curious but common occurrence of solitary facial behavior. Finally, BECV restores continuity of human facial behavior research with modern functional accounts of non-human communication, and provides a non-mentalistic account of facial displays well-suited to new developments in artificial intelligence and social robotics. Copyright © 2018 The Authors. Published by Elsevier Ltd.. All rights reserved.

  16. Does facial resemblance enhance cooperation?

    Directory of Open Access Journals (Sweden)

    Trang Giang

    Full Text Available Facial self-resemblance has been proposed to serve as a kinship cue that facilitates cooperation between kin. In the present study, facial resemblance was manipulated by morphing stimulus faces with the participants' own faces or control faces (resulting in self-resemblant or other-resemblant composite faces). A norming study showed that the perceived degree of kinship between the participants and the self-resemblant composite faces was higher than for actual first-degree relatives. Effects of facial self-resemblance on trust and cooperation were tested in a paradigm that has proven to be sensitive to facial trustworthiness, facial likability, and facial expression. First, participants played a cooperation game in which the composite faces were shown. Then, likability ratings were assessed. In a source memory test, participants were required to identify old and new faces, and were asked to remember whether the faces belonged to cooperators or cheaters in the cooperation game. Old-new recognition was enhanced for self-resemblant faces in comparison to other-resemblant faces. However, facial self-resemblance had no effects on the degree of cooperation in the cooperation game, on the emotional evaluation of the faces as reflected in the likability judgments, or on the expectation that a face belonged to a cooperator rather than to a cheater. Therefore, the present results are clearly inconsistent with the assumption of an evolved kin recognition module built into the human face recognition system.

  17. Facial skin care products and cosmetics.

    Science.gov (United States)

    Draelos, Zoe Diana

    2014-01-01

    Facial skin care products and cosmetics can either aid or incite facial dermatoses. Properly selected skin care can create an environment for barrier repair, aiding in the re-establishment of a healing biofilm and diminution of facial redness; however, skin care products that aggressively remove intercellular lipids or cause irritation must be eliminated before the red face will resolve. Cosmetics are an additive variable, either aiding or challenging facial skin health. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Facial nerve palsy after reactivation of herpes simplex virus type 1 in diabetic mice.

    Science.gov (United States)

    Esaki, Shinichi; Yamano, Koji; Katsumi, Sachiyo; Minakata, Toshiya; Murakami, Shingo

    2015-04-01

    Bell's palsy is highly associated with diabetes mellitus (DM). Either the reactivation of herpes simplex virus type 1 (HSV-1) or diabetic mononeuropathy has been proposed to cause the facial paralysis observed in DM patients. However, distinguishing whether the facial palsy is caused by herpetic neuritis or diabetic mononeuropathy is difficult. We previously reported that facial paralysis was aggravated in DM mice after HSV-1 inoculation of the murine auricle. In the current study, we induced HSV-1 reactivation by an auricular scratch following DM induction with streptozotocin (STZ). Controlled animal study. Diabetes mellitus was induced with streptozotocin injection only in mice that had developed transient facial nerve paralysis with HSV-1. Recurrent facial palsy was induced after HSV-1 reactivation by auricular scratch. After DM induction, the number of cluster of differentiation 3 (CD3)+ T cells decreased by 70% in the DM mice, and facial nerve palsy recurred in 13% of the DM mice. Herpes simplex virus type 1 deoxyribonucleic acid (DNA) was detected in the facial nerve of all of the DM mice with palsy, and HSV-1 capsids were found in the geniculate ganglion using electron microscopy. Herpes simplex virus type 1 DNA was also found in some of the DM mice without palsy, which suggested the subclinical reactivation of HSV-1. These results suggested that HSV-1 reactivation in the geniculate ganglion may be the main causative factor of the increased incidence of facial paralysis in DM patients. © 2014 The American Laryngological, Rhinological and Otological Society, Inc.

  19. Photographic Standards for Patients With Facial Palsy and Recommendations by Members of the Sir Charles Bell Society.

    Science.gov (United States)

    Santosa, Katherine B; Fattah, Adel; Gavilán, Javier; Hadlock, Tessa A; Snyder-Warwick, Alison K

    2017-07-01

    photographic standards for the population with facial palsy. Eighty-three of 151 members (55%) of the Sir Charles Bell Society responded to the survey. All survey respondents used photographic documentation, but there was variability in which facial expressions were used. Eighty-two percent (68 of 83) used some form of videography. From these data, we propose a set of minimum photographic standards for patients with facial palsy, including the following 10 static views: at rest or repose, small closed-mouth smile, large smile showing teeth, elevation of eyebrows, closure of eyes gently, closure of eyes tightly, puckering of lips, showing bottom teeth, snarling or wrinkling of the nose, and nasal base view. There is no consensus on photographic standardization to report outcomes for patients with facial palsy. Minimum photographic standards for facial paralysis publications are proposed. Videography of the dynamic movements of these views should also be recorded. NA.

  20. The use of digital image speckle correlation to measure the mechanical properties of skin and facial muscular activity

    Science.gov (United States)

    Staloff, Isabelle Afriat

    Skin mechanical properties have been extensively studied and have led to an understanding of the structure and role of the collagen and elastin fibers network in the dermis and their changes due to aging. All these techniques have either isolated the skin from its natural environment (in vitro), or, when studied in vivo, attempted to minimize the effect of the underlying tissues and muscles. The human facial region is unique compared to the other parts of the body in that the underlying musculature runs through the subcutaneous tissue and is directly connected to the dermis with collagen based fibrous tissues. These fibrous tissues comprise the superficial musculoaponeurotic system, commonly referred to as the SMAS layer. Retaining ligaments anchor the skin to the periosteum, and hold the dermis to the SMAS. In addition, traditional techniques generally collect an average response of the skin. Data gathered in this manner is incomplete as the skin is anisotropic and under constant tension. We therefore introduce the Digital Image Speckle Correlation (DISC) method that maps in two dimensions the skin deformation under the complex set of forces involved during muscular activity. DISC, a non-contact in vivo technique, generates spatially resolved information. By observing the detailed motion of the facial skin we can infer the manner in which the complex ensemble of forces induced by movement of the muscles distribute and dissipate on the skin. By analyzing the effect of aging on the distribution of these complex forces we can measure its impact on skin elasticity and quantify the efficacy of skin care products. In addition, we speculate on the mechanism of wrinkle formation. Furthermore, we investigate the use of DISC to map the mechanism of film formation of various polymers on skin. Finally, we show that DISC can detect the involuntary facial muscular activity induced by various fragrances.
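
    The abstract describes DISC only in general terms. The sketch below illustrates the displacement estimate that digital image correlation methods of this kind rely on: a small subset of the reference image is matched against a search window in the deformed image by normalized cross-correlation. The subset size, search range and toy speckle pattern are illustrative assumptions, not the author's implementation.

    # Minimal sketch of the displacement estimate at the heart of digital image
    # speckle correlation (DISC): a subset of the reference image is matched
    # against a search window in the deformed image via normalized cross-correlation.
    import numpy as np

    def zncc(a, b):
        """Zero-normalized cross-correlation of two equally sized patches."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
        return (a * b).sum() / denom if denom > 0 else 0.0

    def subset_displacement(ref, cur, y, x, half=15, search=8):
        """Integer-pixel displacement of the subset centred at (y, x)."""
        tpl = ref[y - half:y + half + 1, x - half:x + half + 1]
        best, best_dv = -2.0, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                win = cur[y + dy - half:y + dy + half + 1,
                          x + dx - half:x + dx + half + 1]
                score = zncc(tpl, win)
                if score > best:
                    best, best_dv = score, (dy, dx)
        return best_dv  # (dy, dx) in pixels; sub-pixel refinement would follow

    # Toy usage: a speckle pattern shifted by (2, -3) pixels is recovered.
    rng = np.random.default_rng(1)
    ref = rng.random((128, 128))
    cur = np.roll(ref, shift=(2, -3), axis=(0, 1))
    print(subset_displacement(ref, cur, 64, 64))   # -> (2, -3)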

  1. Sleep-related movement disorders.

    Science.gov (United States)

    Merlino, Giovanni; Gigli, Gian Luigi

    2012-06-01

    Several movement disorders may occur during nocturnal rest disrupting sleep. A part of these complaints is characterized by relatively simple, non-purposeful and usually stereotyped movements. The last version of the International Classification of Sleep Disorders includes these clinical conditions (i.e. restless legs syndrome, periodic limb movement disorder, sleep-related leg cramps, sleep-related bruxism and sleep-related rhythmic movement disorder) under the category entitled sleep-related movement disorders. Moreover, apparently physiological movements (e.g. alternating leg muscle activation and excessive hypnic fragmentary myoclonus) can show a high frequency and severity impairing sleep quality. Clinical and, in specific cases, neurophysiological assessments are required to detect the presence of nocturnal movement complaints. Patients reporting poor sleep due to these abnormal movements should undergo non-pharmacological or pharmacological treatments.

  2. Real-time monitoring system for elderly people in detecting falling movement using accelerometer and gyroscope

    Science.gov (United States)

    Siregar, B.; Andayani, U.; Bahri, R. P.; Seniman; Fahmi, F.

    2018-03-01

    Most elderly people experience a decline in physical condition, especially weakness in the legs. This makes the elderly prone to falling, which can have a serious impact on their health if help does not arrive very quickly. It is, therefore, necessary to take immediate action when an elderly person falls. One such action is to develop supervision and detection of falling movements in real time, with notification then sent to a member of the family. In this research, we used an Arduino Uno as the microcontroller, with an accelerometer and gyroscope to measure the falling movement of the elderly person, supported by Ublox Neo 6M GPS technology to provide coordinate information. The result was high accuracy of notification data delivery to the server, with the accuracy of data delivery to the family notification equal to 93.75%. The system successfully detects the direction of falling (forward, backward, left or right) and is able to distinguish between an unintentional fall and a deliberate movement such as bowing or a prostrate position.
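
    The record does not give the detection rule itself. The following Python sketch shows one common threshold-based pattern (a near-free-fall dip followed by an impact spike, with a crude direction estimate from the resting orientation) that such an accelerometer/gyroscope system could use; the thresholds and axis conventions are assumptions, not values from the paper.

    # Illustrative threshold-based fall detection; constants are assumed, not
    # taken from the study.
    import math

    IMPACT_G = 2.5      # impact spike threshold, in g (assumed)
    FREEFALL_G = 0.4    # near-free-fall threshold, in g (assumed)

    def detect_fall(samples):
        """samples: list of (ax, ay, az) accelerometer readings in g.
        Returns the index of a free-fall-then-impact pattern, or None."""
        freefall_at = None
        for i, (ax, ay, az) in enumerate(samples):
            mag = math.sqrt(ax * ax + ay * ay + az * az)
            if mag < FREEFALL_G:
                freefall_at = i
            elif freefall_at is not None and mag > IMPACT_G:
                return i                      # impact shortly after free fall
        return None

    def fall_direction(ax, ay):
        """Crude direction estimate from the post-impact orientation."""
        if abs(ax) > abs(ay):
            return "forward" if ax > 0 else "backward"
        return "right" if ay > 0 else "left"

    # Toy usage: quiet standing, a free-fall dip, then an impact spike.
    trace = [(0.0, 0.0, 1.0)] * 10 + [(0.0, 0.0, 0.2)] * 3 + [(2.0, 0.5, 2.0)]
    idx = detect_fall(trace)
    if idx is not None:
        print("fall detected at sample", idx, "direction:", fall_direction(*trace[idx][:2]))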

  3. Pseudotumoural hypertrophic neuritis of the facial nerve

    OpenAIRE

    Zanoletti, E; Mazzoni, A; Barbò, R

    2008-01-01

    In a retrospective study of our cases of recurrent paralysis of the facial nerve of tumoural and non-tumoural origin, a tumour-like lesion of the intra-temporal course of the facial nerve, mimicking facial nerve schwannoma, was found and investigated in 4 cases. This was defined as pseudotumoural hypertrophic neuritis of the facial nerve. The picture was one of recurrent acute facial palsy with incomplete recovery and imaging of a benign tumour. It was different from the well-known recurrent ...

  4. Design and Lab Experiment of a Stress Detection Service based on Mouse Movements

    OpenAIRE

    Kowatsch, Tobias; Wahle, Fabian; Filler, Andreas

    2017-01-01

    Workplace stress can negatively affect the health condition of employees and, with it, the performance of organizations. Although there exist approaches to measure work-related stress, two major limitations are the low resolution of stress data and its obtrusive measurement. The current work applies design science research with the goal to design, implement and evaluate a Stress Detection Service (SDS) that senses the degree of work-related stress solely based on the mouse movements of knowledge workers.
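
    The abstract does not specify which mouse-movement measures the SDS computes. As a hedged illustration, the sketch below derives a few plausible features (speed, speed variability, path length, pause ratio) from timestamped cursor events; the feature names and the toy event list are assumptions, not the authors' design.

    # Illustrative sketch (not the authors' SDS) of mouse-movement features that
    # a stress-detection service could compute from timestamped cursor events.
    import numpy as np

    def mouse_features(events):
        """events: iterable of (t, x, y) rows for one work interval."""
        t, x, y = np.asarray(events, dtype=float).T
        dt = np.diff(t)
        step = np.hypot(np.diff(x), np.diff(y))
        speed = step / np.where(dt > 0, dt, np.nan)
        return {
            "mean_speed": np.nanmean(speed),
            "speed_variability": np.nanstd(speed),
            "path_length": step.sum(),
            "pause_ratio": np.mean(step < 1.0),   # fraction of near-still steps
        }

    print(mouse_features([(0.00, 10, 10), (0.05, 14, 12), (0.10, 14, 12), (0.15, 30, 25)]))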

  5. Imaging of the facial nerve

    Energy Technology Data Exchange (ETDEWEB)

    Veillon, F. [Service de Radiologie I, Hopital de Hautepierre, 67098 Strasbourg Cedex (France)], E-mail: Francis.Veillon@chru-strasbourg.fr; Ramos-Taboada, L.; Abu-Eid, M. [Service de Radiologie I, Hopital de Hautepierre, 67098 Strasbourg Cedex (France); Charpiot, A. [Service d' ORL, Hopital de Hautepierre, 67098 Strasbourg Cedex (France); Riehm, S. [Service de Radiologie I, Hopital de Hautepierre, 67098 Strasbourg Cedex (France)

    2010-05-15

    The facial nerve is responsible for the motor innervation of the face. It has a visceral motor function (lacrimal, submandibular, sublingual glands and secretion of the nose); it conveys a great part of the taste fibers, participates in the general sensory innervation of the auricle (skin of the concha) and the wall of the external auditory meatus. Facial mimicry, production of tears, nasal flow and salivation all depend on the facial nerve. In order to image the facial nerve it is mandatory to be knowledgeable about its normal anatomy, including the course of its efferent and afferent fibers, and about relevant technical considerations regarding CT and MR to be able to achieve high-resolution images of the nerve.

  6. Reconstruction of Multiple Facial Nerve Branches Using Skeletal Muscle-Derived Multipotent Stem Cell Sheet-Pellet Transplantation.

    Directory of Open Access Journals (Sweden)

    Kosuke Saito

    Full Text Available Head and neck cancer is often diagnosed at advanced stages, and surgical resection with wide margins is generally indicated, despite this treatment being associated with poor postoperative quality of life (QOL). We have previously reported on the therapeutic effects of skeletal muscle-derived multipotent stem cells (Sk-MSCs), which exert reconstitution capacity for muscle-nerve-blood vessel units. Recently, we further developed a 3D patch-transplantation system using Sk-MSC sheet-pellets. The aim of this study is the application of the 3D Sk-MSC transplantation system to the reconstitution of facial complex nerve-vascular networks after severe damage. Mouse experiments were performed for histological analysis and rats were used for functional examinations. The Sk-MSC sheet-pellets were prepared from GFP-Tg mice and SD rats, and were transplanted into the facial resection model (ST). Culture medium was transplanted as a control (NT). In the mouse experiment, facial-nerve-palsy (FNP) scoring was performed weekly during the recovery period, and immunohistochemistry was used for the evaluation of histological recovery after 8 weeks. In rats, contractility of facial muscles was measured via electrical stimulation of facial nerves root, as the marker of total functional recovery at 8 weeks after transplantation. The ST-group showed significantly higher FNP (about three fold) scores when compared to the NT-group after 2-8 weeks. Similarly, significant functional recovery of whisker movement muscles was confirmed in the ST-group at 8 weeks after transplantation. In addition, engrafted GFP+ cells formed complex branches of nerve-vascular networks, with differentiation into Schwann cells and perineurial/endoneurial cells, as well as vascular endothelial and smooth muscle cells. Thus, Sk-MSC sheet-pellet transplantation is potentially useful for functional reconstitution therapy of large defects in facial nerve-vascular networks.

  7. Reconstruction of Multiple Facial Nerve Branches Using Skeletal Muscle-Derived Multipotent Stem Cell Sheet-Pellet Transplantation.

    Science.gov (United States)

    Saito, Kosuke; Tamaki, Tetsuro; Hirata, Maki; Hashimoto, Hiroyuki; Nakazato, Kenei; Nakajima, Nobuyuki; Kazuno, Akihito; Sakai, Akihiro; Iida, Masahiro; Okami, Kenji

    2015-01-01

    Head and neck cancer is often diagnosed at advanced stages, and surgical resection with wide margins is generally indicated, despite this treatment being associated with poor postoperative quality of life (QOL). We have previously reported on the therapeutic effects of skeletal muscle-derived multipotent stem cells (Sk-MSCs), which exert reconstitution capacity for muscle-nerve-blood vessel units. Recently, we further developed a 3D patch-transplantation system using Sk-MSC sheet-pellets. The aim of this study is the application of the 3D Sk-MSC transplantation system to the reconstitution of facial complex nerve-vascular networks after severe damage. Mouse experiments were performed for histological analysis and rats were used for functional examinations. The Sk-MSC sheet-pellets were prepared from GFP-Tg mice and SD rats, and were transplanted into the facial resection model (ST). Culture medium was transplanted as a control (NT). In the mouse experiment, facial-nerve-palsy (FNP) scoring was performed weekly during the recovery period, and immunohistochemistry was used for the evaluation of histological recovery after 8 weeks. In rats, contractility of facial muscles was measured via electrical stimulation of facial nerves root, as the marker of total functional recovery at 8 weeks after transplantation. The ST-group showed significantly higher FNP (about three fold) scores when compared to the NT-group after 2-8 weeks. Similarly, significant functional recovery of whisker movement muscles was confirmed in the ST-group at 8 weeks after transplantation. In addition, engrafted GFP+ cells formed complex branches of nerve-vascular networks, with differentiation into Schwann cells and perineurial/endoneurial cells, as well as vascular endothelial and smooth muscle cells. Thus, Sk-MSC sheet-pellet transplantation is potentially useful for functional reconstitution therapy of large defects in facial nerve-vascular networks.

  8. Serum levels of IGF-1 are related to human skin characteristics including the conspicuousness of facial pores.

    Science.gov (United States)

    Sugiyama-Nakagiri, Y; Naoe, A; Ohuchi, A; Kitahara, T

    2011-04-01

    Conspicuous facial pores are one type of serious aesthetic defects for many women. However, the mechanism(s) that underlie the conspicuousness of facial pores remains unclear. We previously characterized the epidermal architecture around facial pores that correlates with the appearance of those pores in various ethnic groups including Japanese. The goal of this study was to evaluate the possible relationships between facial pore size, the severity of impairment of epidermal architecture around facial pores and sebum output levels to investigate the possible role of IGF-1 in the pathogenesis of conspicuous facial pores. The subjects consisted of 38 healthy Japanese women (aged 22-41 years). IGF-1 was measured using an immunoradiometric assay. Surface replicas were collected to compare pore sizes of cheek skin, and horizontal cross-section images of cheek skin were obtained non-invasively from the same subjects using in vivo confocal laser scanning microscopy, from which the severity of impairment of epidermal architecture around facial pores was determined. The skin surface lipids of each subject were collected from their cheeks and lipid classes were determined using gas chromatography/flame ionization detection. The serum level of IGF-1 correlated significantly with total pore area (R = 0.36), with the severity of impairment of epidermal architecture around facial pores (R = 0.43), and with pore area (R = 0.32). These findings indicate that serum IGF-1 levels are related to facial skin characteristics, including facial pore size and the severity of impairment of epidermal architecture around facial pores. © 2010 The Authors. Journal compilation © 2010 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  9. Forensic Facial Reconstruction: The Final Frontier.

    Science.gov (United States)

    Gupta, Sonia; Gupta, Vineeta; Vij, Hitesh; Vij, Ruchieka; Tyagi, Nutan

    2015-09-01

    Forensic facial reconstruction can be used to identify unknown human remains when other techniques fail. Through this article, we attempt to review the different methods of facial reconstruction reported in literature. There are several techniques of doing facial reconstruction, which vary from two-dimensional drawings to three-dimensional clay models. With the advancement in 3D technology, a rapid, efficient and cost-effective computerized 3D forensic facial reconstruction method has been developed which has brought down the degree of error previously encountered. There are several methods of manual facial reconstruction but the combination Manchester method has been reported to be the best and most accurate method for the positive recognition of an individual. Recognition allows the involved government agencies to make a list of suspected victims. This list can then be narrowed down and a positive identification may be given by the more conventional method of forensic medicine. Facial reconstruction allows visual identification by the individual's family and associates to become easy and more definite.

  10. Magnetic resonance imaging of facial muscles

    Energy Technology Data Exchange (ETDEWEB)

    Farrugia, M.E. [Department of Clinical Neurology, University of Oxford, Radcliffe Infirmary, Oxford (United Kingdom)], E-mail: m.e.farrugia@doctors.org.uk; Bydder, G.M. [Department of Radiology, University of California, San Diego, CA 92103-8226 (United States); Francis, J.M.; Robson, M.D. [OCMR, Department of Cardiovascular Medicine, University of Oxford, John Radcliffe Hospital, Oxford (United Kingdom)

    2007-11-15

    Facial and tongue muscles are commonly involved in patients with neuromuscular disorders. However, these muscles are not as easily accessible for biopsy and pathological examination as limb muscles. We have previously investigated myasthenia gravis patients with MuSK antibodies for facial and tongue muscle atrophy using different magnetic resonance imaging sequences, including ultrashort echo time techniques and image analysis tools that allowed us to obtain quantitative assessments of facial muscles. This imaging study had shown that facial muscle measurement is possible and that useful information can be obtained using a quantitative approach. In this paper we aim to review in detail the methods that we applied to our study, to enable clinicians to study these muscles within the domain of neuromuscular disease, oncological or head and neck specialties. Quantitative assessment of the facial musculature may be of value in improving the understanding of pathological processes occurring within facial muscles in certain neuromuscular disorders.

  11. Magnetic resonance imaging of facial muscles

    International Nuclear Information System (INIS)

    Farrugia, M.E.; Bydder, G.M.; Francis, J.M.; Robson, M.D.

    2007-01-01

    Facial and tongue muscles are commonly involved in patients with neuromuscular disorders. However, these muscles are not as easily accessible for biopsy and pathological examination as limb muscles. We have previously investigated myasthenia gravis patients with MuSK antibodies for facial and tongue muscle atrophy using different magnetic resonance imaging sequences, including ultrashort echo time techniques and image analysis tools that allowed us to obtain quantitative assessments of facial muscles. This imaging study had shown that facial muscle measurement is possible and that useful information can be obtained using a quantitative approach. In this paper we aim to review in detail the methods that we applied to our study, to enable clinicians to study these muscles within the domain of neuromuscular disease, oncological or head and neck specialties. Quantitative assessment of the facial musculature may be of value in improving the understanding of pathological processes occurring within facial muscles in certain neuromuscular disorders

  12. Perceived functional impact of abnormal facial appearance.

    Science.gov (United States)

    Rankin, Marlene; Borah, Gregory L

    2003-06-01

    Functional facial deformities are usually described as those that impair respiration, eating, hearing, or speech. Yet facial scars and cutaneous deformities have a significant negative effect on social functionality that has been poorly documented in the scientific literature. Insurance companies are declining payments for reconstructive surgical procedures for facial deformities caused by congenital disabilities and after cancer or trauma operations that do not affect mechanical facial activity. The purpose of this study was to establish a large, sample-based evaluation of the perceived social functioning, interpersonal characteristics, and employability indices for a range of facial appearances (normal and abnormal). Adult volunteer evaluators (n = 210) provided their subjective perceptions based on facial physical appearance, and an analysis of the consequences of facial deformity on parameters of preferential treatment was performed. A two-group comparative research design rated the differences among 10 examples of digitally altered facial photographs of actual patients among various age and ethnic groups with "normal" and "abnormal" congenital deformities or posttrauma scars. Photographs of adult patients with observable congenital and posttraumatic deformities (abnormal) were digitally retouched to eliminate the stigmatic defects (normal). The normal and abnormal photographs of identical patients were evaluated by the large sample study group on nine parameters of social functioning, such as honesty, employability, attractiveness, and effectiveness, using a visual analogue rating scale. Patients with abnormal facial characteristics were rated as significantly less honest (p = 0.007), less employable (p = 0.001), less trustworthy (p = 0.01), less optimistic (p = 0.001), less effective (p = 0.02), less capable (p = 0.002), less intelligent (p = 0.03), less popular (p = 0.001), and less attractive (p = 0.001) than were the same patients with normal facial characteristics.

  13. Possibilities of physiotherapy in facial nerve paresis

    OpenAIRE

    ZIFČÁKOVÁ, Šárka

    2015-01-01

    The bachelor thesis addresses paresis of the facial nerve. Facial nerve paresis is a rather common illness, which often cannot be cured without consequences despite all the modern treatments. The paresis of the facial nerve occurs in two forms, central and peripheral. A central paresis is a result of a lesion located above the motor nucleus of the facial nerve. A peripheral paresis is caused by a lesion located either in the location of the motor nucleus or in the course of the facial nerve.

  14. Facial colliculus syndrome

    Directory of Open Access Journals (Sweden)

    Rupinderjeet Kaur

    2016-01-01

    Full Text Available A male patient presented with horizontal diplopia and conjugate gaze palsy. Magnetic resonance imaging (MRI) revealed an acute infarct in the right facial colliculus, which is an anatomical elevation on the dorsal aspect of the pons. This elevation is due to the 6th cranial nerve nucleus and the motor fibres of the facial nerve, which loop dorsal to this nucleus. Anatomical correlation of the clinical symptoms is also depicted in this report.

  15. Cranio-facial clefts in pre-hispanic America.

    Science.gov (United States)

    Marius-Nunez, A L; Wasiak, D T

    2015-10-01

    Among the representations of congenital malformations in Moche ceramic art, cranio-facial clefts have been portrayed in pottery found in Moche burials. These pottery vessels were used as domestic items during lifetime and funerary offerings upon death. The aim of this study was to examine archeological evidence for representations of cranio-facial cleft malformations in Moche vessels. Pottery depicting malformations of the midface in Moche collections in Lima, Peru, was studied. The malformations portrayed on pottery were analyzed using the Tessier classification. Photographs were authorized by the Larco Museo. Three vessels were observed to have median cranio-facial dysraphia in association with midline cleft of the lower lip with cleft of the mandible. ML001489 portrays a median cranio-facial dysraphia with an orbital cleft and a midline cleft of the lower lip extending to the mandible. ML001514 represents a median facial dysraphia in association with an orbital facial cleft and a vertical orbital dystopia. ML001491 illustrates a median facial cleft with a soft tissue cleft. Three cases of midline, orbital and lateral facial clefts have been portrayed in Moche full-figure portrait vessels. They represent the earliest registries of congenital cranio-facial malformations in ancient Peru. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Unmasking Zorro: functional importance of the facial mask in the Masked Shrike (Lanius nubicus)

    OpenAIRE

    Reuven Yosef; Piotr Zduniak; Piotr Tryjanowski

    2012-01-01

    The facial mask is a prominent feature in the animal kingdom. We hypothesized that the facial mask of shrikes allows them to hunt into the sun, which accords them detection and surprise-attack capabilities. We conducted a field experiment to determine whether the mask facilitated foraging while facing into the sun. Male shrikes with white-painted masks hunted facing away from the sun more than birds with black-painted masks, which are the natural color, and more than individuals in the control group.

  17. Real Time Facial Expression Recognition Using Webcam and SDK Affectiva

    Directory of Open Access Journals (Sweden)

    Martin Magdin

    2018-06-01

    Full Text Available Facial expression is an essential part of communication. For this reason, the issue of evaluating human emotions with a computer is a very interesting topic, which has gained more and more attention in recent years. It is mainly related to the possibility of applying facial expression recognition in many fields such as HCI, video games, virtual reality, and analysing customer satisfaction, etc. Emotion determination (recognition) is often performed in 3 basic phases: face detection, facial feature extraction, and the last stage - expression classification. Most often one encounters the so-called Ekman classification of 6 emotional expressions (or 7, including the neutral expression), as well as other types of classification - the Russell circular model, which contains up to 24, or Plutchik's Wheel of Emotions. The methods used in the three phases of the recognition process have not only improved over the last 60 years, but new methods and algorithms have also emerged that can outperform the Viola-Jones detector in both accuracy and computational demands. Therefore, there are currently various solutions in the form of a Software Development Kit (SDK). In this publication, we present the design and creation of our system for real-time emotion classification. Our intention was to create a system that would use all three phases of the recognition process and work fast and stably in real time. That is why we decided to take advantage of the existing Affectiva SDK. Using a standard webcam, we can detect facial landmarks in the image automatically using the Software Development Kit (SDK) from Affectiva. A geometric feature-based approach is used for feature extraction. The distance between landmarks is used as a feature, and for selecting an optimal set of features, the brute force method is used. The proposed system uses a neural network algorithm for classification. The proposed system recognizes 6 (respectively 7) facial expressions.
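
    Two concrete ingredients are named above: pairwise landmark distances as features and brute-force selection of an optimal feature subset. The sketch below illustrates both under stated assumptions (a toy landmark set, a K-nearest-neighbor stand-in for the paper's neural network, and 3-fold cross-validation as the selection criterion); it is not the authors' code.

    # Hedged sketch: pairwise landmark distances as features, plus brute-force
    # search over small feature subsets. Data and classifier are illustrative.
    from itertools import combinations
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    def pairwise_distances(landmarks):
        """landmarks: (n_samples, n_points, 2) -> (n_samples, n_pairs) distances."""
        n_pts = landmarks.shape[1]
        pairs = list(combinations(range(n_pts), 2))
        return np.stack([np.linalg.norm(landmarks[:, i] - landmarks[:, j], axis=1)
                         for i, j in pairs], axis=1)

    def brute_force_select(X, y, subset_size=3):
        """Exhaustively score every feature subset of the given size."""
        clf = KNeighborsClassifier(n_neighbors=3)
        best_score, best_subset = -1.0, None
        for subset in combinations(range(X.shape[1]), subset_size):
            score = cross_val_score(clf, X[:, list(subset)], y, cv=3).mean()
            if score > best_score:
                best_score, best_subset = score, subset
        return best_subset, best_score

    rng = np.random.default_rng(2)
    X = pairwise_distances(rng.normal(size=(30, 6, 2)))   # 6 toy landmarks -> 15 distances
    y = rng.integers(0, 3, size=30)                       # toy expression labels
    print(brute_force_select(X, y))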

  18. Tuberous Sclerosis Complex in 29 Children: Clinical and Genetic Analysis and Facial Angiofibroma Responses to Topical Sirolimus.

    Science.gov (United States)

    Wang, Senfen; Liu, Yuanxiang; Wei, Jinghai; Zhang, Jian; Wang, Zhaoyang; Xu, Zigang

    2017-09-01

    Tuberous sclerosis complex (TSC) is a genetic disorder and facial angiofibromas are disfiguring facial lesions. The aim of this study was to analyze the clinical and genetic features of TSC and to assess the treatment of facial angiofibromas using topical sirolimus in Chinese children. Information was collected on 29 patients with TSC. Genetic analyses were performed in 12 children and their parents. Children were treated with 0.1% sirolimus ointment for 36 weeks. Clinical efficacy and plasma sirolimus concentrations were evaluated at baseline and 12, 24, and 36 weeks. Twenty-seven (93%) of the 29 patients had hypomelanotic macules and 15 (52%) had shagreen patch; 11 of the 12 (92%) who underwent genetic analysis had gene mutations in the TSC1 or TSC2 gene. Twenty-four children completed 36 weeks of treatment with topical sirolimus; facial angiofibromas were clinically undetectable in four (17%). The mean decrease in the Facial Angiofibroma Severity Index (FASI) score at 36 weeks was 47.6 ± 30.4%. There was no significant difference in the FASI score between weeks 24 and 36 (F = 1.00, p = 0.33). There was no detectable systemic absorption of sirolimus. Hypomelanotic macules are often the first sign of TSC. Genetic testing has a high detection rate in patients with a clinical diagnosis of TSC. Topical sirolimus appears to be both effective and well-tolerated as a treatment of facial angiofibromas in children with TSC. The response typically plateaus after 12 to 24 weeks of treatment. © 2017 Wiley Periodicals, Inc.

  19. Facial expression primes and implicit regulation of negative emotion.

    Science.gov (United States)

    Yoon, HeungSik; Kim, Shin Ah; Kim, Sang Hee

    2015-06-17

    An individual's responses to emotional information are influenced not only by the emotional quality of the information, but also by the context in which the information is presented. We hypothesized that facial expressions of happiness and anger would serve as primes to modulate subjective and neural responses to subsequently presented negative information. To test this hypothesis, we conducted a functional MRI study in which the brains of healthy adults were scanned while they performed an emotion-rating task. During the task, participants viewed a series of negative and neutral photos, one at a time; each photo was presented after a picture showing a face expressing a happy, angry, or neutral emotion. Brain imaging results showed that compared with neutral primes, happy facial primes increased activation during negative emotion in the dorsal anterior cingulate cortex and the right ventrolateral prefrontal cortex, which are typically implicated in conflict detection and implicit emotion control, respectively. Conversely, relative to neutral primes, angry primes activated the right middle temporal gyrus and the left supramarginal gyrus during the experience of negative emotion. Activity in the amygdala in response to negative emotion was marginally reduced after exposure to happy primes compared with angry primes. Relative to neutral primes, angry facial primes increased the subjectively experienced intensity of negative emotion. The current study results suggest that prior exposure to facial expressions of emotions modulates the subsequent experience of negative emotion by implicitly activating the emotion-regulation system.

  20. [Surgical treatment in otogenic facial nerve palsy].

    Science.gov (United States)

    Feng, Guo-Dong; Gao, Zhi-Qiang; Zhai, Meng-Yao; Lü, Wei; Qi, Fang; Jiang, Hong; Zha, Yang; Shen, Peng

    2008-06-01

    To study the characteristics of facial nerve palsy due to four different ear diseases, including chronic otitis media, Hunt syndrome, tumor, and physical or chemical factors, and to discuss the principles of the surgical management of otogenic facial nerve palsy. The clinical characteristics of 24 patients with otogenic facial nerve palsy caused by the four different ear diseases were retrospectively analyzed; all cases underwent surgical management from October 1991 to March 2007. Facial nerve function was evaluated with the House-Brackmann (HB) grading system. The 24 patients, including 10 males and 14 females, were analyzed: 12 cases were due to cholesteatoma, 3 to chronic otitis media, 3 to Hunt syndrome, 2 resulted from acute otitis media, 2 were due to physical or chemical factors and 2 were due to tumor. All cases were treated with operations that included facial nerve decompression, lesion resection with facial nerve decompression, and lesion resection without facial nerve decompression; 1 patient's facial nerve was resected because of the tumor. According to the HB grading system, grade I recovery was attained in 4 cases, grade II in 10 cases, grade III in 6 cases, grade IV in 2 cases, grade V in 2 cases and grade VI in 1 case. Removing the lesions completely was the basic factor in the surgical management of otogenic facial palsy; moreover, it was important to perform facial nerve decompression soon after lesion removal.

  1. Facial identity and facial expression are initially integrated at visual perceptual stages of face processing.

    Science.gov (United States)

    Fisher, Katie; Towler, John; Eimer, Martin

    2016-01-08

    It is frequently assumed that facial identity and facial expression are analysed in functionally and anatomically distinct streams within the core visual face processing system. To investigate whether expression and identity interact during the visual processing of faces, we employed a sequential matching procedure where participants compared either the identity or the expression of two successively presented faces, and ignored the other irrelevant dimension. Repetitions versus changes of facial identity and expression were varied independently across trials, and event-related potentials (ERPs) were recorded during task performance. Irrelevant facial identity and irrelevant expression both interfered with performance in the expression and identity matching tasks. These symmetrical interference effects show that neither identity nor expression can be selectively ignored during face matching, and suggest that they are not processed independently. N250r components to identity repetitions that reflect identity matching mechanisms in face-selective visual cortex were delayed and attenuated when there was an expression change, demonstrating that facial expression interferes with visual identity matching. These findings provide new evidence for interactions between facial identity and expression within the core visual processing system, and question the hypothesis that these two attributes are processed independently. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Perceptually Valid Facial Expressions for Character-Based Applications

    Directory of Open Access Journals (Sweden)

    Ali Arya

    2009-01-01

    Full Text Available This paper addresses the problem of creating facial expressions of mixed emotions in a perceptually valid way. The research has been done in the context of “game-like” health and education applications aimed at studying social competency and facial expression awareness in autistic children, as well as native language learning, but the results can be applied to many other applications such as games that need dynamic facial expressions or tools for automating the creation of facial animations. Most existing methods for creating facial expressions of mixed emotions use operations like averaging to create the combined effect of two universal emotions. Such methods may be mathematically justifiable but are not necessarily valid from a perceptual point of view. The research reported here starts with user experiments aimed at understanding how people combine facial actions to express mixed emotions, and how viewers perceive a set of facial actions in terms of underlying emotions. Using the results of these experiments and a three-dimensional emotion model, we associate facial actions with dimensions and regions in the emotion space, and create a facial expression based on the location of the mixed emotion in the three-dimensional space. We call these regionalized facial actions “facial expression units.”

  3. What's behind the mask? A look at blood flow changes with prolonged facial pressure and expression using laser Doppler imaging.

    Science.gov (United States)

    Van-Buendia, Lan B; Allely, Rebekah R; Lassiter, Ronald; Weinand, Christian; Jordan, Marion H; Jeng, James C

    2010-01-01

    Clinically, the initial blanching in burn scar seen on transparent plastic face mask application seems to diminish with time and movement, requiring mask alteration. To date, studies quantifying perfusion with prolonged mask use do not exist. This study used laser Doppler imaging (LDI) to assess perfusion through the transparent face mask and movement in subjects with and without burn over time. Five subjects fitted with transparent face masks were scanned with the LDI on four occasions. The four subjects without burn were scanned in the following manner: 1) no mask, 2) mask on while at rest, 3) mask on with alternating intervals of sustained facial expression and rest, and 4) after mask removal. Images were acquired every 3 minutes throughout the 85-minute study period. The subject with burn underwent a shortened scanning protocol to increase comfort. Each face was divided into five regions of interest for analysis. Compared with baseline, mask application decreased perfusion significantly in all subjects. After mask removal, all regions of the face demonstrated a hyperemic effect, including the chin (P = .05) and each cheek. Perfusion remained consistently low while wearing the face mask, despite changing facial expressions. Changing facial expressions with the mask on did not alter perfusion. A hyperemic response occurs on removal of the mask. This study exposed methodology and statistical issues worth considering when conducting future research with the face, pressure therapy, and LDI technology.

  4. Human Response to Ductless Personalised Ventilation: Impact of Air Movement, Temperature and Cleanness on Eye Symptoms

    DEFF Research Database (Denmark)

    Dalewski, Mariusz; Fillon, Maelys; Bivolarova, Maria

    2013-01-01

    The performance of ductless personalized ventilation (DPV) in conjunction with displacement ventilation (DV) was studied in relation to people's health, comfort and performance. This paper presents results on the impact of room air temperature, use of DPV and local air filtration on eye blink ... environment, facially applied individually controlled air movement of room air, with or without local filtering, did not have significant impact on eye blink frequency and tear film quality. The local air movement and air cleaning resulted in increased eye blinking frequency and improvement of tear film ...

  5. Spatio-Temporal Pain Recognition in CNN-based Super-Resolved Facial Images

    DEFF Research Database (Denmark)

    Bellantonio, Marco; Haque, Mohammad Ahsanul; Rodriguez, Pau

    2017-01-01

    Automatic pain detection is a long-expected solution to the prevalent medical problem of pain management. This is all the more relevant when those in pain are young children or patients with a limited ability to communicate about their pain experience. Computer vision-based analysis of facial pain

  6. Intraoperative facial motor evoked potentials monitoring with transcranial electrical stimulation for preservation of facial nerve function in patients with large acoustic neuroma

    Institute of Scientific and Technical Information of China (English)

    LIU Bai-yun; TIAN Yong-ji; LIU Wen; LIU Shu-ling; QIAO Hui; ZHANG Jun-ting; JIA Gui-jun

    2007-01-01

    Background Although various monitoring techniques have been used routinely in the treatment of lesions in the skull base, iatrogenic facial paresis or paralysis remains a significant clinical problem. The aim of this study was to investigate the effect of intraoperative facial motor evoked potentials monitoring with transcranial electrical stimulation on preservation of facial nerve function. Method From January to November 2005, 19 patients with large acoustic neuroma were treated using intraoperative facial motor evoked potentials monitoring with transcranial electrical stimulation (TCEMEP) for preservation of facial nerve function. The relationship between the decrease of MEP amplitude after tumor removal and the postoperative function of the facial nerve was analyzed. Results MEP amplitude decreased more than 75% in 11 patients, of whom 6 presented significant facial paralysis (H-B grade 3) and 5 had mild facial paralysis (H-B grade 2). In the other 8 patients, whose MEP amplitude decreased less than 75%, 1 experienced significant facial paralysis, 5 had mild facial paralysis, and 2 were normal. Conclusions Intraoperative TCEMEP can be used to predict postoperative function of the facial nerve. A decrease in MEP amplitude of more than 75% is an alarm point for possible severe facial paralysis.
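
    The alarm rule stated in the conclusion reduces to a simple percentage calculation; the tiny sketch below makes it explicit, with illustrative amplitude values.

    # Tiny sketch of the stated alarm rule: flag possible severe postoperative
    # facial paralysis when the facial MEP amplitude falls by more than 75% of
    # its baseline value after tumor removal. Amplitudes here are illustrative.
    def mep_amplitude_drop(baseline_uv, post_removal_uv):
        """Percent decrease of MEP amplitude relative to baseline."""
        return 100.0 * (baseline_uv - post_removal_uv) / baseline_uv

    def severe_palsy_alarm(baseline_uv, post_removal_uv, threshold=75.0):
        return mep_amplitude_drop(baseline_uv, post_removal_uv) > threshold

    print(mep_amplitude_drop(200.0, 40.0), severe_palsy_alarm(200.0, 40.0))  # 80.0 True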

  7. Role of temporal processing stages by inferior temporal neurons in facial recognition

    Directory of Open Access Journals (Sweden)

    Yasuko eSugase-Miyamoto

    2011-06-01

    Full Text Available In this review, we focus on the role of temporal stages of encoded facial information in the visual system, which might enable the efficient determination of species, identity, and expression. Facial recognition is an important function of our brain and is known to be processed in the ventral visual pathway, where visual signals are processed through areas V1, V2, V4, and the inferior temporal (IT) cortex. In the IT cortex, neurons show selective responses to complex visual images such as faces, and at each stage along the pathway the stimulus selectivity of the neural responses becomes sharper, particularly in the later portion of the responses. In the IT cortex of the monkey, facial information is represented by different temporal stages of neural responses, as shown in our previous study: the initial transient response of face-responsive neurons represents information about global categories, i.e., human vs. monkey vs. simple shapes, whilst the later portion of these responses represents information about detailed facial categories, i.e., expression and/or identity. This suggests that the temporal stages of the neuronal firing pattern play an important role in the coding of visual stimuli, including faces. This type of coding may be a plausible mechanism underlying the temporal dynamics of recognition, including the process of detection/categorization followed by the identification of objects. Recent single-unit studies in monkeys have also provided evidence consistent with the important role of the temporal stages of encoded facial information. For example, view-invariant facial identity information is represented in the response at a later period within a region of face-selective neurons. Consistent with these findings, temporally modulated neural activity has also been observed in human studies. These results suggest a close correlation between the temporal processing stages of facial information by IT neurons and the temporal dynamics of

  8. What do facial expressions of emotion express in young children? The relationship between facial display and EMG measures

    Directory of Open Access Journals (Sweden)

    Michela Balconi

    2014-04-01

    Full Text Available The present paper explored the relationship between emotional facial responses and electromyographic modulation in children when they observe facial expressions of emotion. Facial responsiveness (evaluated by arousal and valence ratings) and psychophysiological correlates (facial electromyography, EMG) were analyzed while children looked at six facial expressions of emotion (happiness, anger, fear, sadness, surprise and disgust). For the EMG measure, corrugator and zygomatic muscle activity was monitored in response to the different emotion types. ANOVAs showed differences for both EMG and facial responses across the subjects as a function of the different emotions. Specifically, some emotions were well expressed by all the subjects (such as happiness, anger and fear) in terms of high arousal, whereas some others elicited lower arousal (such as sadness). Zygomatic activity increased mainly for happiness, while corrugator activity increased mainly for anger, fear and surprise. More generally, EMG and facial behavior were highly correlated with each other, showing a “mirror” effect with respect to the observed faces.
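
    The reported “mirror” effect is a correlation between the EMG measures and the observers' facial responses; a minimal illustration of that computation (Pearson correlation between, e.g., zygomatic EMG amplitude and rated arousal across trials) is sketched below with invented numbers.

    # Toy illustration of the correlation analysis mentioned above: Pearson
    # correlation between an EMG measure and a facial-response rating across
    # trials. The values are invented for illustration only.
    import numpy as np

    zygomatic_emg = np.array([0.8, 1.2, 0.4, 1.5, 0.9, 1.1])    # e.g., mean amplitude per trial
    arousal_rating = np.array([5.0, 7.0, 3.0, 8.0, 6.0, 6.5])   # e.g., rated arousal per trial

    r = np.corrcoef(zygomatic_emg, arousal_rating)[0, 1]
    print(f"Pearson r = {r:.2f}")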

  9. Facial Expression Recognition Through Machine Learning

    Directory of Open Access Journals (Sweden)

    Nazia Perveen

    2015-08-01

    Full Text Available Facial expressions communicate non-verbal cues which play an important role in interpersonal relations. Automatic recognition of facial expressions can be an important element of natural human-machine interfaces; it might likewise be utilized in behavioral science and in clinical practice. Although people perceive facial expressions virtually instantaneously, robust expression recognition by machine is still a challenge. From the point of view of automatic recognition, a facial expression can be considered to comprise deformations of the facial parts and their spatial relations, or changes in the face's pigmentation. Research into automatic recognition of facial expressions addresses the issues surrounding the representation and classification of static or dynamic characteristics of these deformations or of face pigmentation. We obtained results by utilizing CVIPtools. We took a training data set of six facial expressions from three persons, with a total of 90 border mask samples for training and 30 border mask samples for testing, and we used RST-invariant features and texture features for feature analysis, which were then classified using the k-nearest neighbor classification algorithm. The maximum accuracy is 90.
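
    The abstract names RST-invariant and texture features computed with CVIPtools but does not list them. As one illustration, Hu moments are a standard descriptor that is invariant to rotation, scale and translation; the sketch below computes them with OpenCV on a synthetic border-mask image. This is an assumed stand-in, not necessarily the features actually used in the study.

    # Hu moments as an example of RST-invariant features (assumed stand-in).
    import cv2
    import numpy as np

    def rst_invariant_features(mask):
        """mask: 2-D uint8 border-mask image. Returns 7 log-scaled Hu moments."""
        hu = cv2.HuMoments(cv2.moments(mask, binaryImage=True)).flatten()
        # Log-scale for numerical range, preserving sign.
        return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

    toy = np.zeros((64, 64), dtype=np.uint8)
    cv2.ellipse(toy, (32, 32), (20, 10), 30, 0, 360, 255, -1)   # a synthetic blob
    print(rst_invariant_features(toy))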

  10. Multimodal movement prediction - towards an individual assistance of patients.

    Directory of Open Access Journals (Sweden)

    Elsa Andrea Kirchner

    Full Text Available Assistive devices, like exoskeletons or orthoses, often make use of physiological data that allow the detection or prediction of movement onset. Movement onset can be detected at the executing site, the skeletal muscles, e.g., by means of electromyography. Movement intention can be detected by the analysis of brain activity, recorded by, e.g., electroencephalography, or in the behavior of the subject by, e.g., eye movement analysis. These different approaches can be used depending on the kind of neuromuscular disorder, the state of therapy or the assistive device. In this work we conducted experiments with healthy subjects while they performed self-initiated and self-paced arm movements. While other studies showed that multimodal signal analysis can improve the performance of predictions, we show that a sensible combination of electroencephalographic and electromyographic data can potentially improve the adaptability of assistive technical devices with respect to the individual demands of, e.g., early and late stages in rehabilitation therapy. In earlier stages, for patients with weak muscle or motor-related brain activity, it is important to achieve high positive detection rates to support self-initiated movements. Detecting most movement intentions from electroencephalographic or electromyographic data motivates a patient and can enhance her/his progress in rehabilitation. In a later stage, for patients with stronger muscle or brain activity, reliable movement prediction is more important to encourage patients to behave more accurately and to invest more effort in the task. Further, the false detection rate needs to be reduced. We propose that both types of physiological data can be used in an AND combination, in which both signals must be detected to drive a movement. By this approach the behavior of the patient during later therapy can be controlled better and false positive detections, which can be very annoying for patients who are further advanced in
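
    The proposed AND combination is easy to make concrete: a movement command is issued only when the EEG-based and the EMG-based detectors fire in the same analysis window, as in the sketch below. The detector scores and thresholds are illustrative assumptions.

    # Minimal sketch of the "AND" combination argued for above: a movement
    # command is issued only when both the EEG-based and the EMG-based detectors
    # fire in the same analysis window. Thresholds and scores are illustrative.
    from dataclasses import dataclass

    @dataclass
    class WindowScores:
        eeg: float   # e.g., classifier output on pre-movement brain activity
        emg: float   # e.g., rectified muscle-activity level

    def and_combination(window, eeg_thresh=0.7, emg_thresh=0.5):
        """True only if both modalities signal an intended movement."""
        return window.eeg >= eeg_thresh and window.emg >= emg_thresh

    stream = [WindowScores(0.9, 0.2),   # brain activity only -> no command
              WindowScores(0.4, 0.8),   # muscle activity only -> no command
              WindowScores(0.8, 0.9)]   # both -> drive the assistive device
    for i, w in enumerate(stream):
        print(i, "move" if and_combination(w) else "hold")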

  11. Aortic stentgraft movement detection using digital roentgen stereophotogrammetric analysis on plane film radiographs - initial results of a phantom study

    International Nuclear Information System (INIS)

    Georg, C.; Welker, V.; Eidam, H.; Alfke, H.

    2005-01-01

    Purpose: To evaluate the feasibility of aortic stentgraft micromovement detection using digital roentgen stereophotogrammetric analysis on plane film radiographs. Material and Methods: An aortic stentgraft used for demonstration purposes was marked with 10 tantalum markers of 0.8 mm in diameter. The stentgraft was placed on a Plexiglas phantom with 5 tantalum markers of 1 mm in diameter simulating a fixed segment needed for mathematical analysis. In a subsequent step, the stentgraft was placed onto an orthopaedic spine model to simulate in vivo conditions. Two radiographs taken simultaneously from different angles were used to simulate different stentgraft movements, e.g. translation, angulation, aortic pulsation and migration in the spine model. Movement of the stentgraft markers was analysed using a commercially available digital RSA setup (UmRSA® 4.1, RSA Biomedical, Umea, Sweden). Results: Our study shows the feasibility of measuring aortic stentgraft movement and changes in stentgraft shape in the submillimeter range using digital roentgen stereophotogrammetric analysis. Translation along the 3 cardinal axes, change in stentgraft shape, simulation of aortic pulsation and simulation of in vivo conditions could be described precisely. Conclusion: Aortic stentgraft movement detection using digital roentgen stereophotogrammetric analysis on plane film radiographs is a very promising, precise method. (orig.)
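
    The geometric core of roentgen stereophotogrammetric analysis is triangulating each tantalum marker from the two simultaneous radiographs and differencing the reconstructed 3-D positions over time. The sketch below shows a generic linear (DLT) triangulation with assumed, already-calibrated projection matrices; it illustrates the principle only and is not the UmRSA software.

    # Hedged sketch of two-view marker triangulation; calibration is assumed known.
    import numpy as np

    def triangulate(P1, P2, uv1, uv2):
        """Linear (DLT) triangulation from two 3x4 projection matrices."""
        A = np.stack([uv1[0] * P1[2] - P1[0],
                      uv1[1] * P1[2] - P1[1],
                      uv2[0] * P2[2] - P2[0],
                      uv2[1] * P2[2] - P2[1]])
        _, _, vt = np.linalg.svd(A)
        X = vt[-1]
        return X[:3] / X[3]

    def project(P, X):
        x = P @ np.append(X, 1.0)
        return x[:2] / x[2]

    # Toy setup: two views of a marker that moves by 0.5 mm along z.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                    # reference view
    R = np.array([[np.cos(0.3), 0, np.sin(0.3)],
                  [0, 1, 0],
                  [-np.sin(0.3), 0, np.cos(0.3)]])
    P2 = np.hstack([R, np.array([[-20.0], [0.0], [0.0]])])           # rotated, shifted view
    X0, X1 = np.array([5.0, 3.0, 100.0]), np.array([5.0, 3.0, 100.5])
    before = triangulate(P1, P2, project(P1, X0), project(P2, X0))
    after = triangulate(P1, P2, project(P1, X1), project(P2, X1))
    print(np.round(after - before, 3))    # recovered marker displacement (mm)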

  12. Traumatic facial nerve palsy: CT patterns of facial nerve canal fracture and correlation with clinical severity

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jae Cheol; Kim, Sang Joon; Park, Hyun Min; Lee, Young Suk; Lee, Jee Young [College of Medicine, Dankook Univ., Chonan (Korea, Republic of)

    2002-07-01

    To analyse the patterns of facial nerve canal injury seen at temporal bone computed tomography (CT) in patients with traumatic facial nerve palsy and to correlate these with clinical manifestations and outcome. Thirty temporal bone CT examinations in 29 patients with traumatic facial nerve palsy were analyzed with regard to the patterns of facial nerve canal involvement. The patterns were correlated with clinical grade, the electroneurographic (ENoG) findings, and clinical outcome. For clinical grading, the House-Brackmann scale was used, as follows: grade I-IV, partial palsy group; grade V-VI, complete palsy group. The electroneurographic findings were categorized as mild to moderate (below 90%) or severe (90% and over) degeneration. In 25 cases, the bony wall of the facial nerve canal was involved directly (direct finding): discontinuity of the bony wall was noted in 22 cases, bony spicules in ten, and bony wall displacement in five. Indirect findings were canal widening in nine cases and adjacent bone fracture in two. In one case, there were no direct or indirect findings. All cases with complete palsy (n=8) showed one or more direct findings, including spicules in six, while in the incomplete palsy group (n=22), 17 cases showed direct findings. In the severe degeneration group (n=13) on ENoG, 12 cases demonstrated direct findings, including spicules in nine cases. In 24 patients, symptoms of facial palsy showed improvement at follow-up evaluation. Four of the five patients in whom symptoms did not improve had spicules. Among ten patients with spicules, five underwent surgery and symptoms improved in four of these; among the five patients not operated on, symptoms did not improve in three. In most patients with facial palsy after temporal bone injury, temporal bone CT revealed direct or indirect facial nerve canal involvement, and in the complete palsy or severe degeneration groups, there were direct findings in most cases. We believe that meticulous

  13. Intratemporal Facial Nerve Paralysis- A Three Year Study

    Directory of Open Access Journals (Sweden)

    Anirban Ghosh

    2016-08-01

    Full Text Available Introduction This study on intratemporal facial paralysis is an attempt to understand the aetiology of facial nerve paralysis, the effect of different management protocols and the outcome after long-term follow-up. Materials and Methods A prospective longitudinal study was conducted from September 2005 to August 2008 at the Department of Otorhinolaryngology of a medical college in Kolkata, comprising 50 patients with intratemporal facial palsy. All cases were periodically followed up for at least 6 months, and their prognostic outcome, along with the different treatment options, was analyzed. Result Among the different causes of facial palsy, Bell’s palsy was the commonest; cholesteatoma and granulation tissue were common findings in otogenic facial palsy. Traumatic facial palsies were exclusively due to longitudinal fractures of the temporal bone running through the geniculate ganglion. Herpes zoster oticus and neoplasia-related facial palsies had significantly poorer outcomes. Discussion Otogenic facial palsy showed excellent outcome after mastoid exploration and facial nerve decompression. Transcanal decompression was performed in traumatic facial palsies showing inadequate recovery. Complete removal of cholesteatoma over a dehiscent facial nerve gave better postoperative recovery. Conclusion The stapedial reflex test is the most objective and reproducible of all topodiagnostic tests. Return of the stapedial reflex within 3 weeks of injury indicates a good prognosis. Bell’s palsy responded well to conservative measures. All traumatic facial palsies were due to longitudinal fractures, and two-thirds of these patients showed a favourable outcome with medical therapy.

  14. A visible light imaging device for cardiac rate detection with reduced effect of body movement

    Science.gov (United States)

    Jiang, Xiaotian; Liu, Ming; Zhao, Yuejin

    2014-09-01

    A visible light imaging system to detect the human cardiac rate is proposed in this paper. A color camera and several LEDs acting as the lighting source were used to avoid interference from ambient light. The cardiac rate could be acquired from the subject's forehead based on photoplethysmography (PPG) theory. A template matching method was applied after video capture. The video signal was decomposed into three channels (RGB), a region of interest was chosen, and its average gray value was computed. The green channel provided an excellent pulse waveform owing to the strong absorption of green light by blood. Through the fast Fourier transform, the cardiac rate was obtained accurately. The goal, however, was not only to measure the cardiac rate: with the template matching method, the effects of body movement are reduced to a large extent, so the pulse wave can be detected even while subjects are moving, and the waveform is largely improved. Several experiments were conducted on volunteers, and the results were compared with those obtained by a finger-clamped pulse oximeter; the two methods are in close agreement. This method for detecting the cardiac rate and the pulse wave largely reduces the effects of body movement and could be widely used in the future.
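
    A minimal sketch of the frequency-domain step described above, assuming the per-frame mean green value of the forehead region of interest has already been extracted; the template matching stage that compensates for body movement is not reproduced, and the frame rate and pulse frequency are illustrative.

    ```python
    import numpy as np

    def heart_rate_from_green(green_means, fps):
        """Estimate cardiac rate (beats per minute) from the mean green-channel
        value of a forehead ROI in each video frame."""
        signal = np.asarray(green_means, dtype=float)
        signal = signal - signal.mean()                 # remove the DC component
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
        # restrict the search to a plausible band (0.7-4 Hz, i.e. 42-240 bpm)
        band = (freqs >= 0.7) & (freqs <= 4.0)
        peak_freq = freqs[band][np.argmax(spectrum[band])]
        return 60.0 * peak_freq

    # Synthetic example: a 1.2 Hz pulse (72 bpm) sampled at 30 fps for 10 s
    t = np.arange(0, 10, 1 / 30)
    roi_means = 120 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * np.random.randn(t.size)
    print(round(heart_rate_from_green(roi_means, fps=30)))     # ~72
    ```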

  15. Computer-analyzed facial expression as a surrogate marker for autism spectrum social core symptoms.

    Directory of Open Access Journals (Sweden)

    Keiho Owada

    Full Text Available To develop novel interventions for autism spectrum disorder (ASD) core symptoms, valid, reliable, and sensitive longitudinal outcome measures are required for detecting symptom change over time. Here, we tested whether a computerized analysis of quantitative facial expression measures could act as a marker for core ASD social symptoms. Facial expression intensity values during a semi-structured socially interactive situation extracted from the Autism Diagnostic Observation Schedule (ADOS) were quantified by dedicated software in 18 high-functioning adult males with ASD. Controls were 17 age-, gender-, parental socioeconomic background-, and intellectual level-matched typically developing (TD) individuals. Statistical analyses determined whether values representing the strength and variability of each facial expression element differed significantly between the ASD and TD groups and whether they correlated with ADOS reciprocal social interaction scores. Compared with the TD controls, facial expressions in the ASD group appeared more "Neutral" (d = 1.02, P = 0.005, PFDR < 0.05), with lower variability in Happy expression (d = 1.10, P = 0.003, PFDR < 0.05). Moreover, the stronger Neutral facial expressions in the ASD participants were positively correlated with poorer ADOS reciprocal social interaction scores (ρ = 0.48, P = 0.042). These findings indicate that our method for quantitatively measuring reduced facial expressivity during social interactions can be a promising marker for core ASD social symptoms.

  16. Computer-analyzed facial expression as a surrogate marker for autism spectrum social core symptoms.

    Science.gov (United States)

    Owada, Keiho; Kojima, Masaki; Yassin, Walid; Kuroda, Miho; Kawakubo, Yuki; Kuwabara, Hitoshi; Kano, Yukiko; Yamasue, Hidenori

    2018-01-01

    To develop novel interventions for autism spectrum disorder (ASD) core symptoms, valid, reliable, and sensitive longitudinal outcome measures are required for detecting symptom change over time. Here, we tested whether a computerized analysis of quantitative facial expression measures could act as a marker for core ASD social symptoms. Facial expression intensity values during a semi-structured socially interactive situation extracted from the Autism Diagnostic Observation Schedule (ADOS) were quantified by dedicated software in 18 high-functioning adult males with ASD. Controls were 17 age-, gender-, parental socioeconomic background-, and intellectual level-matched typically developing (TD) individuals. Statistical analyses determined whether values representing the strength and variability of each facial expression element differed significantly between the ASD and TD groups and whether they correlated with ADOS reciprocal social interaction scores. Compared with the TD controls, facial expressions in the ASD group appeared more "Neutral" (d = 1.02, P = 0.005, PFDR < 0.05), showed a further group difference in Neutral expression (d = 1.08, P = 0.003, PFDR < 0.05), and showed lower variability in Happy expression (d = 1.10, P = 0.003, PFDR < 0.05). Moreover, the stronger Neutral facial expressions in the ASD participants were positively correlated with poorer ADOS reciprocal social interaction scores (ρ = 0.48, P = 0.042). These findings indicate that our method for quantitatively measuring reduced facial expressivity during social interactions can be a promising marker for core ASD social symptoms.
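
    For readers who want to reproduce this kind of group comparison, the sketch below computes a pooled-variance Cohen's d and a Spearman correlation on synthetic data; the numbers are illustrative only and unrelated to the study's measurements.

    ```python
    import numpy as np
    from scipy import stats

    def cohens_d(a, b):
        """Cohen's d with a pooled standard deviation."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        pooled_var = ((a.size - 1) * a.var(ddof=1) + (b.size - 1) * b.var(ddof=1)) \
                     / (a.size + b.size - 2)
        return (a.mean() - b.mean()) / np.sqrt(pooled_var)

    # Hypothetical "Neutral" expression intensities for ASD and TD groups
    rng = np.random.default_rng(0)
    asd = rng.normal(0.62, 0.10, 18)
    td = rng.normal(0.50, 0.10, 17)
    print("d =", round(cohens_d(asd, td), 2))

    # Spearman correlation between Neutral intensity and a synthetic ADOS-like score
    ados = 2 * asd + rng.normal(0, 0.1, 18)
    rho, p = stats.spearmanr(asd, ados)
    print("rho =", round(rho, 2), "p =", round(p, 3))
    ```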

  17. Social Use of Facial Expressions in Hylobatids

    Science.gov (United States)

    Scheider, Linda; Waller, Bridget M.; Oña, Leonardo; Burrows, Anne M.; Liebal, Katja

    2016-01-01

    Non-human primates use various communicative means in interactions with others. While primate gestures are commonly considered to be intentionally and flexibly used signals, facial expressions are often referred to as inflexible, automatic expressions of affective internal states. To explore whether and how non-human primates use facial expressions in specific communicative interactions, we studied five species of small apes (gibbons) by employing a newly established Facial Action Coding System for hylobatid species (GibbonFACS). We found that, despite individuals often being in close proximity to each other, in social (as opposed to non-social) contexts the duration of facial expressions was significantly longer when gibbons were facing another individual compared to non-facing situations. Social contexts included grooming, agonistic interactions and play, whereas non-social contexts included resting and self-grooming. Additionally, gibbons used facial expressions while facing another individual more often in social contexts than in non-social contexts, where facial expressions were produced regardless of the attentional state of the partner. Also, facial expressions were more likely ‘responded to’ by the partner’s facial expressions when facing another individual than when not facing one. Taken together, our results indicate that gibbons use their facial expressions differentially depending on the social context and are able to use them in a directed way in communicative interactions with other conspecifics. PMID:26978660

  18. Enhanced MRI in patients with facial palsy

    International Nuclear Information System (INIS)

    Yanagida, Masahiro; Kato, Tsutomu; Ushiro, Koichi; Kitajiri, Masanori; Yamashita, Toshio; Kumazawa, Tadami; Tanaka, Yoshimasa

    1991-01-01

    We performed Gd-DTPA-enhanced magnetic resonance imaging (MRI) examinations at several stages in 40 patients with peripheral facial nerve palsy (Bell's palsy and Ramsay-Hunt syndrome). In 38 of the 40 patients, one or more enhanced regions could be seen in certain portions of the facial nerve in the temporal bone on the affected side, whereas no enhanced regions were seen on the intact side. Correlations between the timing of the MRI examination and the location of the enhanced regions were analysed. In all 6 patients examined by MRI within 5 days after the onset of facial nerve palsy, enhanced regions were present in the meatal portion. In 3 of the 8 patients (38%) examined by MRI 6 to 10 days after the onset of facial palsy, enhanced areas were seen in both the meatal and labyrinthine portions. In 8 of the 9 patients (89%) tested 11 to 20 days after the onset of palsy, the vertical portion was enhanced. In the 12 patients examined by MRI 21 to 40 days after the onset of facial nerve palsy, the meatal portion was not enhanced while the labyrinthine portion, the horizontal portion and the vertical portion were enhanced in 5 (42%), 8 (67%) and 11 (92%), respectively. Enhancement in the vertical portion was observed in all 5 patients examined more than 41 days after the onset of facial palsy. These results suggest that the central portion of the facial nerve in the temporal bone tends to be enhanced in the early stage of facial nerve palsy, while the peripheral portion is enhanced in the late stage. These changes in Gd-DTPA-enhanced regions in the facial nerve may suggest dromic degeneration of the facial nerve in peripheral facial nerve palsy. (author)

  19. Influence of gravity upon some facial signs.

    Science.gov (United States)

    Flament, F; Bazin, R; Piot, B

    2015-06-01

    Facial clinical signs and their integration are the basis of the perception that others form of us, notably the age they imagine we are. Objective measurement of facial changes in motion, before and after application of a skin-care regimen, is essential for extending our capacity to describe efficacy in facial dynamics. Quantifying facial modifications with respect to gravity allows us to address the 'control' of facial shape in daily activities. Standardized photographs of the faces of 30 Caucasian female subjects of various ages (24-73 years) were successively taken in upright and supine positions within a short time interval. All these pictures were then reframed - avoiding any bias due to facial features when evaluating a single sign - for clinical grading by trained experts of several facial signs against published standardized photographic scales. For all subjects, the supine position increased facial width but not height, giving a fuller appearance to the face. More importantly, the supine position changed the severity of facial ageing features (e.g. wrinkles) compared with the upright position, and whether these features were attenuated or exacerbated depended on their facial location. The supine position mostly modified signs of the lower half of the face, whereas those of the upper half appeared unchanged or slightly accentuated. These changes were much more marked in the older groups, where some deep labial folds almost vanished. These alterations decreased the perceived ages of the subjects by an average of 3.8 years. Although preliminary, this study suggests that a 90° rotation of the facial skin with respect to gravity induces rapid rearrangements, among which changes in tensional forces within and across the face, motility of interstitial free water in the underlying skin tissue and/or alterations of facial Langer lines likely play a significant role. © 2015 Society of Cosmetic Scientists and the Société Fran

  20. [Idiopathic facial paralysis in children].

    Science.gov (United States)

    Achour, I; Chakroun, A; Ayedi, S; Ben Rhaiem, Z; Mnejja, M; Charfeddine, I; Hammami, B; Ghorbel, A

    2015-05-01

    Idiopathic facial palsy is the most common cause of facial nerve palsy in children. Controversy exists regarding treatment options. The objectives of this study were to review the epidemiological and clinical characteristics as well as the outcome of idiopathic facial palsy in children to suggest appropriate treatment. A retrospective study was conducted on children with a diagnosis of idiopathic facial palsy from 2007 to 2012. A total of 37 cases (13 males, 24 females) with a mean age of 13.9 years were included in this analysis. The mean duration between onset of Bell's palsy and consultation was 3 days. Of these patients, 78.3% had moderately severe (grade IV) or severe paralysis (grade V on the House and Brackmann grading). Twenty-seven patients were treated in an outpatient context, three patients were hospitalized, and seven patients were treated as outpatients and subsequently hospitalized. All patients received corticosteroids. Eight of them also received antiviral treatment. The complete recovery rate was 94.6% (35/37). The duration of complete recovery was 7.4 weeks. Children with idiopathic facial palsy have a very good prognosis. The complete recovery rate exceeds 90%. However, controversy exists regarding treatment options. High-quality studies have been conducted on adult populations. Medical treatment based on corticosteroids alone or combined with antiviral treatment is certainly effective in improving facial function outcomes in adults. In children, the recommendation for prescription of steroids and antiviral drugs based on adult treatment appears to be justified. Randomized controlled trials in the pediatric population are recommended to define a strategy for management of idiopathic facial paralysis. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  1. Reconstruction of facial nerve injuries in children.

    Science.gov (United States)

    Fattah, Adel; Borschel, Gregory H; Zuker, Ron M

    2011-05-01

    Facial nerve trauma is uncommon in children, and many spontaneously recover some function; nonetheless, loss of facial nerve activity leads to functional impairment of ocular and oral sphincters and nasal orifice. In many cases, the impediment posed by facial asymmetry and reduced mimetic function more significantly affects the child's psychosocial interactions. As such, reconstruction of the facial nerve affords great benefits in quality of life. The therapeutic strategy is dependent on numerous factors, including the cause of facial nerve injury, the deficit, the prognosis for recovery, and the time elapsed since the injury. The options for treatment include a diverse range of surgical techniques including static lifts and slings, nerve repairs, nerve grafts and nerve transfers, regional, and microvascular free muscle transfer. We review our strategies for addressing facial nerve injuries in children.

  2. Vibration analysis method for detection of abnormal movement of material in a rotary dissolver

    International Nuclear Information System (INIS)

    Smith, C.M.; Fry, D.N.

    1978-11-01

    Vibration signals generated by the movement of simulated nuclear fuel material through a three-stage, continuous, rotary dissolver were frequency analyzed to determine whether these signals contained characteristic signal patterns that would identify each of five phases of operation in the dissolver and, thus, would indicate the proper movement of material through the dissolver. This characterization of the signals is the first step in the development of a system for monitoring the flow of material through a dissolver to be developed for reprocessing spent nuclear fuel. Vibration signals from accelerometers mounted on the dissolver roller supports were analyzed in a bandwidth from 0 to 10 kHz. The analysis established that (1) all five phases of dissolver operation can be characterized by vibration signatures; (2) four of the five phases of operation can be readily and directly identified by a characteristic vibration signature during continuous, prototypic operation; (3) the transfer of material from the inlet to the dissolution stage can be indirectly monitored by one of the other four vibration signatures (the mixing signature) during prototypic operation; (4) a simulated blockage between the dissolution and exit stages can be detected by changes in one or more characteristic vibration signatures; and (5) a simulated blockage of the exit chute cannot be detected
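
    A hedged sketch of the kind of spectral signature analysis described, assuming a digitized accelerometer signal: band-averaged power over the 0-10 kHz range is compared with a reference signature, and a strong deviation in any band flags a possible blockage. The band count, threshold factor and sampling rate are arbitrary choices, not values from the report.

    ```python
    import numpy as np

    def band_signature(accel, fs, n_bands=20, f_max=10_000.0):
        """Average spectral power in n_bands equal-width bands from 0 to f_max Hz."""
        accel = np.asarray(accel, float) - np.mean(accel)
        spectrum = np.abs(np.fft.rfft(accel)) ** 2
        freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)
        edges = np.linspace(0.0, f_max, n_bands + 1)
        return np.array([spectrum[(freqs >= lo) & (freqs < hi)].mean()
                         for lo, hi in zip(edges[:-1], edges[1:])])

    def deviates(signature, reference, factor=3.0):
        """Flag a possible anomaly when any band deviates strongly from the reference."""
        return bool(np.any(signature > factor * reference) or
                    np.any(signature < reference / factor))

    # Example: 1 s of accelerometer data at 25.6 kHz with energy near 2 kHz
    fs = 25_600
    t = np.arange(fs) / fs
    normal = np.sin(2 * np.pi * 2_000 * t) + 0.1 * np.random.randn(fs)
    blocked = 0.2 * np.sin(2 * np.pi * 2_000 * t) + 0.1 * np.random.randn(fs)
    ref = band_signature(normal, fs)
    print(deviates(band_signature(blocked, fs), ref))   # likely True
    ```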

  3. An Improved Surface Simplification Method for Facial Expression Animation Based on Homogeneous Coordinate Transformation Matrix and Maximum Shape Operator

    Directory of Open Access Journals (Sweden)

    Juin-Ling Tseng

    2016-01-01

    Full Text Available Facial animation is one of the most popular 3D animation topics researched in recent years. However, when using facial animation, a 3D facial animation model has to be stored. This 3D facial animation model requires many triangles to accurately describe and demonstrate facial expression animation because the face often presents a number of different expressions. Consequently, the costs associated with facial animation have increased rapidly. In an effort to reduce storage costs, researchers have sought to simplify 3D animation models using techniques such as Deformation Sensitive Decimation and Feature Edge Quadric. These studies have examined the problems of homogeneity of the local coordinate system between different expression models and of retention of the characteristics of the simplified models. This paper proposes a method that applies a Homogeneous Coordinate Transformation Matrix to solve the problem of homogeneity of the local coordinate system and a Maximum Shape Operator to detect shape changes in facial animation so as to properly preserve the features of facial expressions. Further, root mean square error and perceived quality error are used to compare the errors generated by different simplification methods in experiments. Experimental results show that, compared with Deformation Sensitive Decimation and Feature Edge Quadric, our method can not only reduce the errors caused by simplification of facial animation, but also retain more facial features.
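
    The first ingredient of the proposed method, expressing model alignment with a homogeneous coordinate transformation matrix, can be illustrated with a short sketch; the rotation, translation and vertices below are arbitrary, and the Maximum Shape Operator is not reproduced.

    ```python
    import numpy as np

    def homogeneous_matrix(rotation, translation):
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = translation
        return T

    def transform_vertices(vertices, T):
        """Apply a 4x4 homogeneous transform to an (N, 3) array of mesh vertices."""
        homo = np.hstack([vertices, np.ones((vertices.shape[0], 1))])   # N x 4
        mapped = homo @ T.T
        return mapped[:, :3] / mapped[:, 3:4]

    # Example: rotate an expression model 90 degrees about the z-axis and shift it
    Rz = np.array([[0.0, -1.0, 0.0],
                   [1.0,  0.0, 0.0],
                   [0.0,  0.0, 1.0]])
    T = homogeneous_matrix(Rz, translation=[0.0, 0.0, 5.0])
    verts = np.array([[1.0, 0.0, 0.0],
                      [0.0, 2.0, 0.0]])
    print(transform_vertices(verts, T))   # [[0, 1, 5], [-2, 0, 5]]
    ```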

  4. Processing of unattended facial emotions: a visual mismatch negativity study.

    Science.gov (United States)

    Stefanics, Gábor; Csukly, Gábor; Komlósi, Sarolta; Czobor, Pál; Czigler, István

    2012-02-01

    Facial emotions express our internal states and are fundamental in social interactions. Here we explore whether the repetition of unattended facial emotions builds up a predictive representation of frequently encountered emotions in the visual system. Participants (n=24) were presented peripherally with facial stimuli expressing emotions while they performed a visual detection task presented in the center of the visual field. Facial stimuli consisted of four faces of different identity, but expressed the same emotion (happy or fearful). Facial stimuli were presented in blocks of oddball sequence (standard emotion: p=0.9, deviant emotion: p=0.1). Event-related potentials (ERPs) to the same emotions were compared when the emotions were deviant and standard, respectively. We found visual mismatch negativity (vMMN) responses to unattended deviant emotions in the 170-360 ms post-stimulus range over bilateral occipito-temporal sites. Our results demonstrate that information about the emotional content of unattended faces presented at the periphery of the visual field is rapidly processed and stored in a predictive memory representation by the visual system. We also found evidence that differential processing of deviant fearful faces starts already at 70-120 ms after stimulus onset. This finding shows a 'negativity bias' under unattended conditions. Differential processing of fearful deviants was more pronounced in the right hemisphere in the 195-275 ms and 360-390 ms intervals, whereas processing of happy deviants evoked a larger differential response in the left hemisphere in the 360-390 ms range, indicating differential hemispheric specialization for automatic processing of positive and negative affect. Copyright © 2011 Elsevier Inc. All rights reserved.
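
    A minimal sketch of the vMMN computation implied by this design, assuming baseline-corrected epochs and trial labels are already available: epochs are averaged per condition and the standard response is subtracted from the deviant response. The epoch counts and sampling rate are illustrative only.

    ```python
    import numpy as np

    def difference_wave(epochs, labels):
        """Compute the deviant-minus-standard difference wave (the vMMN estimate).

        epochs : array of shape (n_trials, n_samples), baseline-corrected EEG epochs
        labels : array of 'deviant' / 'standard' strings, one per trial
        """
        epochs = np.asarray(epochs, float)
        labels = np.asarray(labels)
        deviant_avg = epochs[labels == "deviant"].mean(axis=0)
        standard_avg = epochs[labels == "standard"].mean(axis=0)
        return deviant_avg - standard_avg

    # Toy example with 10% deviants, 500 ms epochs sampled at 500 Hz
    rng = np.random.default_rng(1)
    n_trials, n_samples = 400, 250
    labels = rng.choice(["standard", "deviant"], size=n_trials, p=[0.9, 0.1])
    epochs = rng.normal(0, 1, (n_trials, n_samples))
    vmmn = difference_wave(epochs, labels)
    # mean amplitude in a 170-360 ms window (samples 85-180 at 500 Hz)
    print(vmmn.shape, vmmn[85:180].mean())
    ```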

  5. Culture, gender and health care stigma: Practitioners' response to facial masking experienced by people with Parkinson's disease.

    Science.gov (United States)

    Tickle-Degnen, Linda; Zebrowitz, Leslie A; Ma, Hui-ing

    2011-07-01

    Facial masking in Parkinson's disease is the reduction of automatic and controlled expressive movement of facial musculature, creating an appearance of apathy, social disengagement or compromised cognitive status. Research in western cultures demonstrates that practitioners form negatively biased impressions associated with patient masking. Socio-cultural norms about facial expressivity vary according to culture and gender, yet little research has studied the effect of these factors on practitioners' responses toward patients who vary in facial expressivity. This study evaluated the effect of masking, culture and gender on practitioners' impressions of patient psychological attributes. Practitioners (N = 284) in the United States and Taiwan judged 12 Caucasian American and 12 Asian Taiwanese women and men patients in video clips from interviews. Half of each patient group had a moderate degree of facial masking and the other half had near-normal expressivity. Practitioners in both countries judged patients with higher masking to be more depressed and less sociable, less socially supportive, and less cognitively competent than patients with lower masking. Practitioners were more biased by masking when judging the sociability of the American patients, and American practitioners' judgments of patient sociability were more negatively biased in response to masking than were those of Taiwanese practitioners. Practitioners were more biased by masking when judging the cognitive competence and social supportiveness of the Taiwanese patients, and Taiwanese practitioners' judgments of patient cognitive competence were more negatively biased in response to masking than were those of American practitioners. The negative response to higher masking was stronger in practitioner judgments of women than men patients, particularly American patients. The findings suggest local cultural values as well as ethnic and gender stereotypes operate on practitioners' use of facial

  6. Gd-DTPA-enhanced MR imaging in facial nerve paralysis

    International Nuclear Information System (INIS)

    Tien, R.D.; Dillon, W.P.

    1989-01-01

    Gd-DTPA-enhanced MR imaging was used to evaluate 11 patients with facial nerve paralysis (five acute idiopathic facial palsy (Bell palsy), three chronic recurrent facial palsy, one acute facial palsy after local radiation therapy, one chronic facial dyskinesia, and one facial neuroma). In eight of 11 patients, there was marked enhancement of the infratemporal facial nerve from the labyrinthine segment to the stylomastoid foramen. Two patients had additional contrast enhancement in the internal auditory canal segment. In one patient, enhancement persisted (but to a lesser degree) 8 weeks after symptoms had resolved. In one patient, no enhancement was seen 15 months after resolution of Bell palsy. The facial neuroma was seen as a focal nodular enhancement in the mastoid segment of the facial nerve.

  7. Pain and disgust: the facial signaling of two aversive bodily experiences.

    Directory of Open Access Journals (Sweden)

    Miriam Kunz

    Full Text Available The experience of pain and disgust share many similarities, given that both are aversive experiences resulting from bodily threat and leading to defensive reactions. The aim of the present study was to investigate whether facial expressions are distinct enough to encode the specific quality of pain and disgust or whether they just encode the similar negative valence and arousal level of both states. In sixty participants pain and disgust were induced by heat stimuli and pictures, respectively. Facial responses (Facial Action Coding System) as well as subjective responses were assessed. Our main findings were that nearly the same single facial actions were elicited during pain and disgust experiences. However, these single facial actions were displayed with different strength and were differently combined depending on whether pain or disgust was experienced. Whereas pain was mostly encoded by contraction of the muscles surrounding the eyes (by itself or in combination with contraction of the eyebrows); disgust was mainly accompanied by contraction of the eyebrows and, in contrast to pain, by raising of the upper lip as well as the combination of upper lip raise and eyebrow contraction. Our data clearly suggests that facial expressions seem to be distinct enough to encode not only the general valence and arousal associated with these two bodily aversive experiences, namely pain and disgust, but also the specific origin of the threat to the body. This implies that the differential decoding of these two states by an observer is possible without additional verbal or contextual information, which is of special interest for clinical practice, given that raising awareness in observers about these distinct differences could help to improve the detection of pain in patients who are not able to provide a self-report of pain (e.g., patients with dementia).

  8. Facial Animations: Future Research Directions & Challenges

    Science.gov (United States)

    Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Rehman, Amjad; Basori, Ahmad Hoirul

    2014-06-01

    Computer facial animation is now used in a wide range of fields, from computer games and film to interactive multimedia, bringing human and social research into contact with these media. Authoring complex and subtle facial expressions remains challenging and fraught with problems; as a result, most content authored with general-purpose computer animation techniques is limited in the quality and quantity of facial animation it can produce. Despite growing computing power, improved understanding of the face, increasing software sophistication and emerging face-centric methods, the field is still immature. This paper therefore defines and categorizes the work surveyed from current facial animation research in order to describe the recent state of the field, observed bottlenecks and developing techniques. It further presents a real-time simulation model of human worry and howling, with a detailed discussion of the perception of astonishment, sorrow, annoyance and panic.

  9. Ultraestrutura do nervo facial intratemporal em pacientes com paralisia facial idiopática: estudo de evidências de infecção viral Intratemporal facial nerve ultrastructure in patients with idiopathic facial paralysis: viral infection evidence study

    Directory of Open Access Journals (Sweden)

    Rosangela Aló Maluza Florez

    2010-10-01

    Full Text Available The etiology of idiopathic peripheral facial palsy (IPFP) is still unknown; however, some authors suggest the possibility of a viral infection. AIM: To analyze the ultrastructure of the facial nerve in search of viral evidence that might provide etiological data. MATERIAL AND METHODS: We studied 20 patients with peripheral facial palsy (PFP) of moderate to severe degree, of both genders, aged 18-60 years, from the Outpatient Clinic for Facial Nerve Disorders. The patients were divided into two groups - Study: eleven patients with IPFP, and Control: nine patients with traumatic or tumor-related peripheral facial palsy. We studied fragments of the facial nerve sheath or fragments of its stumps which, during facial nerve repair surgery, would otherwise have been discarded or sent for histopathological examination. The tissue was fixed in 2% glutaraldehyde and analyzed by transmission electron microscopy. RESULTS: In the study group we observed intense cellular repair activity, with an increase in collagen fibers and fibroblasts with well-developed organelles, free of viral particles. In the control group this repair activity was not evident, but no viral particles were observed either. CONCLUSION: No viral particles were found; however, there was evidence of intense repair activity or viral infection.

  10. Evolution of facial color pattern complexity in lemurs.

    Science.gov (United States)

    Rakotonirina, Hanitriniaina; Kappeler, Peter M; Fichtel, Claudia

    2017-11-09

    Interspecific variation in facial color patterns across New and Old World primates has been linked to species recognition and group size. Because group size has opposite effects on interspecific variation in facial color patterns in these two radiations, a study of the third large primate radiation may shed light on convergences and divergences in this context. We therefore compiled published social and ecological data and analyzed facial photographs of 65 lemur species to categorize variation in hair length, hair and skin coloration as well as color brightness. Phylogenetically controlled analyses revealed that group size and the number of sympatric species did not influence the evolution of facial color complexity in lemurs. Climatic factors, however, influenced facial color complexity, pigmentation and hair length in a few facial regions. Hair length in two facial regions was also correlated with group size and may facilitate individual recognition. Since phylogenetic signals were moderate to high for most models, genetic drift may have also played a role in the evolution of facial color patterns of lemurs. In conclusion, social factors seem to have played only a subordinate role in the evolution of facial color complexity in lemurs, and, more generally, group size appears to have no systematic functional effect on facial color complexity across all primates.

  11. Detecting elementary arm movements by tracking upper limb joint angles with MARG sensors

    OpenAIRE

    Mazomenos, Evangelos B.; Biswas, Dwaipayan; Cranny, Andy; Rajan, Amal; Maharatna, Koushik; Achner, Josy; Klemke, Jasmin; Jobges, Michael; Ortmann, Steffen; Langendorfer, Peter

    2015-01-01

    This paper reports an algorithm for the detection of three elementary upper limb movements, i.e., reach and retrieve, bend the arm at the elbow and rotation of the arm about the long axis. We employ two MARG sensors, attached at the elbow and wrist, from which the kinematic properties (joint angles, position) of the upper arm and forearm are calculated through data fusion using a quaternion-based gradient-descent method and a two-link model of the upper limb. By studying the kinematic pattern...
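
    One step consistent with the approach described, computing a joint angle from the relative rotation between two sensor orientation quaternions, can be sketched as follows; the quaternion fusion itself and the two-link model are not reproduced, and the example orientations are made up.

    ```python
    import numpy as np

    def quat_conj(q):
        w, x, y, z = q
        return np.array([w, -x, -y, -z])

    def quat_mul(a, b):
        # Hamilton product of two quaternions (w, x, y, z)
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def relative_angle_deg(q_upper, q_forearm):
        """Angle of the rotation taking the upper-arm frame to the forearm frame."""
        q_rel = quat_mul(quat_conj(q_upper), q_forearm)
        q_rel = q_rel / np.linalg.norm(q_rel)
        angle = 2.0 * np.arctan2(np.linalg.norm(q_rel[1:]), abs(q_rel[0]))
        return np.degrees(angle)

    # Example: forearm rotated 90 degrees about the elbow axis relative to the upper arm
    q_upper = np.array([1.0, 0.0, 0.0, 0.0])                              # identity
    q_forearm = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0, 0.0])  # 90 deg about x
    print(relative_angle_deg(q_upper, q_forearm))   # ~90.0
    ```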

  12. Facial detection using deep learning

    Science.gov (United States)

    Sharma, Manik; Anuradha, J.; Manne, H. K.; Kashyap, G. S. C.

    2017-11-01

    In the recent past, we have observed that Facebook has developed an uncanny ability to recognize people in photographs. Previously, we had to tag people in photos by clicking on them and typing their name. Now, as soon as we upload a photo, Facebook tags everyone on its own. Facebook can recognize faces with 98% accuracy, which is pretty much as good as humans can do. This technology is called face detection. Face detection is a popular topic in biometrics, and surveillance cameras are placed in public places for video capture as well as security purposes. The main advantages of this biometric over others are its uniqueness and acceptability, and both speed and accuracy are needed for identification. Face detection is really a series of several related problems: first, look at a picture and find all the faces in it; second, focus on each face and understand that even if a face is turned in a weird direction or is in bad lighting, it is still the same person; third, select features that can be used to identify each face uniquely, such as the size of the eyes, the face, etc.; finally, compare these features to the data we have to find the person's name. As a human, your brain is wired to do all of this automatically and instantly; in fact, humans are almost too good at recognizing faces. Computers are not capable of this kind of high-level generalization, so we must teach them how to do each step in this process separately. The growth of face detection is largely driven by growing applications such as credit card verification, surveillance video images, and authentication for banking and security system access.
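
    The final "compare features" step can be sketched under the assumption that an upstream network has already produced a fixed-length embedding per detected face; the embeddings, names and threshold below are hypothetical, and the face detection and feature extraction stages are not shown.

    ```python
    import numpy as np

    def identify(probe, gallery, names, threshold=0.6):
        """Return the best-matching name for a probe embedding, or None.

        probe   : (d,) embedding of the detected face
        gallery : (n, d) embeddings of enrolled faces
        names   : list of n enrolled names
        """
        probe = probe / np.linalg.norm(probe)
        gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        sims = gallery @ probe                      # cosine similarities
        best = int(np.argmax(sims))
        return names[best] if sims[best] >= threshold else None

    # Hypothetical 4-D embeddings (real systems use 128 or more dimensions)
    gallery = np.array([[0.9, 0.1, 0.0, 0.1],
                        [0.0, 0.8, 0.5, 0.1],
                        [0.1, 0.0, 0.1, 0.9]])
    names = ["alice", "bob", "carol"]
    print(identify(np.array([0.85, 0.15, 0.05, 0.1]), gallery, names))   # alice
    ```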

  13. Emotional Intelligence and Mismatching Expressive and Verbal Messages: A Contribution to Detection of Deception

    Science.gov (United States)

    Wojciechowski, Jerzy; Stolarski, Maciej; Matthews, Gerald

    2014-01-01

    Processing facial emotion, especially mismatches between facial and verbal messages, is believed to be important in the detection of deception. For example, emotional leakage may accompany lying. Individuals with superior emotion perception abilities may then be more adept in detecting deception by identifying mismatch between facial and verbal messages. Two personal factors that may predict such abilities are female gender and high emotional intelligence (EI). However, evidence on the role of gender and EI in detection of deception is mixed. A key issue is that the facial processing skills required to detect deception may not be the same as those required to identify facial emotion. To test this possibility, we developed a novel facial processing task, the FDT (Face Decoding Test) that requires detection of inconsistencies between facial and verbal cues to emotion. We hypothesized that gender and ability EI would be related to performance when cues were inconsistent. We also hypothesized that gender effects would be mediated by EI, because women tend to score as more emotionally intelligent on ability tests. Data were collected from 210 participants. Analyses of the FDT suggested that EI was correlated with superior face decoding in all conditions. We also confirmed the expected gender difference, the superiority of high EI individuals, and the mediation hypothesis. Also, EI was more strongly associated with facial decoding performance in women than in men, implying there may be gender differences in strategies for processing affective cues. It is concluded that integration of emotional and cognitive cues may be a core attribute of EI that contributes to the detection of deception. PMID:24658500

  14. Emotional intelligence and mismatching expressive and verbal messages: a contribution to detection of deception.

    Directory of Open Access Journals (Sweden)

    Jerzy Wojciechowski

    Full Text Available Processing facial emotion, especially mismatches between facial and verbal messages, is believed to be important in the detection of deception. For example, emotional leakage may accompany lying. Individuals with superior emotion perception abilities may then be more adept in detecting deception by identifying mismatch between facial and verbal messages. Two personal factors that may predict such abilities are female gender and high emotional intelligence (EI). However, evidence on the role of gender and EI in detection of deception is mixed. A key issue is that the facial processing skills required to detect deception may not be the same as those required to identify facial emotion. To test this possibility, we developed a novel facial processing task, the FDT (Face Decoding Test) that requires detection of inconsistencies between facial and verbal cues to emotion. We hypothesized that gender and ability EI would be related to performance when cues were inconsistent. We also hypothesized that gender effects would be mediated by EI, because women tend to score as more emotionally intelligent on ability tests. Data were collected from 210 participants. Analyses of the FDT suggested that EI was correlated with superior face decoding in all conditions. We also confirmed the expected gender difference, the superiority of high EI individuals, and the mediation hypothesis. Also, EI was more strongly associated with facial decoding performance in women than in men, implying there may be gender differences in strategies for processing affective cues. It is concluded that integration of emotional and cognitive cues may be a core attribute of EI that contributes to the detection of deception.

  15. Control de accesos mediante reconocimiento facial

    OpenAIRE

    Rodríguez Rodríguez, Bruno

    2011-01-01

    This paper outlines the work carried out in the attempt to create a facial recognition system.

  16. SparCLeS: dynamic l₁ sparse classifiers with level sets for robust beard/moustache detection and segmentation.

    Science.gov (United States)

    Le, T Hoang Ngan; Luu, Khoa; Savvides, Marios

    2013-08-01

    Robust facial hair detection and segmentation is a highly valued soft biometric attribute for carrying out forensic facial analysis. In this paper, we propose a novel and fully automatic system, called SparCLeS, for beard/moustache detection and segmentation in challenging facial images. SparCLeS uses the multiscale self-quotient (MSQ) algorithm to preprocess facial images and deal with illumination variation. Histogram of oriented gradients (HOG) features are extracted from the preprocessed images and a dynamic sparse classifier is built using these features to classify a facial region as either containing skin or facial hair. A level set based approach, which makes use of the advantages of both global and local information, is then used to segment the regions of a face containing facial hair. Experimental results demonstrate the effectiveness of our proposed system in detecting and segmenting facial hair regions in images drawn from three databases, i.e., the NIST Multiple Biometric Grand Challenge (MBGC) still face database, the NIST Color Facial Recognition Technology FERET database, and the Labeled Faces in the Wild (LFW) database.
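
    As a rough illustration of the illumination-normalization stage, the sketch below computes a single-scale self-quotient image (the paper uses a multiscale variant); the HOG extraction, dynamic sparse classifier and level-set segmentation are not reproduced, and the smoothing scale is an arbitrary choice.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def self_quotient(image, sigma=8.0, eps=1e-6):
        """Single-scale self-quotient image: the input divided by a smoothed copy.

        Attenuates slowly varying illumination while preserving local texture,
        which is the kind of signal subsequent gradient features are computed on.
        """
        image = np.asarray(image, dtype=float)
        smoothed = gaussian_filter(image, sigma)
        return image / (smoothed + eps)

    # Toy example: a textured patch under a strong left-to-right illumination gradient
    rng = np.random.default_rng(0)
    texture = rng.random((64, 64))
    illumination = np.linspace(0.2, 1.0, 64)[None, :]
    normalized = self_quotient(texture * illumination)
    print(normalized.shape, round(float(normalized.mean()), 2))   # (64, 64), roughly 1
    ```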

  17. Dermal fillers for facial soft tissue augmentation.

    Science.gov (United States)

    Dastoor, Sarosh F; Misch, Carl E; Wang, Hom-Lay

    2007-01-01

    Nowadays, patients are demanding not only enhancement to their dental (micro) esthetics, but also their overall facial (macro) esthetics. Soft tissue augmentation via dermal filling agents may be used to correct facial defects such as wrinkles caused by age, gravity, and trauma; thin lips; asymmetrical facial appearances; buccal fold depressions; and others. This article will review the pathogenesis of facial wrinkles, history, techniques, materials, complications, and clinical controversies regarding dermal fillers for soft tissue augmentation.

  18. Cosmetic Detection Framework for Face and Iris Biometrics

    Directory of Open Access Journals (Sweden)

    Omid Sharifi

    2018-04-01

    Full Text Available Cosmetics pose challenges to the recognition performance of face and iris biometric systems owing to their ability to alter natural facial and iris patterns. Facial makeup and iris contact lenses are considered the commonly applied cosmetics for the face and iris in this study. The present work aims to present a novel solution for the detection of cosmetics in both face and iris biometrics by the fusion of texture, shape and color descriptors of images. The proposed cosmetic detection scheme combines the micro-texton information from the local primitives of texture descriptors with the color spaces obtained from overlapped blocks in order to achieve better detection of spots, flat areas, edges, edge ends, curves, appearance and colors. The proposed cosmetic detection scheme was applied to the YMU (YouTube Makeup) facial makeup database and the IIIT-Delhi Contact Lens iris database. The results demonstrate that the proposed cosmetic detection scheme performs significantly better than the other schemes implemented in this study.
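
    A hedged sketch of the "color descriptors from overlapped blocks" idea: the face region is divided into overlapping blocks and per-channel color histograms are concatenated into one feature vector. Block size, step and bin count are arbitrary, and the texture and shape descriptors and the final classifier are not shown.

    ```python
    import numpy as np

    def overlapped_color_histograms(image, block=32, step=16, bins=8):
        """Concatenate per-channel color histograms from overlapping square blocks.

        image : (H, W, 3) array with values in [0, 255]
        """
        image = np.asarray(image)
        features = []
        for y in range(0, image.shape[0] - block + 1, step):
            for x in range(0, image.shape[1] - block + 1, step):
                patch = image[y:y + block, x:x + block]
                for c in range(3):                      # one histogram per channel
                    hist, _ = np.histogram(patch[..., c], bins=bins, range=(0, 256))
                    features.append(hist / hist.sum())  # normalize each histogram
        return np.concatenate(features)

    # Toy example: a random 64x64 RGB patch -> 9 blocks x 3 channels x 8 bins = 216 values
    rng = np.random.default_rng(0)
    face_patch = rng.integers(0, 256, size=(64, 64, 3))
    print(overlapped_color_histograms(face_patch).shape)   # (216,)
    ```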

  19. Attention modulates sensory suppression during back movements.

    Science.gov (United States)

    Van Hulle, Lore; Juravle, Georgiana; Spence, Charles; Crombez, Geert; Van Damme, Stefaan

    2013-06-01

    Tactile perception is often impaired during movement. The present study investigated whether such sensory suppression also occurs during back movements, and whether this would be modulated by attention. In two tactile detection experiments, participants simultaneously engaged in a movement task, in which they executed a back-bending movement, and a perceptual task, consisting of the detection of subtle tactile stimuli administered to their upper or lower back. The focus of participants' attention was manipulated by raising the probability that one of the back locations would be stimulated. The results revealed that tactile detection was suppressed during the execution of the back movements. Furthermore, the results of Experiment 2 revealed that when the stimulus was always presented to the attended location, tactile suppression was substantially reduced, suggesting that sensory suppression can be modulated by top-down attentional processes. The potential of this paradigm for studying tactile information processing in clinical populations is discussed. Copyright © 2013 Elsevier Inc. All rights reserved.

  20. Nablus mask-like facial syndrome

    DEFF Research Database (Denmark)

    Allanson, Judith; Smith, Amanda; Hare, Heather

    2012-01-01

    Nablus mask-like facial syndrome (NMLFS) has many distinctive phenotypic features, particularly tight glistening skin with reduced facial expression, blepharophimosis, telecanthus, bulky nasal tip, abnormal external ear architecture, upswept frontal hairline, and sparse eyebrows. Over the last few...